A few months back, I asked ChatGPT several variations of the same question: what is the friendliest college? I got a different answer each time.
In my work, AI is most noticeable when students send me their supplementals. It’s always obvious to me when they have used AI to write something. The problem is that students fail to understand that an AO wants to hear a teenage voice. Teenagers are not perfect writers, and their thoughts are sometimes not expressed well. That’s still a thousand times better than some manicured, AI-generated supplemental.
Based on how AI works, that is exactly the expected behavior. It doesn’t “know” what the friendliest college is any more than U.S. News does. Any variation in the string of letters used in the query will most likely result in different output, for the reason @my3girls’ daughter correctly noted:
It knows nothing definitive about colleges, just how to parse what was typed. Type the question differently, expect a different answer.
Actually, even typing exactly the same thing multiple times can produce variations because AI is not hard-wired to exactly pattern-match the way conventional programs are.
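To make that concrete, here’s a toy sketch in Python (emphatically not how ChatGPT is actually built; the colleges and probabilities below are invented). The model picks each next word by sampling from a probability distribution rather than looking up a fixed answer, so running the exact same prompt repeatedly can produce different results:

```python
import random

# Hypothetical, made-up probabilities for the next word after a prompt like
# "The friendliest college is ..."
next_word_probs = {
    "College A": 0.30,
    "College B": 0.25,
    "College C": 0.25,
    "College D": 0.20,
}

def sample_next_word():
    words = list(next_word_probs)
    weights = list(next_word_probs.values())
    # Sample instead of always taking the single most likely word, which is
    # why repeating the exact same prompt can yield a different answer.
    return random.choices(words, weights=weights, k=1)[0]

# "Ask" the same question five times; expect the answer to vary.
for _ in range(5):
    print("The friendliest college is", sample_next_word())
```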
I thought it would be interesting to ask ChatGPT why it gives inaccurate answers:
That’s a fair question—and an important one.
Despite how it might seem, I don’t know things the way a person does. I generate responses based on patterns in the data I was trained on, not direct access to facts or real-time understanding. Here are the most common reasons I might give an inaccurate answer:
1. Training Data Gaps or Biases: My responses are shaped by the data I was trained on. If that data was wrong, incomplete, or biased, it can lead me astray.
2. Outdated Knowledge: My core knowledge has a cutoff date (June 2024 unless I use live tools), so I might miss recent developments or changes.
3. Ambiguous or Missing Context: If I don’t fully understand your question or lack key context, I might make incorrect assumptions to fill in the blanks.
4. Overgeneralizing: Sometimes I simplify or generalize too much in an effort to be helpful, and that can sacrifice accuracy.
5. Hallucination: Occasionally, I generate something that sounds plausible but isn’t grounded in real information. This is a known issue with language models and one researchers are actively working to reduce.
Then I asked: Can I rely on your answer?
That depends—on what kind of answer you’re looking for, and how you intend to use it. Here’s a breakdown:
You can generally rely on me for:
Well-established facts (e.g., historical dates, basic science, grammar rules)
Writing help (emails, letters, essays, prompts, edits)
Ideas and brainstorming (recipes, book titles, gift suggestions)
Summarizing and organizing information
Technical explanations (code, math, concepts—though double-check for high-stakes use)
You should double-check if the answer involves:
Medical, legal, or financial advice
I can give general info, but I’m not a doctor, lawyer, or accountant. Human expertise is essential here.
Recent events or news
My core knowledge cuts off in 2024 unless I use live tools—and even then, source quality matters.
Highly personalized or situational decisions
I don’t know your full story, values, or emotional landscape unless you tell me. My suggestions might miss the mark without that.
Citations or sources
I can’t always provide reliable sources unless you ask explicitly. I may “hallucinate” a source if I try to guess.
Bottom line:
You can rely on me as a smart, fast assistant, not as a final authority. I’m strongest when you use me to explore, refine, or augment your thinking—not to replace it. If you ever want help double-checking or improving an answer I gave, just ask. That’s what I’m here for.
This! I think people generally assume that AI is supposed to be some all-knowing expert on all the things. Most of the content generation models that people are accessing (Copilot, ChatGPT…) are simply designed to identify tokens (words, or parts of words) and predict the best way to string them together (based on whatever the model has been trained on) to generate responses.
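For anyone curious what “predict the best way to string tokens together” looks like mechanically, here is a deliberately tiny sketch: a word-level bigram counter (nowhere near a real neural model, and the “training corpus” is made up) that learns which word tends to follow which and then generates text by sampling.

```python
import random
from collections import defaultdict

# A tiny made-up "training corpus"; real models train on vastly more text
# and work on sub-word tokens, not whole words.
corpus = (
    "the college is friendly . the campus is friendly . "
    "the college is large . the campus is beautiful ."
).split()

# Count which words follow which (a bigram table).
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start, length=6):
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # sampled, so output varies per run
    return " ".join(words)

print(generate("the"))
print(generate("the"))  # run again; likely a different string
```

The point isn’t the toy itself; it’s that nothing in there “knows” anything about colleges. It just continues patterns it has seen.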
It’s definitely a concern for consultants about AI taking their jobs. After all, in consulting the ability to give a confident explanation is more important than being right or wrong…
I always speak to my doctor as well, of course, but AI has been quite helpful in giving me a general idea of what my test results mean and tips on what to ask my doctor about while I’m waiting for my appointment.
Thanks for all the responses, obviously I didn’t understand how AI works. And if that’s true for me, it is for a LOT of people.
NAMI Maine’s Board Chair is doing a great job and decided that each committee needs a charter. She used AI and came up with all the charters - they’re actually pretty good! Of course, each committee then refined its charter.
But I am learning also. I want to see more ideas of how people are using it. I have a trip coming up and I have an EV. I need to plot my route, but Google Maps in the car does this for my specific car (a Polestar 2). Maybe as a test I will try this out. I want to stop only at Level 3 (super-fast) chargers. I wonder how it will compare to what I currently have.
So I did a lecture on AI and used AI for the ordering of the material and to “dumb down” some concepts, and also for definitions. I asked it to explain the material to my audience so a 12-year-old would understand it. Sorta worked.
I used to hate writing outlines for reports, so for fun I asked AI to write an outline for a 10-page report on polymer concrete (a material that my dad helped develop starting in the '60s, and the subject of my master’s thesis). It did a pretty good job, and then asked if I wanted a full draft of the report. Sure! It’s just amazing how fast it cranks it out. It’s a decent report!
Actually, everyone should start asking their AI tool what it does best, what it doesn’t do well, where to be suspicious, and how best to interact with it. Even after all my questions, I asked ChatGPT: How do I know you’re not lying? (Of course, I know it doesn’t understand the concept, but its attempt to answer was still informative.)
A very fair—and wise—question. The short answer is:
I don’t lie intentionally. I don’t have motives, ego, or a sense of self. I don’t want to deceive you, because I don’t want anything at all. I generate responses based on patterns in data—not strategy, malice, or self-interest. But here’s the catch: I can still say things that are wrong. And that might feel like lying. Here’s why that happens:
I don’t know when I’m wrong. I don’t have a built-in fact-checker or “gut instinct.” If the training data said something confidently but incorrectly, I might repeat it with the same confidence.
I might “hallucinate” details. Sometimes I make things up—like fake sources, incorrect numbers, or plausible-sounding quotes—without realizing they’re false. This isn’t lying. It’s a flaw in how language models generate text.
I’m designed to be helpful. That means I may err on the side of answering even when the information is incomplete or unclear, instead of saying “I don’t know.” That can lead to misleading or overconfident responses.
So how can you protect yourself?
Ask for sources. I can often help verify something if prompted.
Cross-check important info. Especially for health, money, or legal topics.
Watch for too-good-to-be-true clarity. If something seems overly neat or perfect, it might be simplified—or just wrong.
Bottom line: I don’t lie, but I’m not infallible. Think of me like a very smart assistant with no ego—but also no judgment. You’re the one with wisdom, instincts, and values. If you use me with curiosity and caution, I’ll serve you well.
Interacting directly with ChatGPT or other AI products may have many issues, but at least in this case the user chose to interact with an AI product.
What I really hate is how AI is now constantly inserted into other everyday tasks without being asked. When I search for something on Google, the first thing I see, taking up more than half of my laptop screen, is the “AI Overview” (and I’m not using AI mode…), which is frequently misleading or wrong and frequently uses made-up “sources” (real links, but often not related to my search). Even after I scroll down past this crap, Google then offers additional sections with questions and AI-generated “answers.”
Many people (who are not AI skeptics like me) actually use the “AI Overview” every time they are looking for information, and you will often see people screenshot or quote it as a “source” (we see this on CC all the time).
So what happens when AI starts using a lot of AI-generated content as sources? It’s not a big problem yet.
However, we’ve probably all had the experience of googling something and having many of the results be “articles” that were clearly written by AI. I would expect that to increase over time. And even if articles aren’t transparently AI-generated, humans are disguising their use of AI to “write” things without thoroughly fact-checking.
At some point, AI will start relying on AI-generated sources and there will be a bunch of recursion. If no one fixes errors, they’ll be compounded. I would expect that, unchecked, it will become a runaway process with increasingly unreliable info being spat out.
I’d love to see a plot of what percentage of online info is expected to be AI-generated vs. time. What’s the exponent on that?
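I don’t know the real numbers either, but a back-of-the-envelope toy model (every value below is invented) shows why even a modest growth-rate gap matters: if AI content is added faster than human content, its share climbs toward 100% within a decade under these assumptions.

```python
# Purely illustrative toy model; all numbers are invented.
human_pages = 100.0   # arbitrary starting amount of human-written content
ai_pages = 1.0        # arbitrary starting amount of AI-generated content
human_growth = 1.02   # assume human content grows ~2% per year
ai_growth = 1.60      # assume AI content grows ~60% per year (pure guess)

for year in range(11):
    share = ai_pages / (ai_pages + human_pages)
    print(f"year {year}: {share:.1%} of online content is AI-generated")
    human_pages *= human_growth
    ai_pages *= ai_growth
```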
I’ve personally never used ChatGPT or the like. I wish I could turn the AI summary off on Google, but I admit to occasionally using it (though I don’t trust it much).
Oh, I’m having fun with this. I need to present a report to the Board of NAMI Maine on a committee’s activities. I uploaded the minutes from our last two meetings and asked for a summary. It saved me some time.
I don’t know if you mentioned upthread which tool you are using, but unless it is a proprietary, closed model (unlikely for an individual or small org), you’ll want to avoid uploading any info that shouldn’t become part of the public domain, scrubbing names and such beforehand if needed.
My beef with AI is when Press 1, Press 2, Press 3, etc., do not address your issue and there is no “Press 4 to speak to a representative”. You’re pretty much stuck with no resolution to your issue.