How to Deal With AI When It Confidently Gets Things... TOTALLY WRONG
🕓 Read time: ~3 min
Have you ever had AI give you an answer that really sounded smart but turned out to be completely false? I guess we can all relate, right?
That’s not just a bug. It’s something called an AI hallucination.
When people say AI is “hallucinating,” they don’t mean it’s seeing purple elephants. It means the system gave you output that looks accurate, sounds confident… but isn’t true.
Why does AI hallucinate?
Because AI doesn’t know things. It doesn’t really check facts or understand meaning (even if it pretends to know it all). It simply predicts what text is likely to come next, based on patterns in massive data sets.
So when it hits a gap (an unclear question, a niche topic, missing information), it will likely invent a detail to keep the conversation flowing.
It’s not trying to deceive you. It’s doing what it was built to do: predict text that feels right. But that “feels right” output can still be dangerously wrong, especially if it’s presented with confidence.
Remember studying probabilities in high school? Now you finally know what those are good for 🤓
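To make the probability idea concrete, here's a toy sketch (the word list and probabilities are made up for illustration): the "model" never checks facts, it just samples the continuation that patterns make most likely, which is usually right and occasionally, confidently, wrong.

```python
import random

# Toy next-token prediction for "The capital of France is ..."
# The probabilities are invented for illustration; a real model
# learns them from patterns in its training data.
next_word_probs = {
    "Paris": 0.70,   # very common continuation, usually correct
    "Lyon": 0.20,    # plausible-sounding, but wrong
    "Berlin": 0.10,  # also wrong, yet still has nonzero probability
}

def predict_next(probs):
    # Sample a word weighted by its learned probability.
    # No fact-checking happens at any point.
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights)[0]

print(predict_next(next_word_probs))
```

Run it a few times: mostly "Paris", but sometimes a wrong answer delivered with exactly the same confidence. That's a hallucination in miniature.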
When do hallucinations happen?
You’ll most often encounter hallucinations when:
- Your prompt is vague and lacks context
- You give the AI complex or conflicting objectives
- You ask it to generate highly specific facts or insights without providing actual data
In short, hallucinations in AI are caused by a combination of training data quality, model design, prompt clarity, and the inherent limitation of generating outputs based on learned probabilities rather than true understanding or verification.
💡 How to Limit Hallucinations: 4 Practical Tips
1. Give Clear, Specific Prompts
Garbage in = garbage out. The more generic your prompt, the more AI fills in the blanks with guesses. Be specific about what you want, and for whom (= your audience). Include context, insights, and perspective, as well as the desired output format and tone. If possible, add examples.
Instead of: “Write a sales email for a new training course.”
Try: “Write a 3-paragraph email inviting executive coaches to a live AI training titled "Awesome Sales Training", covering topics ABC. Keep it warm, clear, and benefits-focused.”
2. Use “Validation” Prompts to Check AI’s Thinking
Before you trust the output, try the following prompts:
- “If you’re making assumptions, say so.”
- “Double-check your response for accuracy.” (use a reasoning model for extra power!)
- “Only use the information I’ve given you; don’t add new facts.”
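If you find yourself retyping these validation prompts, you can bake them into a tiny helper that appends them to every request. This is just a sketch of the idea; the function name is mine, not part of any AI tool's API:

```python
def add_validation_guardrails(prompt: str) -> str:
    # Hypothetical helper: tacks the validation prompts from this
    # article onto any request so every prompt gets the same checks.
    guardrails = [
        "If you're making assumptions, say so.",
        "Double-check your response for accuracy.",
        "Only use the information I've given you; don't add new facts.",
    ]
    return prompt + "\n\n" + "\n".join(guardrails)

print(add_validation_guardrails("Summarize last quarter's sales figures."))
```

The output is your original prompt followed by the three guardrail lines, ready to paste into whichever AI tool you use.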
3. Ask AI to Reveal its Sources
Instruct the AI to reveal its sources, then make sure to check them. Just because it includes a link doesn’t mean the link is real or the information accurate.
4. Review, Verify, and Edit Before Sharing
Even a great-sounding answer can be wrong. Scan for errors, especially in stats, names, or sensitive topics. If something feels off, trust your gut and double-check.
Key Takeaway:
AI hallucinations happen, but they don’t have to derail you. Better prompts (context, perspective, and insight!) and a default review step make all the difference.
Remember: garbage in = garbage out. You won’t be able to avoid hallucinations altogether, but with the right context and specific prompts, you can limit them significantly.
Til next time,
Elena