
How to Deal With AI When It Confidently Gets Things... TOTALLY WRONG

by Elena Jäger
Aug 24, 2025

🕓 Read time: ~3 min

Have you ever had AI give you an answer that sounded really smart but turned out to be completely false? We can all relate, right?
That’s not just a bug. It’s something called an AI hallucination.

When people say AI is “hallucinating,” they don’t mean it’s seeing purple elephants. It means the system gave you output that looks accurate, sounds confident… but isn’t true.

Why does AI hallucinate? 

Because AI doesn’t know things. It doesn’t really check facts or understand meaning (even if it pretends to know it all). It simply predicts what text is likely to come next, based on patterns in massive data sets.

So when it hits a gap (an unclear question, a niche topic, missing information), it will likely invent a detail to keep the conversation flowing.

It’s not trying to deceive you. It’s doing what it was built to do: predict text that feels right. But that “feels right” output can still be dangerously wrong, especially if it’s presented with confidence.

Remember studying probabilities in high school? Now you finally know what those are good for 🤓

When do hallucinations happen?

You’ll most often encounter hallucinations when:

  • Your prompt is vague and lacks context

  • You provide the AI with complex or conflicting objectives

  • You ask it to generate highly specific facts or insights without providing actual data input

In short, hallucinations in AI are caused by a combination of training data quality, model design, prompt clarity, and the inherent limitation of generating outputs based on learned probabilities rather than true understanding or verification.

💡 How to Limit Hallucinations: 4 Practical Tips


1. Give Clear, Specific Prompts

Garbage in = garbage out. The more generic your prompt, the more AI fills in the blanks with guesses. Be specific about what you want and for whom (your audience). Include context, insights, and perspective, as well as the desired output format and tone. If possible, add examples.

Instead of: “Write a sales email for a new training course.”
Try: “Write a 3-paragraph email inviting executive coaches to a live AI training titled "Awesome Sales Training", covering topics ABC. Keep it warm, clear, and benefits-focused.”

2. Use “Validation” Prompts to Check AI’s Thinking
Before you trust the output, try the following prompts:

  • “If you’re making assumptions, say so.”

  • “Double-check your response for accuracy.” (use a reasoning model for extra power!)

  • “Only use the information I’ve given you, don’t add new facts.”
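
If you work with AI through code rather than a chat window, you can bake these validation instructions into every request. Here's a minimal sketch in Python; the wrapper is just string handling, and the prompt text mirrors the tips above (no particular AI provider or API is assumed):

```python
# Anti-hallucination checklist, appended to every prompt we send.
VALIDATION_SUFFIX = (
    "\n\nBefore answering:\n"
    "- If you're making assumptions, say so.\n"
    "- Double-check your response for accuracy.\n"
    "- Only use the information I've given you; don't add new facts."
)

def with_validation(prompt: str) -> str:
    """Append the validation checklist to any prompt."""
    return prompt.strip() + VALIDATION_SUFFIX

# Example: wrap a task before sending it to your AI tool of choice.
print(with_validation("Summarize the attached coaching notes."))
```

The point isn't the code itself; it's the habit. Whether you type these lines by hand or append them automatically, making validation the default step is what limits hallucinations.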

3. Ask AI to Reveal Its Sources
Instruct the AI to reveal its sources. Then make sure to check them. Just because it includes a link doesn’t mean it’s real or accurate.

4. Review, Verify, and Edit Before Sharing
Even a great-sounding answer can be wrong. Scan for errors, especially in stats, names, or sensitive topics. If something feels off, trust your gut and double-check.

 

Key Takeaway:
AI hallucinations happen, but they don’t have to derail you. Better prompts (context, perspective, and insight!) and a default review step make all the difference.

 

Remember: garbage in = garbage out. You won’t be able to avoid hallucinations altogether, but with the right context and specific prompts, you will be able to limit them significantly.

Til next time,

Elena

