AI but make it stop hallucinating

Stack the odds in your favor

A You & AI participant asked how she could get ChatGPT to stop hallucinating. The answer is, you can’t. But you can tame its visions, and today I’m sharing strategies to do so.


Ways to reduce hallucinations:

  • If AI hallucinates at the beginning of a chat, delete the chat & start a new thread. Starting fresh reduces the chance the hallucination will persist or ripple, and since it’s the beginning of a chat, you’re not losing token memory by starting over.

  • Break the task into parts. Instead of a single complex request, ask the LLM to complete fewer steps at a time. A smaller prompt scope makes it less likely the model will overreach to fill in gaps, and at each step, you can audit its accuracy.

  • Use Project Instructions to steer behavior. Because I am often writing emails to parents about education, ChatGPT has hallucinated parenting advice several times. My relevant Project Instructions now include a directive not to include parenting advice in any emails (and it has not since I added it).

  • Give context through Project Knowledge. Adding relevant context makes the LLM less likely to make up information to fill in gaps in its understanding.

  • Manage Persistent Memory to shape what the LLM does when it sees ambiguity. On the margin, you can influence the model to tell you when it senses multiple answers of similar likelihood, rather than simply selecting one and pretending it’s right.

  • Ask the model to critique its own output. I especially do this when I’m working in a domain where my background knowledge is low, since it’s harder for me to spot hallucinations there.

  • Select the right model. For example, for more complex reasoning, including visual reasoning, you want to select o3 and not 4o.

  • Switch tools, if relevant. As we discuss in the knowledge partner session of You & AI, NotebookLM is a better tool for tracing sources than ChatGPT is, for example.

More improvements on the horizon

  • Hallucinations are a well-known problem that researchers are actively working to address. The current models behind ChatGPT and Claude already hallucinate much less than earlier versions.

  • Hybrid systems hallucinate less. ChatGPT now routes some tasks—like math or data analysis—through tools better suited to the job, like Python. Similarly, while ChatGPT doesn’t have a calendar integration yet, Claude recently added one, which means it can read your calendar instead of hallucinating its contents. I am running an upcoming session on hybrid routing if you are interested in learning more.

  • Better integrations are coming. Expect more hybrid workflows and tighter integration with tools like Google Workspace, which should improve reliability.


Reducing hallucinations isn’t about one thing. It’s about developing ways of thinking, checking, and collaborating with AI that make your output more trustworthy, even when the system itself isn’t. Treating AI as infallible guarantees disappointment. But treating it as a reasoning partner—one you guide, audit, and challenge—turns it into a worthy one.

Coauthored index: Sarah 90% | AI 10%
