REALITY CHECK

Why do LLMs hallucinate?

This week, I was forced to skip running because of a cold and a cough. On paper, it looked like I had all the systems ready to go: training schedule lined up, shoes by the door, and even the mindset to push through. But when the body is drained, lungs tight, muscles heavy, forcing a run doesn’t bring performance; it brings struggle. What comes out looks like running, but it isn’t my real form; it’s shaky, off-tempo, a placeholder until I can recover.


Struggling is like an LLM hallucinating: both body and mind can drift.
The question is: how do you pull yourself back?

Large Language Models hallucinate in much the same way. These models don’t “know” the truth. They are pattern machines, predicting the next word based on massive amounts of past data. When the training data is rich and aligned with the question, they perform beautifully. But when there’s a gap, like a runner trying to train sick, the model still pushes forward, filling in blanks with words that sound right but lack grounding. The illusion is smooth, but the accuracy can crumble.
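To make the “pattern machine” idea concrete, here’s a toy sketch in Python. It’s a simple bigram model, nothing like a real GPT-class system, but it shows the core behavior: it only learns which words tend to follow which, so it will cheerfully generate fluent-sounding text with no idea whether any of it is true.

```python
import random
from collections import defaultdict

# Toy "training data": the model will only ever know these patterns.
corpus = (
    "the runner finished the marathon in record time . "
    "the model finished the answer in record time . "
    "the runner trained hard and felt strong ."
).split()

# Count which words follow which (a bigram table).
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start: str, length: int = 8) -> str:
    """Predict one word at a time, always picking something that 'sounds right'."""
    words = [start]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))  # plausible, not necessarily true
    return " ".join(words)

print(generate("the"))
```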

It’s not what happens to you, but how you react to it that matters.

Epictetus

In running, recovery is the cure. Rest brings the body back in sync with the training plan. In neural networks, grounding is the answer. That means connecting the model to external tools: retrieval systems, verified databases, or even human oversight. Just like I can’t rely on a strained body to carry me through long miles, we can’t rely on an ungrounded model to give us solid answers.
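Here’s a minimal sketch of what grounding can look like in code. The tiny “verified store” and the keyword matching are purely illustrative assumptions; real systems retrieve from large, curated sources, but the principle is the same: answer from evidence, and admit uncertainty when there isn’t any.

```python
# A small store of verified facts standing in for a retrieval system or database.
VERIFIED_FACTS = {
    "marathon distance": "A marathon is 42.195 km (26.2 miles).",
    "epictetus": "Epictetus was a Stoic philosopher.",
}

def grounded_answer(question: str) -> str:
    """Answer only from the verified store; otherwise say so."""
    question_lower = question.lower()
    # Naive retrieval: look for a verified fact whose key appears in the question.
    for key, fact in VERIFIED_FACTS.items():
        if key in question_lower:
            return fact  # grounded in retrieved evidence
    # Nothing retrieved: better to say "I don't know" than to guess.
    return "I don't know."

print(grounded_answer("What is the marathon distance?"))
print(grounded_answer("Who won the 2031 marathon?"))
```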


One of my favorite movies is The Matrix. Just as LLMs hallucinate, sometimes what we ‘see’ in running isn’t the real picture: it’s the illusion of struggle versus the truth of endurance.

The bigger picture? Both running and neural networks teach humility. You can’t always trust the appearance of motion; sometimes you have to pause, check the signals, and rebuild the foundation. Only then does progress become real.


OpenAI’s study reminds us: sometimes ‘I don’t know’ is the strongest answer. Agree?

Interestingly, OpenAI just published a new study (September 4, 2025) explaining why this happens. Hallucinations often come from how models are trained and evaluated: instead of being rewarded for saying “I don’t know,” they’re pushed to guess, even if the guess is wrong. The study notes that even advanced systems like GPT-5 still hallucinate: less often than before, but the problem isn’t gone.
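To see that incentive problem concretely, here’s a toy scoring sketch. The numbers are made up for illustration and aren’t taken from the study, but they show the pattern: under accuracy-only grading, a model that always guesses beats one that honestly abstains; penalize confident wrong answers, and honesty wins.

```python
def accuracy_only_score(correct: int, wrong: int, abstained: int) -> float:
    """Typical leaderboard scoring: a wrong guess costs the same as 'I don't know'."""
    total = correct + wrong + abstained
    return correct / total

def penalized_score(correct: int, wrong: int, abstained: int) -> float:
    """Scoring that penalizes wrong answers but not honest abstention."""
    total = correct + wrong + abstained
    return (correct - wrong) / total

# A model that guesses on everything vs. one that abstains when unsure (100 questions each):
print(accuracy_only_score(correct=60, wrong=40, abstained=0))   # guesser looks better: 0.60
print(accuracy_only_score(correct=55, wrong=5, abstained=40))   # honest model: 0.55
print(penalized_score(correct=60, wrong=40, abstained=0))       # guesser drops to: 0.20
print(penalized_score(correct=55, wrong=5, abstained=40))       # honest model wins: 0.50
```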

Here’s the link for further reading:

https://openai.com/index/why-language-models-hallucinate/

Like running, it’s not just about pushing harder or faster: it’s about training smart, knowing limits, and valuing truth over appearances.
