
[Software] The surprising reason ChatGPT and other AI tools make things up – and why it’s not just a glitch



Image: A woman uses a cell phone displaying the OpenAI logo, with the same logo visible on a computer screen in the background, in Edmonton, Canada, on February 10, 2025.

Large language models (LLMs) like ChatGPT have wowed the world with their capabilities. But they’ve also made headlines for confidently spewing absolute nonsense.

This phenomenon, known as hallucination, ranges from fairly harmless mistakes – like getting the number of ‘r’s in strawberry wrong – to completely fabricated legal cases that have landed lawyers in serious trouble.

Sure, you could argue that everyone should rigorously fact-check anything AI suggests (and I’d agree). But as these tools become more ingrained in our work, research, and decision-making, we need to understand why hallucinations happen – and whether we can prevent them.

To understand why AI hallucinates, we need a quick refresher on how these models actually work.

LLMs don’t retrieve facts like a search engine or a human looking something up in a database. Instead, they generate text by making predictions.

“LLMs are next-word predictors and daydreamers at their core,” says software engineer Maitreyi Chatterjee. “They generate text by predicting the statistically most likely word that occurs next.”

We often assume these models are thinking or reasoning, but they’re not. They’re sophisticated pattern predictors – and that process inevitably leads to errors.
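To make that concrete, here is a deliberately tiny Python sketch of what "predicting the statistically most likely next word" means. The vocabulary, scores, and prompt are invented for illustration; a real model scores tens of thousands of tokens using billions of parameters, but the principle is the same: the highest-scoring continuation wins, whether or not it is true.

```python
import math

# Toy next-word prediction. The vocabulary, raw scores, and prompt below are
# made up for illustration; they don't come from any real model.
vocab_scores = {"Paris": 4.2, "London": 2.1, "banana": -1.0, "blue": 0.3}

# Softmax turns the raw scores into a probability distribution over the vocabulary.
total = sum(math.exp(s) for s in vocab_scores.values())
probs = {word: math.exp(s) / total for word, s in vocab_scores.items()}

prompt = "The capital of France is"
next_word = max(probs, key=probs.get)  # greedy choice: highest probability wins
print(f"{prompt} {next_word}  (p={probs[next_word]:.2f})")

# Nothing here checks whether the chosen word is actually correct;
# the model simply emits whatever continuation scores highest.
```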

This explains why LLMs struggle with seemingly simple things, like counting the ‘r’s in strawberry or solving basic math problems. They’re not sitting there working it out like we would – not really.

Another key reason is they don’t check what they’re pumping out. “LLMs lack an internal fact-checking mechanism, and because their goal is to predict the next token [unit of text], they sometimes prefer lucid-sounding token sequences over correct ones,” Chatterjee explains.

And when they don’t know the answer? They often make something up. “If the model’s training data has incomplete, conflicting, or insufficient information for a given query, it could generate plausible but incorrect information to ‘fill in’ the gaps,” Chatterjee tells me.

Rather than admitting uncertainty, many AI tools default to producing an answer – whether it’s right or not. Other times, they have the correct information but fail to retrieve or apply it properly. This can happen when a question is complex, or the model misinterprets context.

This is why prompts matter.

The hallucination-smashing power of prompts
Certain types of prompts can make hallucinations more likely. We’ve already covered our top tips for leveling up your AI prompts, which are worth following not just for getting more useful results, but also for reducing the chances of AI going off the rails.

For example, ambiguous prompts can cause confusion, leading the model to mix up knowledge sources. Chatterjee says this is where you need to be careful: ask "Tell me about Paris" without context, and you might get a strange blend of facts about Paris, France, Paris Hilton, and Paris from Greek mythology.

But more detail isn’t always better. Overly long prompts can overwhelm the model, making it lose track of key details and start filling in gaps with fabrications. Similarly, when a model isn’t given enough time to process a question, it’s more likely to make errors. That’s why techniques like chain-of-thought prompting – where the model is encouraged to reason through a problem step by step – can lead to more accurate responses.
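For illustration, here is roughly what chain-of-thought prompting can look like in code. This sketch uses the OpenAI Python SDK; the model name, the question, and the exact prompt wording are assumptions made for the example, not recommendations from the article.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

question = "A shirt costs $25 after a 20% discount. What was the original price?"

# Direct prompt: the model is free to jump straight to an answer.
direct = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice for the example
    messages=[{"role": "user", "content": question}],
)

# Chain-of-thought style prompt: ask the model to reason step by step first.
step_by_step = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": question + " Work through the problem step by step, "
                              "then give the final answer on its own line.",
    }],
)

print(direct.choices[0].message.content)
print(step_by_step.choices[0].message.content)
```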

Providing a reference is another effective way to keep AI on track. “You can sometimes solve this problem by giving the model a ‘pre-read’ or a knowledge source to refer to so it can cross-check its answer,” Chatterjee explains. Few-shot prompting, where the model is given a series of examples before answering, can also improve accuracy.
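As a rough sketch of how those two ideas can fit together, the example below supplies a short reference passage for the model to cross-check against, plus a couple of worked question-and-answer pairs before the real question. The reference text, the example questions, and the model name are all invented placeholders; the structure, not the content, is the point.

```python
from openai import OpenAI

client = OpenAI()

# Invented reference text (the "pre-read") the model is asked to ground its answers in.
reference = (
    "Acme Widgets was founded in 1998 in Austin, Texas. "
    "Its best-selling product is the W-100 widget."
)

messages = [
    {"role": "system", "content": (
        "Answer using only the reference text provided. "
        "If the reference does not contain the answer, say you don't know."
    )},
    # Few-shot examples: show the model the expected behaviour first.
    {"role": "user", "content": f"Reference: {reference}\n\nQuestion: Where was Acme Widgets founded?"},
    {"role": "assistant", "content": "Austin, Texas."},
    {"role": "user", "content": f"Reference: {reference}\n\nQuestion: Who is Acme's CEO?"},
    {"role": "assistant", "content": "I don't know; the reference doesn't say."},
    # The real question, asked in the same format as the examples.
    {"role": "user", "content": f"Reference: {reference}\n\nQuestion: When was Acme Widgets founded?"},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```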

Even with these techniques, hallucinations remain an inherent challenge for LLMs. As AI evolves, researchers are working on ways to make models more reliable. But for now, it’s essential to understand why AI hallucinates, how to reduce it, and, most importantly, why you should fact-check everything.

Link: https://www.techradar.com/computing/artificial-intelligence/the-surprising-reason-chatgpt-and-other-ai-tools-make-things-up-and-why-its-not-just-a-glitch

