
usonian

(20,618 posts)
Mon Sep 22, 2025, 12:23 AM 11 hrs ago

OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws

Oops, no “Well, that’s a tiny bug that we’ll fix tonight”

https://www.computerworld.com/article/4059383/openai-admits-ai-hallucinations-are-mathematically-inevitable-not-just-engineering-flaws.html

OpenAI, the creator of ChatGPT, acknowledged in its own research that large language models will always produce hallucinations due to fundamental mathematical constraints that cannot be solved through better engineering, marking a significant admission from one of the AI industry’s leading companies.

The study, published on September 4 and led by OpenAI researchers Adam Tauman Kalai, Edwin Zhang, and Ofir Nachum alongside Georgia Tech’s Santosh S. Vempala, provided a comprehensive mathematical framework explaining why AI systems must generate plausible but false information even when trained on perfect data.

“Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty,” the researchers wrote in the paper. “Such ‘hallucinations’ persist even in state-of-the-art systems and undermine trust.”
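The paper's core argument can be sketched as a toy calculation (this is my illustration, not code from the paper): under accuracy-only grading, where a correct answer scores 1 and both a wrong answer and "I don't know" score 0, a model that guesses when uncertain always expects at least as many points as one that abstains, so the evaluation itself rewards hallucinating.

```python
# Toy illustration of the exam-taking incentive described in the paper
# (hypothetical numbers; not the paper's own code).

def expected_score(p_correct: float, guesses: bool) -> float:
    """Expected score on one question under binary grading:
    1 point for a correct answer, 0 for a wrong answer or an abstention."""
    return p_correct if guesses else 0.0

# Even at only 10% confidence, guessing weakly dominates abstaining.
for p in (0.1, 0.3, 0.5):
    assert expected_score(p, guesses=True) >= expected_score(p, guesses=False)
```

In other words, as long as benchmarks give no credit for admitting uncertainty, guessing is never worse and usually better, which is one reason the authors argue hallucination persists in state-of-the-art systems.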

The admission carried particular weight given OpenAI’s position as the creator of ChatGPT, which sparked the current AI boom and convinced millions of users and enterprises to adopt generative AI technology.


Paper here: https://arxiv.org/pdf/2509.04664

Opinion:

So, “Let’s use this fundamentally flawed system for critical work.”

A Harvard report found a few obstacles to filtering hallucinations, trivial things like “budget, volume, ambiguity, and context sensitivity.”

They had me at “budget”

💰


patphil

(8,276 posts)
2. Once an AI creates a hallucination, will other AI programs see it and use it as if it were actual truth?
Mon Sep 22, 2025, 12:39 AM
11 hrs ago

Could we slowly see a trend toward an AI created electronic reality that is clearly distinct from the one we live in?
How could this affect the world we actually live in, in terms of our being able to trust the information the AI gathers?
Could this eventually lead to a self-conscious AI?

Just a few thoughts on how strange this new world of AI could be.

Ilsa

(63,306 posts)
3. Infallibility
Mon Sep 22, 2025, 12:42 AM
11 hrs ago

"...large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty,"

I might be more inclined to double-check a statement coming from a person than to second-guess AI results.

AZJonnie

(1,569 posts)
4. Do they mean multi-purpose AI like ChatGPT, or do they mean this is true of
Mon Sep 22, 2025, 02:44 AM
9 hrs ago

ANY AI, no matter how specifically and thoroughly trained? Like an AI drone? Or an AI watching over life support monitoring at a hospital?

If it's true in both cases, and they have solid evidence that they're mathematically, irrefutably correct, the fallout could be massive for the industry.

But it probably won't be, because AI is probably still making decisions better than people would on average, so people will be convinced it's all fine.

usonian

(20,618 posts)
5. Check the paper.
Mon Sep 22, 2025, 06:25 AM
5 hrs ago

I think the problem will boil down to:

No matter how good the results are, they always need checking, and that costs money, and that means some hallucinations will creep through.

This is not like a mathematical library, where results have been tested like crazy and can be relied upon. Math libraries don't guess.
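The checking-costs-money point can be put in back-of-envelope terms (a hypothetical model with illustrative numbers, not anything from the paper): if you only review a fraction of outputs, the hallucinations in the unreviewed remainder get through, and raising the review rate raises the bill.

```python
# Hypothetical cost model (illustrative numbers only): assumes review
# catches every hallucination it sees, which is optimistic.

def slip_through(n_outputs: int, halluc_rate: float, review_rate: float) -> float:
    """Expected number of hallucinations that escape review."""
    return n_outputs * halluc_rate * (1.0 - review_rate)

def review_cost(n_outputs: int, review_rate: float, cost_per_check: float) -> float:
    """Money spent on human checking."""
    return n_outputs * review_rate * cost_per_check

# 10,000 outputs, a 2% hallucination rate, checking half of them at $1 each:
# about 100 hallucinations still get through, and the checking costs $5,000.
```

Pushing the review rate to 100% drives the slip-through to zero but doubles the checking bill, which is exactly the budget trade-off in question.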

Hugin

(36,880 posts)
7. Just like any other technological endeavor...
Mon Sep 22, 2025, 07:55 AM
4 hrs ago

Generative AI suffers from the potential for so-called gear clash. In fact, they’ve enshrined it at the board room level.

Entropy is a cruel mistress.

Mr. Sparkle

(3,491 posts)
6. My "unqualified" opinion is that it will take another technology leap to fix hallucinations and make AI mainstream
Mon Sep 22, 2025, 06:50 AM
5 hrs ago

Right now you would not trust it to make important decisions that could have a big financial impact on your life.
