
OpenAI’s new reasoning AI models hallucinate more

By MNK News · April 19, 2025 · 4 min read


OpenAI’s recently launched o3 and o4-mini AI models are state-of-the-art in many respects. However, the new models still hallucinate, or make things up — in fact, they hallucinate more than several of OpenAI’s older models.

Hallucinations have proven to be one of the biggest and most difficult problems to solve in AI, impacting even today’s best-performing systems. Historically, each new model has improved slightly in the hallucination department, hallucinating less than its predecessor. But that doesn’t seem to be the case for o3 and o4-mini.

According to OpenAI’s internal tests, o3 and o4-mini, which are so-called reasoning models, hallucinate more often than the company’s previous reasoning models — o1, o1-mini, and o3-mini — as well as OpenAI’s traditional, “non-reasoning” models, such as GPT-4o.

Perhaps more concerning, the ChatGPT maker doesn’t really know why it’s happening.

In its technical report for o3 and o4-mini, OpenAI writes that “more research is needed” to understand why hallucinations are getting worse as it scales up reasoning models. O3 and o4-mini perform better in some areas, including tasks related to coding and math. But because they “make more claims overall,” they’re often led to make “more accurate claims as well as more inaccurate/hallucinated claims,” per the report.

OpenAI found that o3 hallucinated in response to 33% of questions on PersonQA, the company’s in-house benchmark for measuring the accuracy of a model’s knowledge about people. That’s roughly double the hallucination rate of OpenAI’s previous reasoning models, o1 and o3-mini, which scored 16% and 14.8%, respectively. O4-mini did even worse on PersonQA — hallucinating 48% of the time.
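For context, a benchmark hallucination rate like the PersonQA figures above is simply the fraction of graded answers that contain at least one unsupported claim. The sketch below shows that bookkeeping in Python; the `QAItem` data class and the `claims_supported` grader are hypothetical stand-ins for illustration, not OpenAI's actual evaluation harness.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class QAItem:
    question: str              # e.g. a question about a specific person
    reference_facts: set[str]  # ground-truth facts the answer may rely on

def hallucination_rate(
    items: list[QAItem],
    answer_fn: Callable[[str], str],
    claims_supported: Callable[[str, set[str]], bool],
) -> float:
    """Fraction of answers that contain at least one unsupported claim.

    answer_fn queries the model under test; claims_supported is a grader
    that checks every claim in an answer against the reference facts.
    Both are placeholders for whatever harness a benchmark actually uses.
    """
    if not items:
        return 0.0
    flagged = sum(
        1
        for item in items
        if not claims_supported(answer_fn(item.question), item.reference_facts)
    )
    return flagged / len(items)

# Under this definition, o3's reported 33% on PersonQA means roughly one in
# three person-related answers contained at least one fabricated claim.
```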

Third-party testing by Transluce, a nonprofit AI research lab, also found evidence that o3 has a tendency to make up actions it took in the process of arriving at answers. In one example, Transluce observed o3 claiming that it ran code on a 2021 MacBook Pro “outside of ChatGPT,” then copied the numbers into its answer. While o3 has access to some tools, it can’t do that.

“Our hypothesis is that the kind of reinforcement learning used for o-series models may amplify issues that are usually mitigated (but not fully erased) by standard post-training pipelines,” said Neil Chowdhury, a Transluce researcher and former OpenAI employee, in an email to TechCrunch.

Sarah Schwettmann, co-founder of Transluce, added that o3’s hallucination rate may make it less useful than it otherwise would be.

Kian Katanforoosh, a Stanford adjunct professor and CEO of the upskilling startup Workera, told TechCrunch that his team is already testing o3 in their coding workflows, and that they’ve found it to be a step above the competition. However, Katanforoosh says that o3 tends to hallucinate broken website links. The model will supply a link that, when clicked, doesn’t work.
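Hallucinated links are one of the easier failure modes to catch mechanically, since an application can verify that a model-supplied URL actually resolves before showing it to a user. Below is a minimal sketch using only the Python standard library; the extraction regex and the HEAD-request policy are assumptions for illustration, not anything Workera or OpenAI describes.

```python
import re
import urllib.error
import urllib.request

# Rough pattern for pulling URLs out of free-form model output.
URL_PATTERN = re.compile(r"https?://[^\s<>\"')\]]+")

def link_resolves(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL answers with a non-error HTTP status.

    Uses a HEAD request to avoid downloading the page body; some servers
    reject HEAD, so a production check might fall back to a GET.
    """
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (urllib.error.URLError, ValueError):
        return False

def flag_dead_links(model_output: str) -> list[str]:
    """Extract URLs from model output and return the ones that do not resolve."""
    return [url for url in URL_PATTERN.findall(model_output) if not link_resolves(url)]
```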

Hallucinations may help models arrive at interesting ideas and be creative in their “thinking,” but they also make some models a tough sell for businesses in markets where accuracy is paramount. For example, a law firm likely wouldn’t be pleased with a model that inserts lots of factual errors into client contracts.

One promising approach to boosting the accuracy of models is giving them web search capabilities. OpenAI’s GPT-4o with web search achieves 90% accuracy on SimpleQA, another one of OpenAI’s accuracy benchmarks. Potentially, search could improve reasoning models’ hallucination rates, as well — at least in cases where users are willing to expose prompts to a third-party search provider.
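The grounding pattern itself is straightforward: retrieve documents for the query, then instruct the model to answer only from what was retrieved and to cite it. The sketch below shows that flow in the abstract; the `search` and `ask_model` callables are placeholders rather than OpenAI's actual web-search tooling, and the prompt wording is an assumption.

```python
from typing import Callable

def grounded_answer(
    question: str,
    search: Callable[[str], list[str]],  # placeholder: returns text snippets for a query
    ask_model: Callable[[str], str],     # placeholder: sends a prompt to the model
) -> str:
    """Answer a question using retrieved snippets as the only allowed evidence."""
    snippets = search(question)
    sources = "\n\n".join(f"[{i + 1}] {text}" for i, text in enumerate(snippets))
    prompt = (
        "Answer the question using ONLY the numbered sources below. "
        "Cite sources by number, and reply 'not found' if they do not contain the answer.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )
    return ask_model(prompt)
```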

If scaling up reasoning models indeed continues to worsen hallucinations, it’ll make the hunt for a solution all the more urgent.

“Addressing hallucinations across all our models is an ongoing area of research, and we’re continually working to improve their accuracy and reliability,” said OpenAI spokesperson Niko Felix in an email to TechCrunch.

In the last year, the broader AI industry has pivoted to focus on reasoning models after techniques for improving traditional AI models started showing diminishing returns. Reasoning improves model performance on a variety of tasks without requiring massive amounts of computing and data during training. Yet it seems reasoning may also lead to more hallucination, presenting a challenge.

This article originally appeared on TechCrunch at https://techcrunch.com/2025/04/18/openais-new-reasoning-ai-models-hallucinate-more/


