OpenAI’s Latest Models Hallucinate More: A Step Back for AI Reliability

by Leslie Finecountry
April 20, 2025
in Artificial Intelligence
Reading Time: 2 mins read

In a surprising development, OpenAI’s newest o3 and o4-mini AI models hallucinate more frequently than their predecessors, reversing the expected trend of improving accuracy.

The Hallucination Problem

OpenAI’s internal testing shows concerning results:

  • The o3 model hallucinated on 33% of questions about people—double the rate of previous models
  • The o4-mini performed worse, hallucinating 48% of the time
  • Even OpenAI admits it doesn’t fully understand why this is happening

Third-party testing by the research lab Transluce confirmed these issues, finding cases where o3 fabricated actions it claimed to have taken, such as running code on external devices.

Why This Matters

This regression in factual reliability creates significant challenges for industries requiring accuracy:

  • Legal firms can’t risk models inserting errors into contracts
  • Financial institutions need reliable analysis without fabricated data
  • Healthcare applications demand extremely high levels of accuracy

Even in areas where the models excel, problems persist. Workera CEO Kian Katanforoosh reports that while o3’s coding capabilities are impressive, it regularly generates broken website links.
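
One lightweight guard against that particular failure is to check any URLs in a model’s output before passing them along. The snippet below is a minimal sketch of such a check, not anything Workera or OpenAI has published; the `requests` library and the example strings are assumptions for illustration.

```python
import re
import requests

def find_broken_links(model_output: str, timeout: float = 5.0) -> list[str]:
    """Return any URLs in the model's output that fail to resolve."""
    urls = re.findall(r"https?://[^\s)\]>\"']+", model_output)
    broken = []
    for url in urls:
        try:
            # A HEAD request is usually enough to confirm the page exists;
            # some servers reject HEAD, so treat this as a heuristic, not proof.
            response = requests.head(url, allow_redirects=True, timeout=timeout)
            if response.status_code >= 400:
                broken.append(url)
        except requests.RequestException:
            broken.append(url)
    return broken

# Usage: surface suspect links to a human instead of publishing them verbatim.
answer = "Docs are at https://example.com/setup and https://example.com/page-that-may-not-exist"
print(find_broken_links(answer))
```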

The Reasoning Model Trade-off

The industry has pivoted to “reasoning models” as traditional approaches showed diminishing returns. These models improve performance without requiring massive computing resources but appear to make more claims overall—both accurate and inaccurate ones.

Potential Solutions

OpenAI is exploring several approaches:

  • Web search integration (GPT-4o with search achieves 90% accuracy on some benchmarks)
  • Specialized training techniques to reduce hallucinations

“Addressing hallucinations across all our models is an ongoing area of research,” said OpenAI spokesperson Niko Felix.
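
To make the web-search idea above concrete, the rough sketch below shows one common grounding pattern: fetch sources first, then ask the model to answer only from them. It assumes the official OpenAI Python SDK; `search_web()` is a hypothetical placeholder for whatever search provider a team actually uses, and the prompt wording is illustrative, not OpenAI’s recipe.

```python
from openai import OpenAI  # official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def search_web(query: str) -> list[str]:
    """Hypothetical helper: return text snippets from your search provider of choice."""
    raise NotImplementedError("plug in a real search API here")


def grounded_answer(question: str) -> str:
    # Retrieve sources first, then constrain the model to them instead of its own recall.
    context = "\n\n".join(search_web(question))
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Answer using only the sources provided. "
                                          "If they do not contain the answer, say you don't know."},
            {"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```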

What’s Next

If scaling reasoning models continues to worsen hallucinations, finding solutions becomes increasingly urgent. For now, users should maintain appropriate skepticism about factual claims and implement verification processes—particularly for critical applications.
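
One cheap verification process is to sample the same factual question several times and only trust an answer the model repeats consistently; disagreement across samples is a useful hallucination signal. The sketch below assumes the OpenAI Python SDK and the gpt-4o-mini model purely for illustration, and it matches answers verbatim, so in practice you would normalize or semantically compare them.

```python
from collections import Counter

from openai import OpenAI

client = OpenAI()


def consistent_answer(question: str, samples: int = 5, min_agreement: float = 0.8) -> str | None:
    """Ask the same question several times; keep the majority answer only if it is stable."""
    answers = []
    for _ in range(samples):
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": f"Answer in one short sentence: {question}"}],
            temperature=1.0,  # leave some randomness so disagreement can surface
        )
        answers.append(response.choices[0].message.content.strip())

    top_answer, count = Counter(answers).most_common(1)[0]
    # Return None when the model is inconsistent, i.e. escalate to a human reviewer.
    return top_answer if count / samples >= min_agreement else None
```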

#OpenAI #AIHallucinations #TechNews #AIReliability
