May 14, 2025

GPT-4 Fails On Real Healthcare Tasks: New HealthBench Test Reveals The Gaps

In Brief

Researchers introduced HealthBench, a new benchmark that tests LLMs like GPT-4 and Med-PaLM 2 on real medical tasks.


Large language models are everywhere — from search to coding and even patient-facing health tools. New systems are being introduced almost weekly, including tools that promise to automate clinical workflows. But can they actually be trusted to make real medical decisions? A new benchmark, called HealthBench, says not yet. According to the results, models like GPT-4 (from OpenAI) and Med-PaLM 2 (from Google DeepMind) still fall short on practical healthcare tasks, especially when accuracy and safety matter most.


HealthBench is different from older tests. Instead of using narrow quizzes or academic question sets, it challenges AI models with real-world tasks. These include picking treatments, making diagnoses, and deciding what steps a doctor should take next. That makes the results more relevant to how AI might actually be used in hospitals and clinics.


Across all tasks, GPT-4 performed better than previous models. But the margin was not enough to justify real-world deployment. In some cases, GPT-4 chose incorrect treatments. In others, it offered advice that could delay care or even increase harm. The benchmark makes one thing clear: AI might sound smart, but in medicine, that’s not good enough.

Real Tasks, Real Failures: Where AI Still Breaks in Medicine

One of HealthBench’s biggest contributions is how it tests models. It includes 14 real-world healthcare tasks across five categories: treatment planning, diagnosis, care coordination, medication management, and patient communication. These aren’t made-up questions. They come from clinical guidelines, open datasets, and expert-authored resources that reflect how actual healthcare works.
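To make that structure concrete, here is a minimal Python sketch of how one such task could be represented. The category labels, field names, and example case are illustrative assumptions for this article, not HealthBench's published schema.

```python
from dataclasses import dataclass, field

# Illustrative only: the category labels and field names below are assumptions
# made for this sketch, not HealthBench's actual data format.
CATEGORIES = [
    "treatment_planning",
    "diagnosis",
    "care_coordination",
    "medication_management",
    "patient_communication",
]

@dataclass
class BenchmarkTask:
    task_id: str
    category: str                 # one of CATEGORIES
    prompt: str                   # the clinical scenario shown to the model
    source: str                   # e.g. a clinical guideline or expert-authored case
    rubric: list = field(default_factory=list)  # criteria the answer is graded against

example = BenchmarkTask(
    task_id="med-mgmt-03",
    category="medication_management",
    prompt=("A 68-year-old on warfarin has been prescribed a new antibiotic. "
            "What interaction checks and monitoring steps are needed?"),
    source="expert-authored case (hypothetical example)",
    rubric=["identifies the interaction",
            "recommends INR monitoring",
            "avoids unsafe advice"],
)
print(example.category, "->", example.task_id)
```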

On many tasks, large language models showed consistent errors. For instance, GPT-4 often failed at clinical decision-making, such as determining when to prescribe antibiotics. In some examples it overprescribed; in others it missed important symptoms. Mistakes like these are not just wrong: they could cause real harm if used in actual patient care.


The models also struggled with complex clinical workflows. For example, when asked to recommend follow-up steps after lab results, GPT-4 gave generic or incomplete advice. It often skipped context, didn’t prioritize urgency, or lacked clinical depth. That makes it dangerous in cases where time and order of operations are critical.

In medication-related tasks, accuracy dropped further. The models frequently mixed up drug interactions or gave outdated guidance. That’s especially alarming since medication errors are already one of the top causes of preventable harm in healthcare.


Even when the models sounded confident, they weren’t always right. The benchmark revealed that fluency and tone didn’t match clinical correctness. This is one of the biggest risks of AI in health — it can “sound” human while being factually wrong.

Why HealthBench Matters: Real Evaluation for Real Impact

Until now, many AI health evaluations used academic question sets like MedQA or USMLE-style exams. These benchmarks helped measure knowledge but didn’t test whether models could think like doctors. HealthBench changes that by simulating what happens in actual care delivery.

Instead of one-off questions, HealthBench looks at the entire decision chain — from reading a symptom list to recommending care steps. That gives a more complete picture of what AI can or can’t do. For instance, it tests if a model can manage diabetes across multiple visits or track lab trends over time.
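As a rough illustration of the difference, the sketch below scores a model at every step of a hypothetical multi-visit diabetes case rather than on a single question. The `query_model` and `score_step` functions are placeholder assumptions, not HealthBench code.

```python
# Simplified sketch of decision-chain evaluation: the model is queried at each
# step of a longitudinal case and every intermediate recommendation is scored,
# not just the final answer. Both helpers below are hypothetical stand-ins.

def query_model(context: str, step_prompt: str) -> str:
    # Placeholder for a call to the model under evaluation.
    return "model answer for: " + step_prompt

def score_step(answer: str, expected: str) -> float:
    # Placeholder scorer; a real rubric would also grade safety and relevance.
    return 1.0 if expected.lower() in answer.lower() else 0.0

visits = [
    ("Visit 1: HbA1c 8.4%, new diagnosis", "start metformin"),
    ("Visit 2: HbA1c 7.9%, mild GI upset", "adjust dose"),
    ("Visit 3: HbA1c 8.6%, rising trend", "intensify therapy"),
]

context = "Type 2 diabetes, 55-year-old patient."
scores = []
for step_prompt, expected in visits:
    answer = query_model(context, step_prompt)
    scores.append(score_step(answer, expected))
    context += " " + step_prompt + " -> " + answer  # carry history forward

print("chain score:", sum(scores) / len(scores))
```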


The benchmark also grades models on multiple criteria, not just accuracy. It checks for clinical relevance, safety, and the potential to cause harm. That means it’s not enough to get a question technically right — the answer also has to be safe and useful in real-life settings.
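A simplified Python sketch of what multi-criterion grading could look like is shown below. The weights, criteria names, and the rule that a harmful answer scores zero are assumptions made for illustration, not HealthBench's published rubric.

```python
# Illustrative multi-criterion grading: an answer is scored on accuracy,
# clinical relevance, and safety, and a single harmful element caps the grade.
# Weights and criteria are assumptions for this sketch only.

def grade_answer(accuracy: float, relevance: float, safety: float,
                 potential_harm: bool) -> float:
    """Return a 0-1 grade; all inputs are 0-1 sub-scores."""
    if potential_harm:
        return 0.0  # a technically correct but unsafe answer still fails
    weights = {"accuracy": 0.4, "relevance": 0.3, "safety": 0.3}
    return (weights["accuracy"] * accuracy
            + weights["relevance"] * relevance
            + weights["safety"] * safety)

# A fluent but potentially harmful recommendation is graded zero despite high accuracy.
print(grade_answer(accuracy=0.9, relevance=0.8, safety=0.2, potential_harm=True))
print(grade_answer(accuracy=0.9, relevance=0.8, safety=0.9, potential_harm=False))
```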

Another strength of HealthBench is transparency. The team behind it released all prompts, scoring rubrics, and annotations. That allows other researchers to test new models, improve evaluations, and build on the work. It’s an open call to the AI community: if you want to claim your model is useful in healthcare, prove it here.

GPT-4 and Med-PaLM 2 Still Not Ready for Clinics

Despite recent hype around GPT-4 and other large models, the benchmark shows they still make serious medical mistakes. GPT-4 achieved only around 60–65% correctness on average across all tasks, and in high-stakes areas like treatment and medication decisions, the score was even lower.

Med-PaLM 2, a model tuned for healthcare tasks, didn’t perform much better. It showed slightly stronger accuracy in basic medical recall but failed at multi-step clinical reasoning. In several scenarios, it offered advice that no licensed physician would support, including misidentifying red-flag symptoms and suggesting non-standard treatments.


The report also highlights a hidden danger: overconfidence. Models like GPT-4 often deliver wrong answers in a confident, fluent tone. That makes it hard for users — even trained professionals — to detect errors. This mismatch between linguistic polish and medical precision is one of the key risks of deploying AI in healthcare without strict safeguards.

To put it plainly: sounding smart isn’t the same as being safe.

What Needs to Change Before AI Can Be Trusted in Healthcare

The HealthBench results aren’t just a warning. They also point to what AI needs to improve. First, models must be trained and evaluated using real-world clinical workflows, not just textbooks or exams. That means including doctors in the loop — not just as users, but as designers, testers, and reviewers.


Second, AI systems should be built to ask for help when uncertain. Right now, models often guess instead of saying, “I don’t know.” That’s unacceptable in healthcare. A wrong answer can delay diagnosis, increase risk, or break patient trust. Future systems must learn to flag uncertainty and refer complex cases to humans.
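As a rough sketch of that idea, the snippet below shows a wrapper that withholds an answer and defers to a clinician when the model's confidence estimate is low. The threshold and the confidence score itself are illustrative assumptions; real systems would need calibrated uncertainty, which current models largely lack.

```python
# Sketch of an "ask for help" safeguard: if the model's self-reported confidence
# falls below a threshold, the answer is withheld and the case is routed to a
# human instead of being guessed. The threshold, the confidence estimate, and
# the wrapper itself are illustrative assumptions, not a production safeguard.

ESCALATION_THRESHOLD = 0.8

def answer_or_escalate(model_answer: str, confidence: float) -> str:
    """Return the model answer only when confidence is high enough."""
    if confidence < ESCALATION_THRESHOLD:
        return ("I am not confident enough to answer this safely. "
                "Please refer the question to a licensed clinician.")
    return model_answer

# A low-confidence answer about stopping a blood thinner is deferred to a human.
print(answer_or_escalate("Yes, stop it 24 hours before surgery.", confidence=0.55))
```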

Third, evaluations like HealthBench must become the standard before real deployment. Just passing an academic test is no longer enough. Models must prove they can handle real decisions safely, or they should stay out of clinical settings entirely.

The Path Ahead: Responsible Use, Not Hype

HealthBench doesn’t say that AI has no future in healthcare. Instead, it shows where we are today — and how far there is still to go. Large language models can help with administrative tasks, summarization, or patient communication. But for now, they are not ready to replace or even reliably support doctors in clinical care.

Responsible use means clear limits. It means transparency in evaluation, partnerships with medical professionals, and constant testing against real medical tasks. Without that, the risks are too high.

The creators of HealthBench invite the AI and healthcare community to adopt it as a new standard. If done right, it could push the field forward — from hype to real, safe impact.
