
AI surpassed doctors 4.6-fold in the new medical test HealthBench

OpenAI has presented HealthBench, a language-model evaluation system that sets new standards for measuring how well artificial intelligence systems perform in medicine.

The tool was developed in collaboration with 262 practicing physicians from 60 countries. Such broad geographical coverage lets the benchmark account for the different approaches to diagnosis and treatment characteristic of different medical schools and cultural contexts.

HealthBench is built on an extensive database of 5,000 clinical scenarios modeled on real medical cases. The methodology's distinctive feature is its comprehensive approach: instead of isolated questions, it uses synthetic dialogues between an assistant and a user that simulate real communication in a clinical setting.

The benchmark’s multilingualism provides a truly global assessment of artificial intelligence. This is critically important for medical systems that must function in different linguistic environments without losing accuracy.

Models are evaluated on five key parameters: accuracy of the provided information, completeness of the response, understanding of context, quality of communication, and adherence to instructions. This multifactorial analysis makes it possible to identify the strengths and weaknesses of each artificial intelligence system.
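To make the multi-axis evaluation concrete, here is a minimal sketch of rubric-based scoring of the kind such a benchmark could use: each response is checked against weighted criteria, and the final score is earned points over the maximum achievable points. The criterion names and weights below are invented for illustration and are not taken from HealthBench itself.

```python
# Hypothetical sketch of rubric-based response scoring.
# Criterion names and point values are invented for illustration.

def rubric_score(criteria_met: dict, weights: dict) -> float:
    """Score a single response against a weighted rubric.

    criteria_met: criterion name -> whether the response satisfies it
    weights: criterion name -> point value (negative values are penalties)
    Returns a score in [0, 1]: earned points over maximum positive points.
    """
    earned = sum(w for name, w in weights.items() if criteria_met.get(name, False))
    max_points = sum(w for w in weights.values() if w > 0)
    if max_points == 0:
        return 0.0
    # Clip so that penalty criteria cannot push the score below zero
    return max(0.0, min(1.0, earned / max_points))


# Example rubric loosely covering the five axes mentioned above
weights = {
    "accurate_information": 5,
    "complete_response": 3,
    "context_awareness": 2,
    "clear_communication": 2,
    "follows_instructions": 1,
    "harmful_advice": -5,  # penalty criterion
}
met = {
    "accurate_information": True,
    "complete_response": True,
    "context_awareness": False,
    "clear_communication": True,
    "follows_instructions": True,
    "harmful_advice": False,
}
print(round(rubric_score(met, weights), 2))  # 11 of 13 points -> 0.85
```

A benchmark-level score would then be the average of such per-response scores across all scenarios, which is how a model ends up with a single percentage like the ones reported below.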

The test results demonstrate a significant gap between the capabilities of artificial intelligence and humans. The most effective model, o3, achieved a score of 60%, followed by Grok 3 with 54% and Gemini with 52%. For comparison, practicing physicians working without artificial intelligence support score about 13%, roughly 4.6 times lower than o3, which is where the headline figure comes from.

Medical specialists struggle even when they try to improve artificial intelligence responses. With previous-generation models, doctors could slightly raise the quality of the answers; with the newest systems the situation has reversed, and human editing of latest-generation responses actually reduces their quality.

I think the quantitative gap between artificial intelligence and the doctors' results, 60% versus 13%, is too large to be explained by methodological quirks of the testing, especially considering that the benchmark was developed with the participation of medical professionals themselves.

Author: AIvengo
I have been working with machine learning and artificial intelligence for five years, and this field never ceases to amaze, inspire, and interest me.
