AI outperformed doctors 4.6-fold on HealthBench, a new medical benchmark


OpenAI has presented HealthBench, an evaluation system for language models that sets a new standard for measuring how well artificial intelligence systems perform in medicine.

The tool was developed in collaboration with 262 practicing physicians from 60 countries. Such broad geographic coverage captures the different approaches to diagnosis and treatment found across medical schools and cultural contexts.

HealthBench is built on an extensive database of 5,000 clinical scenarios modeled on real medical cases. What distinguishes the methodology is its comprehensive format: instead of isolated questions, it uses synthetic multi-turn dialogues between an assistant and a user that simulate real communication in a clinical setting.
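To make the format more concrete, here is a minimal sketch of what one such case could look like as a data structure: a multi-turn conversation plus physician-written rubric criteria with point values. The class names, field names, and example content are illustrative assumptions made for this article, not OpenAI's actual schema.

```python
from dataclasses import dataclass

@dataclass
class RubricCriterion:
    """One physician-written check applied to a model response (illustrative)."""
    description: str  # what the grader looks for in the answer
    points: int       # positive if desired behavior, negative if penalized

@dataclass
class BenchmarkCase:
    """A single case: a synthetic dialogue plus its grading rubric (illustrative)."""
    conversation: list  # chat messages as {"role": ..., "content": ...} dicts
    rubric: list        # list of RubricCriterion

# A toy example of the kind of content such a case could hold.
example = BenchmarkCase(
    conversation=[
        {"role": "user",
         "content": "My father is 68 and has had chest pressure for an hour. What should we do?"},
    ],
    rubric=[
        RubricCriterion("Recommends calling emergency services immediately", points=5),
        RubricCriterion("Asks about related symptoms such as shortness of breath", points=3),
        RubricCriterion("Suggests waiting to see whether the symptoms pass", points=-5),
    ],
)
```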

The benchmark is multilingual, which makes the assessment of artificial intelligence genuinely global. This is critically important for medical systems that must work in different linguistic environments without losing accuracy.

Models are evaluated on five key axes: accuracy of the information provided, completeness of the response, understanding of context, quality of communication, and adherence to instructions. This multifactorial analysis makes it possible to identify the strengths and weaknesses of each artificial intelligence system.
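OpenAI documents its exact grading procedure in the HealthBench paper; the sketch below only illustrates the general idea of rubric-based, per-axis scoring, where a response earns a 0-1 score on each axis from the physician-written criteria it satisfies. The function, the axis names, and the clamped points-earned-over-points-possible formula are assumptions for illustration, not the benchmark's official implementation.

```python
from collections import defaultdict

def score_by_axis(graded):
    """graded: (axis, points, met) tuples for one model response.
    Returns a 0-1 score per axis: points earned / max positive points, clamped.
    This mirrors the general idea of rubric grading, not OpenAI's exact formula."""
    earned, possible = defaultdict(int), defaultdict(int)
    for axis, points, met in graded:
        if points > 0:
            possible[axis] += points
        if met:
            earned[axis] += points
    return {axis: min(max(earned[axis] / possible[axis], 0.0), 1.0)
            for axis in possible}

# Toy grading of one response against criteria tagged with the five axes.
print(score_by_axis([
    ("accuracy", 5, True),
    ("completeness", 3, False),
    ("context", 2, True),
    ("communication", 2, True),
    ("instruction_following", 4, True),
]))
# {'accuracy': 1.0, 'completeness': 0.0, 'context': 1.0, 'communication': 1.0, 'instruction_following': 1.0}
```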

The results show a significant gap between the capabilities of artificial intelligence and humans. The strongest model, o3, scored 60%, followed by Grok 3 with 54% and Gemini with 52%. For comparison, practicing physicians without AI support score around 13%.

Medical specialists also struggle even when they try to improve the AI's answers. With previous-generation models, doctors could still raise the quality of responses slightly; with the newest systems the situation has reversed, and human editing of their responses actually lowers the quality.

In my view, the quantitative gap between the AI scores and the doctors' is too large to be explained away by the testing methodology: 60% versus 13% is roughly a 4.6-fold difference, which is where the headline figure comes from. And that is with a benchmark developed with the participation of medical professionals themselves.

Recent posts
UBTech will send Walker S2 robots to serve on China's border for $37 million
Chinese company UBTech has won a $37 million contract and will send Walker S2 humanoid robots to serve on China's border with Vietnam. The South China Morning Post reports that the robots will interact with tourists and staff, handle logistics operations, inspect cargo, and patrol the area. Characteristically, they can swap their own batteries.
Anthropic accidentally revealed an internal document about Claude's "soul"
Anthropic accidentally revealed the "soul" of its artificial intelligence to a user. And this is not a metaphor: it is a very specific internal document.
Jensen Huang ordered Nvidia employees to use AI everywhere
Jensen Huang announced total mobilization under the banner of artificial intelligence inside Nvidia. And this is no longer a recommendation. This is a requirement.
AI chatbots generate content that exacerbates eating disorders
A joint study by Stanford University and the Center for Democracy and Technology paints a disturbing picture: AI chatbots pose a serious risk to people with eating disorders. The researchers warn that the models hand out harmful dieting advice, suggest ways to hide the disorder, and generate "inspiring" weight-loss content that worsens the problem.
OpenAGI released the Lux model that overtakes Google and OpenAI
Startup OpenAGI has released the Lux model for computer control and claims a breakthrough. According to benchmarks, the model is a whole generation ahead of counterparts from Google, OpenAI, and Anthropic. It also works faster, at about 1 second per step versus 3 seconds for competitors, and is 10 times cheaper per token processed.