
OpenAI models demonstrate superiority in mathematical tasks

For the first time, the models' capabilities were tested at scale on fresh mathematical olympiad problems, with the first part of the prestigious American Invitational Mathematics Examination (AIME) serving as the arena for the “competition”.

The test set included 15 problems, each presented to the AI models four times to obtain reliable results. The evaluation used a color scheme: green meant a correct solution in all four attempts, yellow – one to three successful attempts, red – no correct solutions at all.
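The color scheme and the reported percentages are easy to express in code. The sketch below is a hypothetical reconstruction, assuming accuracy is counted per attempt (correct runs divided by the 60 total runs of 15 problems × 4 attempts); the function names are my own, not the evaluators'.

```python
# Hypothetical reconstruction of the color-coded scoring described above.
# The per-attempt accuracy formula is an assumption, not the evaluators' code.

def color_for(successes: int, attempts: int = 4) -> str:
    """Map the number of correct attempts on one problem to a color."""
    if successes == attempts:
        return "green"   # solved in all four attempts
    if successes > 0:
        return "yellow"  # solved in one to three attempts
    return "red"         # never solved

def accuracy(successes_per_problem: list[int], attempts: int = 4) -> float:
    """Per-attempt accuracy in percent: correct runs / total runs."""
    total_runs = attempts * len(successes_per_problem)
    return 100 * sum(successes_per_problem) / total_runs

# Example: a model that solves 47 of the 60 runs scores 78.33%,
# matching the figure reported for o3-mini under this assumption.
print(round(accuracy([4] * 11 + [3] + [0] * 3), 2))
```

Under this reading, 76.67% corresponds to 46/60 correct runs and 65% to 39/60, which is consistent with every percentage quoted below.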

The results were unexpected. OpenAI models demonstrated a significant lead over competitors, including the acclaimed Chinese model DeepSeek R1. OpenAI's o3-mini model was particularly impressive, achieving 78.33% accuracy, although this is lower than the 87.3% previously reported on last year's tests.

Interestingly, OpenAI's o1 model even improved on its performance from last year, raising its accuracy from 74.4% to 76.67%. Meanwhile, DeepSeek R1 showed a significant drop in performance – from last year's 79.8% to 65% on the new problems. Even more dramatic was the decline of the distilled version R1-Qwen-14b – from 69.7% to 50%.

Special attention goes to the Claude 3.6 Sonnet model, which unexpectedly posted extremely low results, solving practically no problems “out of the box”.

It's important to note that at least three of the test problems were later found to be publicly available on the internet, which could have compromised the validity of the experiment. Nevertheless, the results offer interesting food for thought about different AI models' ability to generalize and their resistance to overfitting.

Author: AIvengo
I have been working with machine learning and artificial intelligence for five years, and this field never ceases to amaze, inspire, and interest me.

Latest News

Grok 4 scored 57% on "Humanity's Last Exam" versus 22% for Gemini 2.5 Pro

Elon Musk presented a new version of his neural network, Grok 4. The top-tier version, Grok 4 Heavy, can run multiple computations simultaneously and scores 57% on the most difficult test, "Humanity's Last Exam". For comparison, the previous leader, Gemini 2.5 Pro, scored only 22%.

Researchers found AI vulnerability through facts about cats

I was mildly surprised by this news. Did you know that a simple mention of cats can confuse even the most advanced artificial intelligence models? Scientists have discovered a surprising vulnerability in neural networks' reasoning processes.

US IT companies fired 94,000 employees in six months due to AI

In the first half of 2025, American IT companies fired more than 94,000 technical specialists. This is not just cost-cutting. This is structural change under the influence of artificial intelligence.

OpenAI hired the first psychiatrist in the AI industry to study ChatGPT's impact on the psyche

OpenAI announced that it has hired a professional clinical psychiatrist with experience in forensic psychiatry to research the impact of its artificial intelligence products on users' mental health.

Historic milestone: Amazon's millionth robot delivered to Japan

Amazon has reached a historic milestone: after 13 years of deploying robots in its warehouse facilities, the company announced it has passed the mark of 1 million robotic devices. The millionth robot was recently delivered to an Amazon warehouse in Japan.