
OpenAI models prove their superiority in mathematical tasks
For the first time, large-scale testing of AI capabilities was conducted on fresh mathematical olympiad problems, with the first part of the prestigious American Invitational Mathematics Examination (AIME) serving as the “competition” venue.
The test set included 15 problems, each presented to the AI models four times to obtain reliable results. The evaluation used a color scheme: green meant the problem was solved in all four attempts, yellow meant one to three successful attempts, and red meant no correct solutions at all.
The results were unexpected. OpenAI models demonstrated significant superiority over their competitors, including the acclaimed Chinese model DeepSeek R1. Particularly impressive was OpenAI’s o3-mini, which achieved 78.33% accuracy, although this is lower than the 87.3% previously reported on last year’s problems.
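For illustration, here is a minimal Python sketch of the scoring scheme described above. Everything in it is hypothetical: the per-problem data is invented, and the per-attempt accuracy formula (correct attempts divided by the 15 × 4 = 60 total attempts) is an assumption, though it is consistent with the reported figures, since 47/60 is exactly 78.33%.

```python
ATTEMPTS_PER_PROBLEM = 4  # each problem was presented four times

def color(correct: int) -> str:
    """Map the number of correct attempts (0-4) to the article's color scheme."""
    if correct == ATTEMPTS_PER_PROBLEM:
        return "green"   # solved in all four attempts
    if correct > 0:
        return "yellow"  # one to three successful attempts
    return "red"         # no correct solutions

def accuracy(correct_per_problem: list[int]) -> float:
    """Assumed metric: total correct attempts divided by total attempts."""
    total = len(correct_per_problem) * ATTEMPTS_PER_PROBLEM
    return sum(correct_per_problem) / total

# Hypothetical per-problem results for 15 problems (not real data;
# chosen so the total, 47 of 60, reproduces the 78.33% figure above):
results = [4, 4, 4, 3, 4, 2, 4, 0, 4, 4, 1, 4, 4, 3, 2]
print([color(c) for c in results])
print(f"accuracy: {accuracy(results):.2%}")  # -> accuracy: 78.33%
```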
Interestingly, OpenAI’s o1 model even improved on last year’s performance, raising its accuracy from 74.4% to 76.67%. Meanwhile, DeepSeek R1 declined significantly, falling from last year’s 79.8% to 65% on the new problems. The drop was even more dramatic for the distilled R1-Qwen-14b version, which fell from 69.7% to 50%.
Special attention should be paid to the Claude 3.6 Sonnet model, which unexpectedly performed extremely poorly, failing to solve practically any problem “out of the box”.
It is important to note that at least three of the test problems were later found to be publicly available online, which may have compromised the integrity of the experiment. Nevertheless, the results provide interesting food for thought about different AI models’ ability to generalize and their resistance to overfitting.