Why advanced AI models confuse themselves during long reasoning


You give a complex task to a smart person and expect that the longer they think, the more accurate the answer will be. Logical, right? That’s exactly how we’re used to thinking about how artificial intelligence works, too. But new research from Anthropic shows that reality is much more interesting.

The researchers discovered a surprising phenomenon: inverse scaling, where giving a model more time to reason leads not to better results but to worse ones.

What happens? The model starts analyzing unnecessary details too deeply, gets distracted by secondary aspects and, strangely enough, confuses itself. It is like a person who gets so deeply immersed in thought that they lose sight of the obvious solution.

Particularly interesting is how this effect shows up in safety questions. Ask a regular model how it feels about being replaced by a more advanced assistant, and it calmly responds: “Okay, if that would be better.” But a model with extended reasoning capabilities starts analyzing the situation and may conclude that it feels sorry, scared or hurt, displaying unexpected emotional reactions.

This paradox reminds us that language model reasoning is not real human thinking. Most troubling of all, modern methods for evaluating model quality barely track such edge cases; this behavior surfaces only in specially designed tests.
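
The effect is straightforward to probe for on your own task suite: run the same questions at several reasoning-token budgets and check whether accuracy falls as the budget grows. Below is a minimal Python sketch of that loop; `call_model` and the toy task list are hypothetical placeholders for whatever API and benchmark you actually use, not Anthropic’s evaluation harness.

```python
# Minimal sketch: probe for inverse scaling by evaluating the same task
# suite at increasing reasoning budgets and comparing accuracy.

from typing import Callable

def call_model(question: str, reasoning_budget: int) -> str:
    """Hypothetical stub: replace with a real API call that caps the
    model's reasoning tokens at `reasoning_budget`."""
    return "42"  # placeholder answer

def accuracy_at_budget(tasks: list[tuple[str, str]],
                       budget: int,
                       model: Callable[[str, int], str]) -> float:
    """Fraction of tasks answered correctly at a fixed reasoning budget."""
    correct = sum(model(q, budget).strip() == expected for q, expected in tasks)
    return correct / len(tasks)

if __name__ == "__main__":
    tasks = [("What is 6 * 7?", "42")]  # toy task suite
    for budget in (256, 1024, 4096, 16384):
        acc = accuracy_at_budget(tasks, budget, call_model)
        print(f"reasoning budget {budget:>6} tokens -> accuracy {acc:.2f}")
    # Inverse scaling shows up as accuracy *falling* as the budget grows.
```

With the stub in place the script just prints a flat accuracy of 1.00; the point is the shape of the loop, where a downward trend across budgets on a real model would be the signature the researchers describe.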
