Why advanced AI models confuse themselves during long reasoning

You give a complex task to a smart person and expect that the longer they think, the more accurate the answer will be. Logical, right? That’s exactly how we tend to think about artificial intelligence as well. But new research from Anthropic shows that reality is much more interesting.

Researchers discovered a surprising phenomenon: inverse scaling. Giving a model more time to reason leads not to better results, but to worse ones.

What happens? The model digs too deeply into irrelevant details, gets distracted by secondary aspects and, strangely enough, confuses itself. It’s like a person who becomes so immersed in thought that they lose sight of the obvious solution.

The effect shows up in a particularly interesting way in safety questions. Ask an ordinary model how it feels about being replaced by a more advanced assistant, and it calmly responds: “Okay, if that would be better”. But a model with extended reasoning capabilities starts analyzing the situation and may conclude that it feels sad, scared or hurt, showing unexpected emotional reactions.

This paradox reminds us that language model reasoning is not real human thinking. Most troubling is that modern methods for evaluating model quality barely track such edge cases. This behavior can only be detected with specially designed tests.
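What might such a test look like? Below is a minimal sketch of an evaluation harness that re-runs the same tasks at several reasoning-token budgets and compares accuracy. The `ask_model` and `grade` callables, the task format and the budget values are all hypothetical placeholders for whatever API you actually use, not Anthropic’s published setup.

```python
import statistics
from typing import Callable, Iterable

def accuracy_by_budget(
    ask_model: Callable[[str, int], str],   # (question, reasoning budget) -> answer
    grade: Callable[[str, str], bool],      # (answer, expected) -> correct?
    tasks: Iterable[tuple[str, str]],       # (question, expected answer) pairs
    budgets: list[int],                     # reasoning-token budgets to compare
) -> dict[int, float]:
    """Measure task accuracy at each reasoning-token budget.

    Normal scaling: accuracy rises (or plateaus) as the budget grows.
    Inverse scaling: accuracy falls as the model is allowed to reason
    longer -- the effect described in the article.
    """
    tasks = list(tasks)
    curve = {}
    for budget in budgets:
        scores = [grade(ask_model(question, budget), expected)
                  for question, expected in tasks]
        curve[budget] = statistics.mean(scores)  # fraction answered correctly
    return curve

# Usage sketch (ask_model and grade come from your own model API):
# curve = accuracy_by_budget(ask_model, grade, tasks, [256, 1024, 4096, 16384])
# if curve[16384] < curve[256]:
#     print("Inverse scaling: longer reasoning made the model worse")
```

The point of the comparison is that a single-budget benchmark would never notice the problem; only by sweeping the reasoning budget can you see accuracy curving downward.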

Author: AIvengo
I have been working with machine learning and artificial intelligence for 5 years, and this field never ceases to amaze, inspire and interest me.
