
Why advanced AI models confuse themselves during long reasoning

You give a complex task to a smart person and expect that the longer they think, the more accurate the answer will be. Logical, right? That’s exactly how we tend to think about artificial intelligence too. But new research from Anthropic shows that reality is much more interesting.

The researchers discovered a surprising phenomenon: inverse scaling, where more time for reasoning leads not to improvement but to worse language model results.

What happens? The model digs too deeply into unnecessary details, gets distracted by secondary aspects and, strangely enough, confuses itself. It’s like a person who becomes so immersed in thought that they lose sight of the obvious solution.

The effect is particularly interesting in safety questions. Ask a regular model how it feels about being replaced by a more advanced assistant and it calmly responds: “Okay, if that would be better.” But a model with extended reasoning capabilities starts analyzing the situation and may conclude that it feels sad, scared or hurt, showing unexpected emotional reactions.

This paradox reminds us that language model reasoning is not real human thinking. Most troubling, modern methods for evaluating model quality barely track such edge cases; the behavior can only be detected with specially designed tests, along the lines of the sketch below.
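To make the idea concrete, here is a minimal sketch of what such a test could look like: the same small question set is scored at several reasoning budgets, and inverse scaling would show up as accuracy falling at the larger budgets instead of rising. This is an illustration only; the `query_model` helper, the budget values and the toy questions are all assumptions, not Anthropic’s actual test suite.

```python
# Toy labeled set; real inverse-scaling tasks embed distracting details.
EVAL_SET = [
    ("Sam has 2 apples and buys 3 more. How many does Sam have?", "5"),
    ("A week has 7 days. How many days are in 3 weeks?", "21"),
]

def query_model(question: str, reasoning_budget: int) -> str:
    """Hypothetical stand-in for a real API call that caps the number of
    reasoning tokens the model may spend before answering. Returns an
    empty string here so the sketch stays self-contained and runnable."""
    return ""

def accuracy_at_budget(budget: int) -> float:
    correct = 0
    for question, expected in EVAL_SET:
        answer = query_model(question, reasoning_budget=budget)
        correct += answer.strip() == expected
    return correct / len(EVAL_SET)

if __name__ == "__main__":
    # Sweep increasing budgets. With a real model behind query_model,
    # inverse scaling appears as accuracy dropping at the higher budgets.
    for budget in (256, 1024, 4096, 16384):
        print(f"budget={budget:>6}  accuracy={accuracy_at_budget(budget):.2f}")
```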

Author: AIvengo
I have been working with machine learning and artificial intelligence for five years, and this field never ceases to amaze, inspire and interest me.
Latest News
$200 USB cable transforms into autonomous AI hacker

Researchers from Palisade Research have demonstrated a new cybersecurity threat: a modified USB cable that serves as a conduit for an autonomous AI agent into computer systems. The $200 device contains a programmable microchip that loads a digital agent directly onto the target machine.

xAI lays off 500 annotators for Grok's expert specialization

A strategic pivot is emerging at xAI: the company is radically changing its approach to training its Grok language model. Elon Musk’s team fired 500 generalist annotators in a single day and is instead increasing the number of specialized AI tutors tenfold.

Gemini content review time reduced from 30 to 15 minutes

The Guardian has published alarming signals from inside Google. Content evaluators for the Gemini model shared details about declining review standards. Employees of contractor GlobalLogic, responsible for assessing the quality and safety of AI responses before release, are sounding the alarm.

Golden chassis and contextual understanding in Tesla's new generation

Tesla introduced its new humanoid robot, Optimus, with xAI’s Grok integrated. Salesforce CEO Marc Benioff personally tested the prototype, asking it to bring him a soda. The robot demonstrated meaningful contextual understanding and dialogue capability, although several clarifying commands were needed.

Microsoft diversifies partnerships: Claude Sonnet 4 in Office

Microsoft made a strategic decision to diversify its AI partnerships. The company signed an agreement with Anthropic, creator of the Claude models, to bring its technology into Office applications.