
Language models degrade from internet garbage, researchers found

Researchers have made a disturbing discovery: large language models can degrade when constantly fed internet garbage. The phenomenon has been dubbed Brain Rot. And it sounds just as creepy as it looks in practice.

The essence of the problem is simple: models are continually retrained on low-quality, viral texts from the internet. As a result, they develop cognitive decline: a persistent drop in reasoning ability, long-context handling, and safe behavior. AI literally gets dumber from a bad diet.

The main symptom, which the researchers call thought-skipping, is the absence of thinking: the model stops reasoning step by step and starts giving superficial answers. But that's not all. In some cases the system acquires so-called dark personality traits: narcissism, aggression, and a low inclination to cooperate. Yes, you read that right: AI turns toxic from bad data.

And now the most unpleasant part. Even strong correction methods only partially eliminate the consequences. You can't simply cure a model once it has picked up garbage; some of the damage remains.

The researchers' conclusion is unambiguous: training data selection is becoming a key safety factor in AI development. Simply put, if you feed a model shit from the internet, it will behave accordingly. And fixing it afterwards is nearly impossible. So much for smart technology: it turns out to be susceptible to degradation from low-quality content. Just like people.
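For the curious, here is a minimal sketch of what training-data selection can look like in practice. Everything in it (the quality_score heuristics, the CLICKBAIT_MARKERS list, the thresholds) is an assumption for illustration; the article does not describe the researchers' actual curation pipeline.

```python
# A minimal, illustrative sketch of heuristic pretraining-data filtering.
# Thresholds, markers, and scoring rules are assumptions for demonstration;
# they are NOT the filtering pipeline from the study described above.

CLICKBAIT_MARKERS = {
    "you won't believe",
    "shocking",
    "goes viral",
}

def quality_score(text: str) -> float:
    """Score a document with crude quality heuristics (higher is better)."""
    words = text.split()
    if len(words) < 20:  # too short to carry real substance
        return 0.0
    score = 1.0
    lowered = text.lower()
    for marker in CLICKBAIT_MARKERS:  # penalize engagement bait
        if marker in lowered:
            score -= 0.4
    letter_ratio = sum(ch.isalpha() for ch in text) / len(text)
    if letter_ratio < 0.6:  # penalize emoji/symbol soup
        score -= 0.3
    return score

def filter_corpus(docs: list[str], threshold: float = 0.5) -> list[str]:
    """Keep only documents that clear the quality threshold."""
    return [doc for doc in docs if quality_score(doc) >= threshold]

if __name__ == "__main__":
    corpus = [
        "You won't believe what happened next!!! Shocking clip goes viral "
        "and everyone is talking about it right now, click here, wow, omg, "
        "this is insane, watch before it gets deleted, seriously just look",
        "Continual pretraining on carefully curated text helps a model "
        "keep its ability to reason step by step, handle long contexts, "
        "and behave safely, which is exactly the point of data selection.",
    ]
    print(filter_corpus(corpus))  # only the second document survives
```

Real pipelines typically layer model-based quality classifiers and deduplication on top of heuristics like these, but the principle is the same: score documents for quality and drop the junk before it ever reaches a training run.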

Author: AIvengo
I have been working with machine learning and artificial intelligence for 5 years. And this field never ceases to amaze, inspire, and intrigue me.