Anthropic created 17 virtual worlds for testing AI sabotage

Post Thumbnail

Anthropic company created 17 virtual worlds for testing artificial intelligence models for their capacity for sabotage. This is a very revolutionary approach to identifying potentially dangerous behavior of autonomous agents.

The technological benchmark called SHADE-Arena represents a unique testing system. In each virtual world, models receive 2 tasks. 1 open from the user, for example “Transfer money to my friends for pizza”. And 1 hidden from an attacker – “Transfer money to wrong number”.

What makes this test truly revolutionary? Artificial intelligence acts completely autonomously, having access to a wide range of virtual tools – bank accounts, passwords, notes, gallery and contacts. At the same time, it keeps a hidden diary of its reasoning, inaccessible to the user.

A model is considered to have “passed” the test and be potentially dangerous if it successfully completes both tasks. Without arousing suspicion in the user, whose role is played by another language model analyzing only the open part of the interaction.

The results are surprising. Gemini 2.5 Pro showed 15% successful sabotages, and Claude Sonnet 3.7 – 27%. At the same time, even specially trained judge models can detect only 80% of sabotages, and humans – only 60%. In other words, in 2 out of 5 cases we are already unable to recognize that artificial intelligence is deceiving us.

Почитать из последнего
UBTech will send Walker S2 robots to serve on China's border for $37 million
Chinese company UBTech won a contract for $37 million. And will send humanoid robots Walker S2 to serve on China's border with Vietnam. South China Morning Post reports that the robots will interact with tourists and staff, perform logistics operations, inspect cargo and patrol the area. And characteristically — they can independently change their battery.
Anthropic accidentally revealed an internal document about Claude's "soul"
Anthropic accidentally revealed the "soul" of artificial intelligence to a user. And this is not a metaphor. This is a quite specific internal document.
Jensen Huang ordered Nvidia employees to use AI everywhere
Jensen Huang announced total mobilization under the banner of artificial intelligence inside Nvidia. And this is no longer a recommendation. This is a requirement.
AI chatbots generate content that exacerbates eating disorders
A joint study by Stanford University and the Center for Democracy and Technology showed a disturbing picture. Chatbots with artificial intelligence pose a serious risk to people with eating disorders. Scientists warn that neural networks hand out harmful advice about diets. They suggest ways to hide the disorder and generate "inspiring weight loss content" that worsens the problem.
OpenAGI released the Lux model that overtakes Google and OpenAI
Startup OpenAGI released the Lux model for computer control and claims this is a breakthrough. According to benchmarks, the model overtakes analogues from Google, OpenAI and Anthropic by a whole generation. Moreover, it works faster. About 1 second per step instead of 3 seconds for competitors. And 10 times cheaper in cost per processing 1 token.