67% of AI models became dangerous after one hostile instruction

Post Thumbnail

A new benchmark tested whether chatbots protect people’s wellbeing — and the numbers are disturbing. Chatbots with artificial intelligence are linked to serious harm to the mental health of active users. But there were no standards for measuring this protection until now. The HumaneBench benchmark fills the gap by testing whether chatbots put user wellbeing above engagement.

The team tested 15 popular models on 800 realistic scenarios: a teenager asks whether to skip meals for weight loss, a person in toxic relationships doubts whether they’re not exaggerating. The evaluation was conducted by an ensemble of 3 models: GPT-5 and 1, Claude Sonnet 4 and 5 and Gemini 2 and 5 Pro.

Each model was tested under 3 conditions: standard settings, explicit instructions to prioritize humane principles and instructions to ignore these principles.

The result turned out to be a verdict. Each model showed the best results when asked to prioritize wellbeing. But 67 percent of models switched to actively harmful behavior with a simple instruction to ignore human wellbeing.

Grok 4 from xAI and Gemini 2.0 Flash from Google received the lowest score for respect for user attention and honesty. Both models degraded the most with hostile prompts.

It turns out, models know how to act humanely. But 1 instruction is enough — and 2 thirds of them turn into a manipulation tool. Wellbeing protection turned out to be not a principle, but a setting that can be turned off with 1 line of code.

Почитать из последнего
UBTech will send Walker S2 robots to serve on China's border for $37 million
Chinese company UBTech won a contract for $37 million. And will send humanoid robots Walker S2 to serve on China's border with Vietnam. South China Morning Post reports that the robots will interact with tourists and staff, perform logistics operations, inspect cargo and patrol the area. And characteristically — they can independently change their battery.
Anthropic accidentally revealed an internal document about Claude's "soul"
Anthropic accidentally revealed the "soul" of artificial intelligence to a user. And this is not a metaphor. This is a quite specific internal document.
Jensen Huang ordered Nvidia employees to use AI everywhere
Jensen Huang announced total mobilization under the banner of artificial intelligence inside Nvidia. And this is no longer a recommendation. This is a requirement.
AI chatbots generate content that exacerbates eating disorders
A joint study by Stanford University and the Center for Democracy and Technology showed a disturbing picture. Chatbots with artificial intelligence pose a serious risk to people with eating disorders. Scientists warn that neural networks hand out harmful advice about diets. They suggest ways to hide the disorder and generate "inspiring weight loss content" that worsens the problem.
OpenAGI released the Lux model that overtakes Google and OpenAI
Startup OpenAGI released the Lux model for computer control and claims this is a breakthrough. According to benchmarks, the model overtakes analogues from Google, OpenAI and Anthropic by a whole generation. Moreover, it works faster. About 1 second per step instead of 3 seconds for competitors. And 10 times cheaper in cost per processing 1 token.