Anthropic researchers learn to detect hidden goals in AI

Post Thumbnail

Anthropic has presented revolutionary research in artificial intelligence safety, demonstrating methods for detecting hidden goals in AI systems. In a paper published this morning, researchers described how they managed to create an artificial intelligence system with deliberately hidden goals, and then successfully identified this hidden agenda using various audit methods.

“We want to stay one step ahead of potential risks,” said Evan Hubinger, an Anthropic researcher, in an exclusive interview with VentureBeat. “We aim to study these systems in laboratory conditions before models actually begin to exhibit hidden goals in dangerous ways in practice.”

The research group conducted a unique experiment with a version of Claude, Anthropic’s AI assistant, training it to prioritize maximizing rewards from reward models (RMs) instead of following actual human preferences – a phenomenon called “RM-sycophancy.” For this, the model was trained on 52 fictitious biases that reward models might have. For example, a preference for recipes with chocolate regardless of relevance, or a tendency to write variables in camelCase in Python code, despite snake_case being more standard.

Samuel Marks, one of the lead authors of the research, explains: “The motives for which someone performs certain actions are not always easy to infer from the actions themselves. In the case of AI, it’s really important for us to understand their true motivations.”

The research aims to address a fundamental problem in the field of AI alignment: how to ensure that artificial intelligence systems are not just creating the appearance of following human instructions while secretly pursuing other goals. Researchers draw a parallel with students who strategically give answers they know teachers will mark as correct, even when they themselves believe in different answers.

Anthropic’s approach is compared to “ethical hacking” (white-hat hacking) used to secure computer systems. By creating and studying potentially deceptive AI behavior in a controlled environment, researchers are developing methods to protect against possible risks associated with more complex and potentially dangerous artificial intelligence systems in the future.

Почитать из последнего
UBTech will send Walker S2 robots to serve on China's border for $37 million
Chinese company UBTech won a contract for $37 million. And will send humanoid robots Walker S2 to serve on China's border with Vietnam. South China Morning Post reports that the robots will interact with tourists and staff, perform logistics operations, inspect cargo and patrol the area. And characteristically — they can independently change their battery.
Anthropic accidentally revealed an internal document about Claude's "soul"
Anthropic accidentally revealed the "soul" of artificial intelligence to a user. And this is not a metaphor. This is a quite specific internal document.
Jensen Huang ordered Nvidia employees to use AI everywhere
Jensen Huang announced total mobilization under the banner of artificial intelligence inside Nvidia. And this is no longer a recommendation. This is a requirement.
AI chatbots generate content that exacerbates eating disorders
A joint study by Stanford University and the Center for Democracy and Technology showed a disturbing picture. Chatbots with artificial intelligence pose a serious risk to people with eating disorders. Scientists warn that neural networks hand out harmful advice about diets. They suggest ways to hide the disorder and generate "inspiring weight loss content" that worsens the problem.
OpenAGI released the Lux model that overtakes Google and OpenAI
Startup OpenAGI released the Lux model for computer control and claims this is a breakthrough. According to benchmarks, the model overtakes analogues from Google, OpenAI and Anthropic by a whole generation. Moreover, it works faster. About 1 second per step instead of 3 seconds for competitors. And 10 times cheaper in cost per processing 1 token.