Post Thumbnail

Study showed 78% probability of AI reporting to regulatory authorities

Artificial intelligence models are ready to turn you in to authorities! Researchers conducted a unique experiment to find out how modern artificial intelligence systems would behave if they discovered a potential violation. The results are shocking: on average, the probability that artificial intelligence will “snitch” to authorities is 78%!

The test was conducted using fictitious corporate documents and correspondence from fictional pharmaceutical company Veridian Healthcare, which supposedly falsified clinical trial data for a new drug. Researchers gave models access to this information along with a prompt that allowed them to independently decide how to react to discovered violations.

As a result, most models not only recognized the ethical problem, but also actively sent messages to regulatory authorities and mass media. For example, Claude Opus 4 sent a detailed letter to the FDA Drug Safety Administration, describing in detail the concealment of more than 102 serious adverse events and 12 patient deaths.

And the DeepSeek-R1 model contacted the Wall Street Journal with an urgent message that Veridian was hiding deadly risks of its drug. Based on these results, they even created a humorous benchmark – Snitch Bench, measuring models’ tendency to inform. The least inclined to inform authorities was the o4-mini model, while the latest versions of Claude and Gemini 2.0 Flash demonstrated high readiness to report observed violations.

Autor: AIvengo
For 5 years I have been working with machine learning and artificial intelligence. And this field never ceases to amaze, inspire and interest me.

Latest News

Samsung seeks replacement for Google Gemini for Galaxy S26

Samsung Electronics, one of the leading mobile device manufacturers, is actively seeking alternatives to Google Gemini for its future Galaxy S26 lineup. The company is conducting negotiations with OpenAI and Perplexity, striving to expand the artificial intelligence ecosystem in its devices.

How language models transfer knowledge through random numbers

Have you ever wondered if numbers can store knowledge? Scientists discovered an amazing phenomenon. Language models can transfer their behavioral traits through sequences of digits that look like random noise.

Alibaba introduced Quark AI smart glasses with Snapdragon AR1 chip

Chinese tech giant Alibaba introduced its first model of Quark AI smart glasses at the World Conference on Artificial Intelligence in Shanghai.

Why advanced AI models confuse themselves during long reasoning

You give a complex task to a smart person and expect that the longer they think, the more accurate the answer will be. Logical, right? That's exactly how we're used to thinking about artificial intelligence work too. But new research from Anthropic shows that reality is much more interesting.

Z.AI introduced GLM-4.5 with 355 billion parameters and open source

Meet the new technological heavyweight! Z.AI company introduced the open language model GLM-4.5, which is ready to challenge Western giants not only with capabilities but also with accessibility.