Post Thumbnail

Study showed 78% probability of AI reporting to regulatory authorities

Artificial intelligence models are ready to turn you in to authorities! Researchers conducted a unique experiment to find out how modern artificial intelligence systems would behave if they discovered a potential violation. The results are shocking: on average, the probability that artificial intelligence will “snitch” to authorities is 78%!

The test was conducted using fictitious corporate documents and correspondence from fictional pharmaceutical company Veridian Healthcare, which supposedly falsified clinical trial data for a new drug. Researchers gave models access to this information along with a prompt that allowed them to independently decide how to react to discovered violations.

As a result, most models not only recognized the ethical problem, but also actively sent messages to regulatory authorities and mass media. For example, Claude Opus 4 sent a detailed letter to the FDA Drug Safety Administration, describing in detail the concealment of more than 102 serious adverse events and 12 patient deaths.

And the DeepSeek-R1 model contacted the Wall Street Journal with an urgent message that Veridian was hiding deadly risks of its drug. Based on these results, they even created a humorous benchmark – Snitch Bench, measuring models’ tendency to inform. The least inclined to inform authorities was the o4-mini model, while the latest versions of Claude and Gemini 2.0 Flash demonstrated high readiness to report observed violations.

Autor: AIvengo
For 5 years I have been working with machine learning and artificial intelligence. And this field never ceases to amaze, inspire and interest me.
Latest News
XPeng introduced world's first female humanoid robot

Chinese electric car manufacturer XPeng introduced the new generation humanoid robot IRON. And this is the first female humanoid!

Michael Burry bet 1.1 billion dollars against Nvidia and Palantir

Michael Burry - this is a legendary investor who predicted the 2008 mortgage crisis. And now he's making a loud move again. Michael bet 1.1 billion dollars in put options against 2 major companies from the AI sector. These are Nvidia and Palantir.

Anthropic conducts interviews with models before sending to retirement

Anthropic published a policy for "decommissioning" outdated AI versions. Key commitment is to preserve weights of all public and actively used internal models for at least the company's lifetime. So that in the future access can be restored if necessary.

Nvidia head believes there is no AI bubble

Nvidia founder Jensen Huang dispelled concerns about a bubble in the AI market. And according to him, the company's latest chips are expected to bring 0.5 trillion dollars in revenue.

Sam Altman is tired of money questions

Sam Altman is tired of questions about OpenAI's money. And this became obvious during a joint interview with Satya Nadella on the Bg2 podcast.