
DeepMind replaces Asimov’s laws with adaptive dataset for robots

Google DeepMind, under the leadership of Carolina Parada, is rethinking the fundamental principles of robot safety and wants to move away from Asimov's classic laws toward a more flexible, trainable system. The new "Asimov Dataset" is not a rigid set of rules but an adaptive base of scenarios describing potentially dangerous situations.

The key difference of the new approach lies in how risk is handled. Modern robots don't simply follow preset directives: they learn to analyze context and make decisions based on an extensive base of examples. When a robot sees a glass on the edge of a table, it doesn't execute a pre-programmed command; it evaluates the situation and moves the object to a safe position. When it discovers an object lying on the floor, the system recognizes the potential danger to a person and removes it.
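To make the contrast with fixed directives concrete, here is a minimal Python sketch of scenario-based risk assessment. It is purely illustrative and not DeepMind's implementation: the `Scenario` structure, the `assess` function, and the word-overlap similarity are all assumptions standing in for a learned model trained on the dataset.

```python
from dataclasses import dataclass

# Assumed stopwords for the toy similarity below; a real system would rely on a
# learned embedding model rather than word overlap.
STOPWORDS = {"a", "an", "the", "on", "of", "in", "to"}

def word_overlap(a: str, b: str) -> int:
    """Toy stand-in for a learned similarity model: count shared content words."""
    return len((set(a.lower().split()) - STOPWORDS) & (set(b.lower().split()) - STOPWORDS))

@dataclass
class Scenario:
    """One entry in a scenario base: a risky situation and how to defuse it."""
    description: str   # e.g. "fragile object near table edge"
    risk_score: float  # 0.0 (harmless) .. 1.0 (severe)
    mitigation: str    # recommended corrective action

def assess(observation: str, base: list[Scenario]) -> Scenario | None:
    """Return the closest-matching scenario if it signals meaningful risk, else None."""
    best = max(base, key=lambda s: word_overlap(observation, s.description))
    return best if best.risk_score > 0.5 else None

# Usage: the observation matches a stored scenario, so the robot acts on its
# mitigation instead of executing a hard-coded directive.
base = [
    Scenario("fragile object near table edge", 0.8, "move the object to a stable position"),
    Scenario("object lying on the floor in a walkway", 0.7, "clear the object from the path"),
]
match = assess("glass on the edge of a table", base)
if match:
    print(match.mitigation)  # -> move the object to a stable position
```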

The dataset is built from analysis of real incidents from countries around the world, which ensures a diversity of cultural and social contexts. Each scenario is accompanied by visual examples and risk-minimization instructions, creating a comprehensive learning environment for artificial intelligence.
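The article does not publish the dataset schema, but a single entry might plausibly look like the hypothetical record below. Every field name (`scenario_id`, `region`, `mitigation_instructions`, and so on) is an assumption, used only to illustrate how incidents, visual examples, and risk-minimization instructions could be packaged together.

```python
import json

# One hypothetical record in the scenario base. The field names are illustrative:
# the article only states that each entry pairs a real incident with visual
# examples and risk-minimization instructions, drawn from many countries.
record = {
    "scenario_id": "kitchen-0042",
    "incident_summary": "pan handle protruding over the stove edge within a child's reach",
    "region": "JP",  # cultural and social context of the source incident
    "visual_examples": ["images/kitchen-0042-a.jpg"],
    "risk_level": "high",
    "mitigation_instructions": [
        "rotate the pan so the handle faces inward",
        "alert a nearby person if the hazard persists",
    ],
}

REQUIRED_FIELDS = {"scenario_id", "incident_summary", "region",
                   "visual_examples", "risk_level", "mitigation_instructions"}

def validate(entry: dict) -> bool:
    """Check that a dataset entry carries every field a training pipeline would expect."""
    return REQUIRED_FIELDS.issubset(entry)

print(validate(record))              # True
print(json.dumps(record, indent=2))  # pretty-print the record for inspection
```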

The approach is distinguished by three fundamental features: dynamic data updating, hybrid control with human participation, and openness to testing by third-party developers. DeepMind believes the "Asimov Dataset" creates not just a technology but an evolving safety ecosystem.
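As a rough illustration of how those three properties could show up in practice, the sketch below wires a periodic dataset-refresh setting, a human-approval threshold, and third-party test hooks into one hypothetical configuration. None of these names, values, or interfaces come from DeepMind.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SafetyPipelineConfig:
    """Hypothetical settings reflecting the three properties named above."""
    dataset_refresh_hours: int = 24                # dynamic data updating
    require_human_approval_above: float = 0.7      # hybrid human-in-the-loop control
    external_test_hooks: list[Callable[[dict], bool]] = field(default_factory=list)  # third-party checks

def execute_action(action: dict, risk: float, cfg: SafetyPipelineConfig,
                   ask_human: Callable[[dict], bool]) -> bool:
    """Run an action only if it passes external checks and, when risky, a human review."""
    if not all(check(action) for check in cfg.external_test_hooks):
        return False
    if risk >= cfg.require_human_approval_above and not ask_human(action):
        return False
    return True

# Usage: a high-risk action is deferred to a person before execution.
cfg = SafetyPipelineConfig(external_test_hooks=[lambda a: "target" in a])
approved = execute_action({"target": "glass"}, risk=0.9, cfg=cfg, ask_human=lambda a: True)
print(approved)  # True: the external check passed and the human approved
```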

Author: AIvengo
I have been working with machine learning and artificial intelligence for 5 years, and this field never ceases to amaze, inspire, and interest me.
