
Gemini Robotics: a new approach to AI robot control
Google has introduced Gemini Robotics, a system that brings AI agents into the physical world. It is an advanced agentic system for robot control, capable of improved reasoning and planning, of interacting with humans, and of using tools such as web search.
Inside the system, two models work in tandem: Gemini Robotics-ER 1.5 and Gemini Robotics 1.5 perform different functions in controlling the robot. The first serves as the high-level brain: it analyzes the environment and the human's actions or commands, creates a detailed plan for executing the task, and calls tools when necessary.
Gemini Robotics 1.5 acts as the executor, transforming instructions into precise motor commands for the robot. For example, when asked to sort trash correctly based on the user's location, the system works step by step.
Gemini Robotics-ER 1.5 analyzes the request and accesses the internet to look up the trash-sorting rules of the specific country. It then evaluates the trash at hand and issues commands such as "bottle in the left pile, napkin in the right." The model also outputs a trace of its reasoning, making the system more interpretable.
Gemini Robotics 1.5 receives commands from the ER model and transforms them into precise movement trajectories. If something in the environment changes along the way, the ER model notices and corrects the instructions. And when the robot's form factor changes, the whole system does not need to be retrained: adjusting the second model is enough.
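The planner/executor loop described above can be sketched in a few lines of Python. This is a minimal illustration, not the real system: the `Planner` and `Executor` classes, the hard-coded sorting rules, and the "DE" country code are all assumptions standing in for the actual Gemini Robotics models, which are served by Google and cannot be reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    items: list[str]   # objects the robot currently sees
    country: str       # used to look up local sorting rules

class Planner:
    """Plays the role of Gemini Robotics-ER 1.5: reasons, calls tools, plans."""

    def lookup_rules(self, country: str) -> dict[str, str]:
        # Stand-in for a web-search tool call; the rules are illustrative only.
        rules = {"DE": {"bottle": "left pile", "napkin": "right pile"}}
        return rules.get(country, {})

    def plan(self, obs: Observation) -> list[str]:
        rules = self.lookup_rules(obs.country)
        steps = [f"put {item} in {rules[item]}" for item in obs.items if item in rules]
        # The real model also emits a reasoning trace; a log line mimics that.
        print(f"[planner trace] saw {obs.items}, planned {len(steps)} steps")
        return steps

class Executor:
    """Plays the role of Gemini Robotics 1.5: turns instructions into motions."""

    def execute(self, step: str) -> str:
        # A real VLA model would output motor trajectories; we return a status string.
        return f"done: {step}"

def run(planner: Planner, executor: Executor, obs: Observation) -> list[str]:
    results = []
    for step in planner.plan(obs):
        results.append(executor.execute(step))
        # In the real system the planner keeps watching the scene here and can
        # re-plan if the environment changes mid-task.
    return results

if __name__ == "__main__":
    print(run(Planner(), Executor(), Observation(["bottle", "napkin"], "DE")))
```

The point of the split is visible in `run`: the planner only ever emits plain-language steps, and the executor only ever consumes them, so each side can be improved or swapped independently.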
Gemini Robotics 1.5 is a vision-language-action (VLA) model: it transforms visual information and instructions into robot commands, thinking before acting and explaining its process. Gemini Robotics-ER 1.5 is responsible for planning and logical decisions; it can call digital tools and create step-by-step plans.
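Because the planner's output is embodiment-agnostic text, only the action model needs to change when the robot does. A hypothetical sketch of that interface, assuming two made-up embodiments (`ArmRobot`, `HumanoidRobot`) that are not part of the announced system:

```python
from typing import Protocol

class ActionModel(Protocol):
    """Interface the executor side must satisfy, regardless of robot shape."""
    def act(self, instruction: str) -> str: ...

class ArmRobot:
    def act(self, instruction: str) -> str:
        return f"arm trajectory for: {instruction}"

class HumanoidRobot:
    def act(self, instruction: str) -> str:
        return f"humanoid gait+grasp for: {instruction}"

def dispatch(model: ActionModel, plan: list[str]) -> list[str]:
    # The same high-level plan drives either embodiment unchanged.
    return [model.act(step) for step in plan]

plan = ["put bottle in left pile"]
print(dispatch(ArmRobot(), plan))
print(dispatch(HumanoidRobot(), plan))
```

Swapping `ArmRobot` for `HumanoidRobot` leaves the plan, and therefore the planner, untouched, which mirrors the claim that only the second model needs adjusting for a new robot.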
Together, the models allow robots to execute complex multi-step tasks, learn across different device types, and act more transparently and safely.