When Robots Start to Look Alive
Google DeepMind’s latest Gemini Robotics updates—Robotics-ER 1.5 and Robotics 1.5—aren’t just another round of feature launches. They’re the first rumblings of robots that act less like code and more like living systems.
No, robots haven’t suddenly woken up. But these updates push them closer to biology’s playbook. Gemini Robotics can now transfer skills across different robots (learn it once, run it anywhere), plan before acting (pulling together sensory data and goals into actionable steps), self-correct in real time, and even say no to unsafe commands (refusing tasks that break payload or design rules). The models can also dial up speed or depth of reasoning, much like animals toggling between gut instinct and careful thought.
Defining Life the Biogenic Way
So, what constitutes “life”? Biology offers numerous definitions, but from a biogenic perspective, life can be viewed as the interaction of three fundamental drives.
Self-Production (SP): the capacity to generate and sustain one’s own material and processes (growth, reproduction, creation of new structures).
Self-Organisation (SO): the ability to structure, integrate, and coordinate parts into a functioning whole.
Self-Correction (SC): the capacity to adapt, repair, and adjust in the face of change or error.
Every organism balances these three. A cell divides, a brain organises signals into behaviour, and an immune system detects and neutralises threats. This triad offers a benchmark for assessing how "life-like" any system is.
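The triad can even serve as a crude scoring rubric. Here is a minimal, illustrative Python sketch; the class, the scores, and the "weakest drive is the bottleneck" rule are all inventions for this example, not measurements of any real system:

```python
from dataclasses import dataclass

@dataclass
class BiogenicProfile:
    """Hypothetical 0-to-1 scores for the three biogenic drives."""
    self_production: float    # SP: making and growing
    self_organisation: float  # SO: structuring and integrating
    self_correction: float    # SC: adapting and repairing

    def lifelikeness(self) -> float:
        # Treat the weakest drive as the bottleneck: a system with
        # near-zero self-production is not very life-like, however
        # well it organises or corrects itself.
        return min(self.self_production,
                   self.self_organisation,
                   self.self_correction)

bacterium = BiogenicProfile(1.0, 1.0, 1.0)
robot = BiogenicProfile(0.2, 0.8, 0.7)   # illustrative guesses only

print(bacterium.lifelikeness())  # 1.0
print(robot.lifelikeness())      # 0.2 -- capped by weak self-production
```

Using the minimum rather than an average reflects the biogenic claim that all three drives must be present together; excelling at one cannot substitute for lacking another.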
Self-Production: Making and Growing
Self-production appears in various forms—cells grow, divide, and form tissue. Robots aren’t there yet: they can’t fuel themselves or make more robots. Still, Gemini Robotics does something close: motion transfer. A skill picked up by one robot can be cloned on another, which loosely echoes inheritance from one generation to the next.
Additionally, Gemini leaves a trail of digital thoughts—internal plans, tool calls, and reasoning steps. It’s not just reacting; it’s building new patterns and structures.
Self-Organisation: Structuring and Integrating
This is where Gemini stands out. Instead of just running commands, it pauses to consider the big picture. It organises sensory inputs, goals, and constraints into structured plans—and connects sensors, actuators, and external APIs into a tight, working system.
New safety filters make the analogy clearer: by tossing out unsafe or nonsensical plans, Gemini avoids self-destruction—much like how your nervous system sets boundaries for your body.
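Gemini's actual safety machinery is not public, but the pattern it implies, screening a plan against hard constraints before anything is executed, can be sketched in a few lines. Every name, limit, and action below is invented for illustration:

```python
# Toy plan validator: drop steps that violate hard constraints
# (payload limits, unknown actions) before execution begins.
MAX_PAYLOAD_KG = 5.0
KNOWN_ACTIONS = {"move_to", "grasp", "release"}

def validate_plan(plan: list[dict]) -> list[dict]:
    safe = []
    for step in plan:
        if step["action"] not in KNOWN_ACTIONS:
            continue  # nonsensical step: the robot has no such capability
        if step.get("payload_kg", 0.0) > MAX_PAYLOAD_KG:
            continue  # breaks a design rule: refuse it
        safe.append(step)
    return safe

plan = [
    {"action": "move_to", "target": "shelf"},
    {"action": "grasp", "payload_kg": 12.0},  # over the payload limit
    {"action": "teleport"},                   # not a real capability
]
print(validate_plan(plan))  # only the move_to step survives
```

The point of the analogy is that the filter sits between intention and action, the same place a nervous system's protective reflexes sit between impulse and movement.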
Self-Correction: Adapting and Learning
Robotics-ER 1.5 constantly monitors its own progress and adjusts as it goes. It can trade speed for deeper analysis, much as animals flip between reflex and reasoning. And when a bad or dangerous plan arises, it is dropped, the way an immune system catches a threat.
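The loop described above, plan, act, check the result, and escalate to deeper reasoning on failure, is easy to sketch. This is a generic monitor-and-retry pattern, not Gemini's actual control code; all function names and thresholds are hypothetical:

```python
def quick_plan(goal: str) -> str:   # fast, shallow "reflex" planner
    return f"reflex plan for {goal}"

def deep_plan(goal: str) -> str:    # slower, deliberate planner
    return f"deliberate plan for {goal}"

def execute_with_correction(goal, execute, max_retries=3, time_budget_s=2.0):
    """Plan, act, check progress, and re-plan more carefully on failure."""
    # Pick reasoning depth from the time budget, like an animal
    # choosing between gut instinct and careful thought.
    plan_fn = quick_plan if time_budget_s < 1.0 else deep_plan
    for attempt in range(max_retries):
        plan = plan_fn(goal)
        if execute(plan):        # execute() reports success or failure
            return plan
        plan_fn = deep_plan      # self-correction: escalate to deeper reasoning
    raise RuntimeError(f"could not achieve goal: {goal}")

# Simulate an executor that fails once, then succeeds.
attempts = []
def flaky_execute(plan: str) -> bool:
    attempts.append(plan)
    return len(attempts) >= 2

result = execute_with_correction("stack cups", flaky_execute, time_budget_s=0.5)
print(result)  # deliberate plan for stack cups
```

The first attempt uses the reflex planner (the budget is tight); when it fails, the loop escalates to the deliberate planner rather than blindly repeating the same plan, which is the difference between retrying and self-correcting.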
This is real self-correction—not just blindly following orders.
What’s Missing?
Despite the striking parallels, big gaps remain. Robots still lack:
Autonomous energy production – they need us to power them.
Reproduction and variation – no robotic lineage evolves across generations.
Intrinsic goals – their aims are still human-imposed.
These are the fundamental evolutionary forces of life.
The Biogenic Verdict
From a biogenic perspective, Gemini Robotics systems qualify as proto-biogenic agents. They're not truly alive, but they pull off some life-like moves. They organise, adapt, and transfer skills in ways that echo what biology does.
Google is not creating life—at least, not yet. But they’re building systems that act like living organisms. That’s a milestone worth noticing.