- Gemini Robotics is a new model
- It focuses on the physical world and is designed to control robots
- It’s visual, interactive, and general
Google Gemini is good at many things that happen inside a screen, including generating text and images. Still, the latest model, Gemini Robotics, is a vision-language-action model that moves generative AI into the physical world, and it could substantially speed up the race to build humanoid robots.
Gemini Robotics, which Google DeepMind unveiled on Wednesday, improves Gemini’s abilities in three key areas:
- Dexterity
- Interactivity
- Generalization
Each of these three areas significantly impacts how well robots will perform in workplaces and unfamiliar environments.
Generalization allows a robot to take Gemini’s vast knowledge of the world, apply it to new situations, and accomplish tasks it has never been trained on. In one video, researchers show a pair of robot arms controlled by Gemini Robotics in front of a table-top basketball game and ask it to “slam dunk the basketball.”
Even though the robot hadn’t seen the game before, it picked up the small orange ball and stuffed it through the plastic net.
Gemini Robotics also makes robots more interactive, able to respond not only to changing spoken instructions but also to unpredictable conditions.
In another video, researchers asked the robot to put grapes in a bowl with bananas. When they slid the bowl around the table, the robot arm adjusted and still managed to place the grapes in the bowl.
Google also demonstrated the robot’s dexterous capabilities, which let it tackle tasks like playing tic-tac-toe on a wooden board, erasing a whiteboard, and folding paper into origami.
Instead of requiring hours of training on each task, the robots respond to near-constant natural-language instructions and carry out the work without task-specific guidance. It’s impressive to watch.
Naturally, adding AI to robotics is not new.
Last year, OpenAI partnered with Figure AI to develop a humanoid robot that can work out tasks based on verbal instructions. As with Gemini Robotics, Figure 01’s vision-language model works with OpenAI’s speech model to hold back-and-forth conversations about tasks and changing priorities.
In the demo, the humanoid robot stands before dishes and a drainer. Asked what it sees, it lists the items, but then the interlocutor changes tasks and asks for something to eat. Without missing a beat, the robot picks up an apple and hands it to him.
While most of what Google showed in the videos was disembodied robot arms and hands working through a wide range of physical tasks, there are grander plans. Google is partnering with Apptronik to add the new model to its Apollo humanoid robot.
Google will connect the dots with a second, more advanced vision-language model called Gemini Robotics-ER (for embodied reasoning).
Gemini Robotics-ER will enhance robots’ spatial reasoning and should help robot developers connect the model to their existing controllers.
Again, this should improve on-the-fly reasoning and make it possible for the robots to quickly figure out how to grasp and use unfamiliar objects. Google calls Gemini Robotics-ER an end-to-end solution and claims it “can perform all the steps necessary to control a robot right out of the box, including perception, state estimation, spatial understanding, planning and code generation.”
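To make that claim concrete, here is a minimal, purely illustrative sketch of what such an “out of the box” control loop could look like. None of the names below come from Google’s actual SDK; they are hypothetical placeholders for the steps the company lists: perception, state estimation, spatial understanding, planning, and code generation.

```python
# Purely hypothetical sketch of an end-to-end "perceive -> plan -> act" loop.
# None of these names come from the real Gemini Robotics-ER API; they stand
# in for the steps Google lists: perception, state estimation, spatial
# understanding, planning, and code generation.

from dataclasses import dataclass


@dataclass
class SceneState:
    """Estimated state of the workspace: objects and their 3D positions."""
    objects: dict[str, tuple[float, float, float]]


def perceive(camera_frame: bytes) -> SceneState:
    # Placeholder perception + state estimation: a real vision-language-action
    # model would detect objects and estimate their poses from the image.
    return SceneState(objects={"apple": (0.42, -0.10, 0.05),
                               "bowl": (0.55, 0.20, 0.05)})


def plan(instruction: str, state: SceneState) -> list[str]:
    # Placeholder planning / code generation: a real model would turn the
    # natural-language instruction plus the scene into robot API calls.
    target = "apple" if "apple" in instruction else next(iter(state.objects))
    destination = "bowl"
    return [f"move_to{state.objects[target]}",
            f"grasp('{target}')",
            f"move_to{state.objects[destination]}",
            f"release('{target}')"]


def execute(steps: list[str]) -> None:
    # Placeholder controller bridge: in practice these calls would be handed
    # off to an existing low-level robot controller.
    for step in steps:
        print("executing:", step)


if __name__ == "__main__":
    frame = b"...camera bytes..."                      # stand-in for a live camera feed
    state = perceive(frame)                            # perception + state estimation
    steps = plan("put the apple in the bowl", state)   # spatial reasoning + planning
    execute(steps)                                     # hand off to the controller
```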
Google is providing the Gemini Robotics-ER model to several business- and research-focused robotics firms, including Boston Dynamics (makers of Atlas), Agile Robots, and Agility Robotics.
All in all, it’s a potential boon for humanoid robotics developers. However, since most of these robots are designed for factories or are still in the laboratory, it may be some time before you have a Gemini-enhanced robot in your home.