Virtual limitations: Reinforcement learning has been used to train many bots to walk inside simulations, but transferring that ability to the real world is hard. “Many of the videos that you see of virtual agents are not at all realistic,” says Chelsea Finn, an AI and robotics researcher at Stanford University, who was not involved in the work. Small differences between the simulated physical laws inside a virtual environment and the real physical laws outside it—such as how friction works between a robot’s feet and the ground—can lead to big failures when a robot tries to apply what it has learned. A heavy two-legged robot can lose balance and fall if its movements are even a tiny bit off.
Double simulation: But training a large robot through trial and error in the real world would be dangerous. To get around both problems, the Berkeley team used two levels of virtual environment. In the first, a simulated version of Cassie learned to walk by drawing on a large existing database of robot movements. That simulation was then transferred to a second virtual environment, called SimMechanics, which mirrors real-world physics with a high degree of accuracy, though at a cost in running speed. Only once Cassie walked well in that second environment was the learned walking model loaded into the actual robot.
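The two-stage pattern described above can be sketched in code. The toy example below is purely illustrative and is not the team's actual setup: it replaces reinforcement learning with random search, and the two-legged robot with an invented one-dimensional balance task. A controller is tuned in a fast, coarse simulator (large timestep, no friction), then checked in a slower, higher-fidelity simulator (small timestep, friction modeled) before it would be trusted on hardware. All names, dynamics, and thresholds here are assumptions made up for the sketch.

```python
import random

def simulate(policy, dt, friction, steps):
    """Toy balance task: drive the tilt angle theta back toward zero.

    The unstable `+ theta` term plays the role of gravity tipping the
    robot over; `policy` is a pair of feedback gains (kp, kd).
    Returns the average absolute tilt (lower is better).
    """
    kp, kd = policy
    theta, omega = 0.3, 0.0  # start tilted 0.3 rad, at rest
    cost = 0.0
    for _ in range(steps):
        torque = -kp * theta - kd * omega          # the "learned" controller
        omega += (theta - friction * omega + torque) * dt
        theta += omega * dt
        cost += abs(theta)
    return cost / steps

# Stage 1: cheap, coarse simulator (big timestep, friction ignored).
# Fast enough to try many candidate policies.
random.seed(0)
best_policy, best_cost = None, float("inf")
for _ in range(200):
    candidate = (random.uniform(0.0, 10.0), random.uniform(0.0, 5.0))
    cost = simulate(candidate, dt=0.05, friction=0.0, steps=200)
    if cost < best_cost:
        best_policy, best_cost = candidate, cost

# Stage 2: slower, higher-fidelity simulator (small timestep, friction
# modeled). Only a policy that also does well here would be loaded
# onto the real robot.
validation_cost = simulate(best_policy, dt=0.005, friction=0.1, steps=2000)
print(f"coarse-sim cost {best_cost:.4f}, high-fidelity cost {validation_cost:.4f}")
```

The design point is the same as in the article: the coarse simulator buys speed for learning, and the accurate one catches policies that only work because of the coarse simulator's simplified physics.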
The real Cassie was able to walk using the model learned in simulation without any extra fine-tuning. It could walk across rough and slippery terrain, carry unexpected loads, and recover from being pushed. During testing, Cassie also damaged two motors in its right leg but was able to adjust its movements to compensate. Finn thinks that this is exciting work. Edward Johns, who leads the Robot Learning Lab at Imperial College London, agrees. “This is one of the most successful examples I have seen,” he says.
The Berkeley team hopes to use their approach to add to Cassie’s repertoire of movements. But don’t expect a dance-off anytime soon.