What you see when Boston Dynamics’ humanoid robot does a backflip or its Spot dog robot fights off a human and opens a door is incredible hardware engineering, to be sure. But what you don’t see is the wildly complex underlying code that makes it possible. What comes so easily to you (OK, maybe not backflips, just walking) requires extreme coordination, which roboticists have to replicate: a kind of dance of motors working in concert.

Pity the engineers who have to write out all that code. Over at Google, researchers have a secret weapon to teach robots to move that’s both less taxing and more adorable: dogs. They gather motion-capture videos from a public dataset, then feed that data into a simulator to create a digital version of the pooch. The researchers then translate the motion of that digital dog onto a digital version of their four-legged robot, Laikago, which has a rectangular body and skinny legs. Then they port what the digital robot learns onto the physical version of Laikago. (The robot is named, by the way, after Laika, the Soviet space dog who was the first animal to orbit Earth.)
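
That translation step, mapping the dog’s recorded motion onto a machine with different proportions, is known as motion retargeting. Here’s a minimal sketch of the idea, with hypothetical joint names and a made-up scale factor rather than anything from Google’s actual pipeline:

```python
import numpy as np

# Assumed ratio of Laikago's leg length to the recorded dog's; purely illustrative.
LEG_SCALE = 0.8

def retarget_frame(dog_keypoints):
    """Map one motion-capture frame of dog keypoints onto robot-sized targets.

    dog_keypoints: dict of joint name -> xyz position in meters, e.g.
    {"front_left_hip": [...], "front_left_foot": [...]} (hypothetical names).
    Returns per-leg foot targets for the simulated robot to track.
    """
    targets = {}
    for leg in ("front_left", "front_right", "rear_left", "rear_right"):
        hip = np.asarray(dog_keypoints[f"{leg}_hip"], dtype=float)
        foot = np.asarray(dog_keypoints[f"{leg}_foot"], dtype=float)
        # Keep the hip as the anchor and shrink the hip-to-foot vector so the
        # same stride fits the robot's shorter, skinnier legs.
        targets[f"{leg}_foot"] = hip + LEG_SCALE * (foot - hip)
    return targets
```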

A robot works quite differently from a biological dog; it has motors instead of muscles, and in general it’s a lot stiffer. But thanks to this translation work, Laikago has learned to move like a real-life canine. Not only that, its learned gait is faster than the fastest gait provided by the robot’s manufacturer, though in fairness it’s not yet as stable. The new system could be the first steps (sorry) toward robots that learn to move not thanks to exhaustive coding, but by watching videos of animals running and jumping.

“The drawback with the kind of manual approach is that it’s not really scalable for every skill that we want a robot to perform,” says AI researcher Jason Peng, lead author on a new paper describing the system. “We need long engineering hours in order to come up with the different strategies.”

With this new approach, reinforcement learning algorithms do much of that work. Even though they’re both quadrupeds, the robot’s body is quite different from the dog’s body, so in the computer simulations the digital version of the robot has to figure out how to imitate the motion of the digital version of the dog without directly copying its mechanics. “So what the reinforcement learning algorithm does is it tries to find a way that allows the robot to be as close to the original reference motion as possible,” Peng says.
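
At the heart of that is a reward that scores how closely the simulated robot’s pose tracks the corresponding frame of the dog’s motion. The reward in the paper combines several weighted terms, but its basic shape is something like this illustrative sketch:

```python
import numpy as np

def imitation_reward(robot_pose, reference_pose, scale=5.0):
    """Pose-tracking reward: 1.0 for a perfect match with the reference frame
    from the dog clip, decaying toward 0 as the joint angles diverge.
    The scale factor here is an arbitrary choice, not the paper's."""
    error = np.sum((np.asarray(robot_pose) - np.asarray(reference_pose)) ** 2)
    return float(np.exp(-scale * error))

# Example: a near-match earns a reward close to 1.
# imitation_reward([0.1, -0.4, 0.7], [0.1, -0.5, 0.7]) ≈ 0.95
```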

The algorithm tries random movements, and gets a digital “reward” if it gets closer to the dog’s reference motion: basically a thumbs-up message that says that was good, do that kind of thing again. If it tries something that’s not so hot, it gets a digital “demerit”: don’t do that kind of thing again. With this reward system, over many iterations the simulated robot teaches itself to move like the dog.
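
The real system relies on a full reinforcement-learning algorithm to turn those rewards into better behavior, but the feedback loop itself is simple enough to caricature. The toy below is plain random search, not the method from the paper, and its rollout function is a dummy stand-in for the physics simulator; what it shares with the real thing is the loop of trying a variation, scoring it against the reference motion, and keeping it only if it did better:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_episode(params):
    """Dummy stand-in for a simulated rollout. In the real setup this would
    step the physics simulator under a policy with these parameters and sum
    the imitation reward against the dog's reference motion for every frame."""
    return -float(np.sum((params - 1.0) ** 2))  # pretend the ideal policy is all ones

def train(num_params=8, episodes=500, noise=0.05):
    """Toy 'try it, score it, keep what works' loop."""
    params = np.zeros(num_params)
    best = run_episode(params)
    for _ in range(episodes):
        candidate = params + noise * rng.standard_normal(num_params)
        score = run_episode(candidate)
        if score > best:          # the digital thumbs-up: keep this change
            params, best = candidate, score
        # otherwise, the demerit: the change is simply thrown away
    return params
```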

The next challenge is known as sim-to-real; that is, taking what the system has learned in simulation and getting it to work in a physical robot. This is tricky because a simulation is an imperfect and highly simplified version of the real world. Mass and friction are represented as accurately as possible, but not perfectly. The actions of the simulated robot in the digital world don’t map precisely to movements of the real robot in the lab.

So Peng and his colleagues built not one definitive robot simulation, but a range of possibilities for what the robot’s behavior could be. They randomized friction in the simulation, for instance, and tweaked the latency between when you send the robot a command and when it actually executes the order. “The idea is that if we train the simulation with enough diversity, it might learn a good enough set of strategies, such that one of those strategies will work in the real world,” Peng says.
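
In code, that diversity can be as simple as drawing a fresh set of physics parameters for every training episode, a technique usually called domain randomization. The parameters and ranges below are illustrative guesses, not the values used in the paper:

```python
import random
from dataclasses import dataclass

@dataclass
class SimParams:
    friction: float      # ground friction coefficient
    latency: float       # seconds between sending a command and it executing
    mass_scale: float    # multiplier on the robot's nominal link masses

def sample_sim_params():
    """Draw one randomized simulator configuration (ranges are made up)."""
    return SimParams(
        friction=random.uniform(0.4, 1.25),
        latency=random.uniform(0.0, 0.04),
        mass_scale=random.uniform(0.8, 1.2),
    )

# Each training episode runs under a freshly sampled configuration, so the
# learned controller can't overfit to any single, slightly wrong guess about reality.
episode_params = sample_sim_params()
```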