A giant leap for the mini cheetah | MIT News



A cheetah rushes across a hilly field, leaping over sudden gaps in the rugged terrain. The movement may seem effortless, but getting a robot to move this way is a whole different prospect.

In recent years, four-legged robots inspired by the movement of cheetahs and other animals have made great strides forward, but they still lag behind their mammalian counterparts when it comes to crossing a landscape with rapid elevation changes.

“In those settings, you need to use vision in order to avoid failure. For example, it’s hard to avoid stepping into a gap if you can’t see it. Although there are some existing methods for incorporating vision into legged locomotion, most of them aren’t really suitable for use with emerging agile robotic systems,” explains Gabriel Margolis, a PhD student in the lab of Pulkit Agrawal, a professor in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT.

Now, Margolis and his collaborators have developed a system that improves the speed and agility of legged robots as they leap across gaps in the terrain. The new control system is divided into two parts: one that processes real-time input from a video camera mounted on the front of the robot, and another that translates this information into instructions for how the robot should move its body. The researchers tested their system on MIT’s mini cheetah, a powerful and agile robot built in the laboratory of Sangbae Kim, a professor of mechanical engineering.

Unlike other methods of controlling a four-legged robot, this two-part system does not require the terrain to be mapped in advance, so the robot can go anywhere. In the future, this could allow robots to race into the woods on an emergency-response mission or climb a flight of stairs to deliver medicine to an elderly person who is shut in.

Margolis wrote the paper with senior author Pulkit Agrawal, who heads the Improbable AI Lab at MIT and is the Steven G. and Renee Finn Career Development Assistant Professor in the Department of Electrical Engineering and Computer Science; Professor Sangbae Kim of the Department of Mechanical Engineering at MIT; and fellow MIT graduate students Tao Chen and Xiang Fu. Other co-authors include Kartik Paigwar, a graduate student at Arizona State University, and Donghyun Kim, an assistant professor at the University of Massachusetts at Amherst. The work will be presented next month at the Conference on Robot Learning.

Everything is under control

The use of two separate controllers working together makes this system particularly innovative.

A controller is an algorithm that converts the state of the robot into a set of actions to follow. Many blind controllers – those that don’t incorporate vision – are robust and effective, but they only allow robots to walk over continuous terrain.
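As a rough illustration, a controller can be thought of as a function that maps the robot’s current state to a command for each joint. The sketch below is purely illustrative – the class and function names are hypothetical and the policy is a placeholder, not the team’s actual code – and shows what a blind controller’s inputs look like when vision is left out:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RobotState:
    """Proprioceptive state only -- what a 'blind' controller sees."""
    joint_angles: List[float]       # one entry per leg joint
    joint_velocities: List[float]
    body_orientation: List[float]   # e.g., roll, pitch, yaw

def blind_controller(state: RobotState) -> List[float]:
    """Map the robot's state to one torque command per joint.
    Placeholder policy: command zero torque everywhere."""
    return [0.0 for _ in state.joint_angles]
```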

Vision is such a complex sensory input to process that these algorithms are unable to deal with it effectively. Systems that incorporate vision typically rely on a “height map” of the terrain, which must either be pre-built or generated on the fly, a process that is generally slow and likely to fail if the height map is incorrect.

To develop their system, the researchers combined the best elements of these robust blind controllers with a separate module that handles vision in real time.

The robot’s camera captures depth images of the upcoming terrain, which are fed to a high-level controller along with information about the state of the robot’s body (joint angles, body orientation, and so on). The high-level controller is a neural network that “learns” from experience.

This neural network generates a target trajectory, which the second controller uses to generate torques for each of the robot’s 12 joints. This low-level controller is not a neural network and instead relies on a set of concise physical equations that describe the robot’s movement.
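Put together, the pipeline looks roughly like the sketch below. Everything here is an illustrative assumption – the function names, the array sizes, and a simple PD tracking law standing in for the physics-based controller are not the researchers’ actual implementation – but it shows how a learned high-level policy and a model-based low-level controller can be chained:

```python
import numpy as np

def high_level_policy(depth_image: np.ndarray, proprioception: np.ndarray) -> np.ndarray:
    """Learned component: maps a depth image plus body-state information
    (joint angles, orientation, ...) to a target trajectory. A trained
    neural network would go here; this stand-in just mixes the inputs."""
    features = np.concatenate([depth_image.ravel()[:64], proprioception])
    return np.tanh(features[:12])   # 12 target values, one per joint

def low_level_controller(target: np.ndarray,
                         joint_angles: np.ndarray,
                         joint_velocities: np.ndarray) -> np.ndarray:
    """Model-based component: converts the target trajectory into torques
    for the 12 joints. A PD tracking law stands in for the concise
    physical equations used on the real robot."""
    kp, kd = 40.0, 1.0
    return kp * (target - joint_angles) - kd * joint_velocities

# One control step: depth image and body state in, 12 joint torques out.
depth = np.random.rand(64, 64)
joint_angles, joint_velocities = np.zeros(12), np.zeros(12)
body_orientation = np.zeros(3)
target = high_level_policy(depth, np.concatenate([joint_angles, body_orientation]))
torques = low_level_controller(target, joint_angles, joint_velocities)
```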

“The hierarchy, including the use of this low-level controller, enables us to constrain the robot’s behavior so that it is better behaved. With this low-level controller, we are using well-specified models on which we can impose constraints, which isn’t usually possible in a learning-based network,” says Margolis.

Teaching the network

The researchers used the trial-and-error method known as reinforcement learning to train the high-level controller. They ran simulations of the robot crossing hundreds of different discontinuous terrains and rewarded it for successful crossings.

Over time, the algorithm learned which actions maximized the reward.
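In schematic form, that trial-and-error loop looks something like the sketch below. The environment, reward values, and update rule are toy assumptions made for illustration, not the researchers’ training setup:

```python
import random

def run_episode(policy_skill: float, terrain: str) -> float:
    """Simulate one crossing attempt and return its reward.
    A physics simulator would go here; a coin flip stands in for it."""
    crossed = random.random() < policy_skill
    return 10.0 if crossed else -1.0   # reward successful crossings

policy_skill = 0.1                                         # stand-in for the policy's parameters
terrains = ["narrow_gaps", "wide_gaps", "uneven_steps"]    # hundreds in practice

for episode in range(1000):
    terrain = random.choice(terrains)
    reward = run_episode(policy_skill, terrain)
    # Trial and error: nudge the policy toward whatever earned more reward.
    if reward > 0:
        policy_skill = min(1.0, policy_skill + 0.001)
```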

Then they built a physical, gapped terrain with a set of wooden planks and put their control system to the test using the mini cheetah.

“It was really fun to work with a robot that was designed in-house at MIT by some of our collaborators. The mini cheetah is a great platform because it’s modular and made mostly from parts that you can order online, so if we wanted a new battery or camera, we could just order it from a regular supplier and, with a little help from Sangbae’s lab, install it,” explains Margolis.

Estimating the robot’s state proved to be a challenge in some cases. Unlike in simulation, real-world sensors are noisy, and that noise can accumulate and affect the outcome. So, for some experiments involving high-precision foot placement, the researchers used a motion-capture system to measure the robot’s true position.

Their system outperformed others that use only a single controller, and the mini cheetah successfully traversed 90 percent of the terrains.

“One novelty of our system is that it adjusts the robot’s gait. If a human were trying to leap across a really wide gap, they might start by running really fast to build up speed, then put both feet together to make a really powerful jump across the gap. In the same way, our robot can adjust the timing and duration of its foot contacts to better traverse the terrain,” explains Margolis.

Jumping out of the lab

While the researchers have been able to demonstrate that their control system works in the lab, they still have a long way to go before they can deploy the system in the real world, says Margolis.

In the future, they hope to mount a more powerful computer on the robot so that it can do all of its computation on board. They also want to improve the robot’s state estimator to eliminate the need for the motion-capture system. In addition, they would like to improve the low-level controller so that it can exploit the robot’s full range of motion, and to enhance the high-level controller so that it works well in different lighting conditions.

“It is remarkable to see the flexibility of machine learning techniques capable of bypassing carefully designed intermediate processes (e.g., state estimation and trajectory planning) that age-old model-based techniques have relied on,” says Kim. “I am excited about the future of mobile robots with more robust vision processing trained specifically for locomotion.”

The research is supported, in part, by MIT’s Improbable AI Lab, the Biomimetic Robotics Lab, NAVER LABS, and the DARPA Machine Common Sense program.

