Robots can pick themselves up after a fall, even in an unfamiliar environment, thanks to an artificially intelligent controller that can adapt to new scenarios. It could make four-legged robots more useful in responding to natural disasters, such as earthquakes.

Zhibin (Alex) Li at the University of Edinburgh, UK, and his colleagues used an AI technique called deep reinforcement learning to teach four-legged robots a set of basic skills, such as trotting, steering and fall recovery. This involves the robots experimenting with different ways of moving and being rewarded with a numerical score for achieving a certain goal, such as standing up after a fall, and penalised for failing. This lets the AI recognise which actions are desired and repeat them in similar situations in the future.
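The reward-and-penalty loop described above can be illustrated with a deliberately simplified sketch. The state names, actions and rewards below are hypothetical toy values, not taken from the paper, and the tabular Q-learning used here stands in for the deep neural networks the researchers actually trained; the point is only to show how numerical rewards steer an agent toward the desired action.

```python
import random

# Toy illustration of reinforcement learning: the agent tries actions,
# is rewarded for reaching the goal state ("standing") and penalised
# otherwise, and learns to repeat the rewarded action.
# All names and values here are hypothetical, for illustration only.
STATES = ["fallen", "standing"]
ACTIONS = ["push_legs", "flail"]

def step(state, action):
    """Return (next_state, reward): recovery is rewarded, failure penalised."""
    if action == "push_legs":
        return "standing", 1.0   # numerical score for standing up
    return "fallen", -0.1        # small penalty for failing to recover

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}  # action-value table
alpha, gamma, eps = 0.5, 0.9, 0.2                   # learning rate, discount, exploration
rng = random.Random(0)

for _ in range(200):                 # training episodes
    state = "fallen"
    for _ in range(10):              # a short episode
        # Explore occasionally, otherwise pick the best-known action
        if rng.random() < eps:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        # Standard Q-learning update: nudge the value toward reward + future value
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt
        if state == "standing":      # goal reached, end the episode
            break

best = max(ACTIONS, key=lambda a: q[("fallen", a)])
print(best)  # → push_legs
```

After training, the rewarded action dominates the learned value table, so the agent reliably chooses it when it finds itself fallen — the same principle, scaled up enormously, behind the robots' learned recovery skill.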

The team then tested the robots in a range of environments to see if they could combine these basic skills and quickly react to new scenarios. The robots could adapt their movements to stairs, slippery surfaces and gravel, despite not being programmed to navigate these complex environments. Li’s team also tested the robots’ ability to recover after being repeatedly pushed over.

“It is like how a human might know how to throw a ball and how to catch a ball, and then by experimenting with these together, they learn how to juggle,” says Edward Johns at Imperial College London, who was not involved in the research.

The robot’s autonomy could prove useful in emergency situations like earthquakes and fires. “We can deploy these robots to do the search and rescue for us when it is too dangerous for humans,” says Li.

Journal reference: Science Robotics, DOI: 10.1126/scirobotics.abb2174