Abstract

We investigated whether deep reinforcement learning (deep RL) is able to synthesize sophisticated and safe movement skills that can be composed into complex behavioral strategies for a low-cost, miniature humanoid robot. We used deep RL to train a humanoid robot to play a simplified one-versus-one soccer game. The resulting agent exhibits robust and dynamic movement skills, such as rapid fall recovery, walking, turning, and kicking, and it transitions between them in a smooth and efficient manner. It also learned to anticipate ball movements and block opponent shots. The agent's tactical behavior adapts to specific game contexts in a way that would be impractical to manually design. Our agent was trained in simulation and transferred to real robots zero-shot. A combination of sufficiently high-frequency control, targeted dynamics randomization, and perturbations during training enabled good-quality transfer. In experiments, the agent walked 181% faster, turned 302% faster, took 63% less time to get up, and kicked a ball 34% faster than a scripted baseline.
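The abstract attributes successful zero-shot sim-to-real transfer to dynamics randomization and perturbations applied during training. A minimal sketch of what per-episode randomization might look like is below; the parameter names, ranges, and force magnitudes are illustrative assumptions, not values from the paper:

```python
import random


def randomize_dynamics(base, scale=0.1, rng=random):
    """Sample perturbed dynamics parameters around nominal values.

    base: dict of nominal simulator parameters (e.g. friction, motor gain).
    Each value is multiplied by a factor drawn uniformly from
    [1 - scale, 1 + scale], so the policy never trains against a single
    fixed model of the robot.
    """
    return {k: v * rng.uniform(1.0 - scale, 1.0 + scale)
            for k, v in base.items()}


def random_push(max_force=5.0, rng=random):
    """Sample a random horizontal perturbation force (in newtons) that a
    training loop could apply to the robot's torso mid-episode."""
    return (rng.uniform(-max_force, max_force),
            rng.uniform(-max_force, max_force))


# Example episode setup; nominal parameters are placeholders.
nominal = {"floor_friction": 0.7, "motor_gain": 1.0, "torso_mass_kg": 3.5}
episode_params = randomize_dynamics(nominal, scale=0.1)
push = random_push()
```

In a full training pipeline, `episode_params` would be written into the physics simulator at each episode reset and `push` applied at random timesteps, forcing the policy to be robust to the modeling errors and disturbances it will meet on the real hardware.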


Source: https://www.science.org/doi/10.1126/scirobotics.adi8022
DOI: http://dx.doi.org/10.1126/scirobotics.adi8022
