CTS: Concurrent Teacher-Student Reinforcement Learning for Legged Locomotion

1Hongxi Wang*, 1Haoxiang Luo*, 1,3Wei Zhang, 2,3Hua Chen
1Southern University of Science and Technology, 2Zhejiang University-University of Illinois Urbana-Champaign Institute, 3LimX Dynamics
*Indicates Equal Contribution

CTS enables legged robots of various sizes and configurations to achieve robust and agile locomotion across challenging real-world terrains, while withstanding strong external disturbances.

Abstract

In this paper, we propose CTS, a novel Concurrent Teacher-Student reinforcement learning architecture for legged locomotion over uneven terrains. Unlike the conventional teacher-student architecture, which first trains the teacher policy via RL and then transfers its knowledge to the student policy through supervised learning, our architecture trains the teacher and student policy networks concurrently under the reinforcement learning paradigm. To this end, we develop a new training scheme based on a modified proximal policy optimization (PPO) method that exploits data samples collected from the interactions of both the teacher and the student policies with the environment. The effectiveness of the proposed architecture and training scheme is demonstrated through extensive quantitative simulation comparisons with state-of-the-art approaches, as well as indoor and outdoor experiments on quadrupedal and point-foot bipedal robot platforms, showcasing robust and agile locomotion. Quantitative simulation comparisons show that our approach reduces the average velocity tracking error by up to 20% relative to the conventional two-stage teacher-student approach, demonstrating a significant advantage on blind locomotion tasks.

Training Pipeline

The teacher and student policies are trained concurrently using PPO within an asymmetric actor-critic framework. Agents in both groups share the same critic and policy networks; each action is determined by the proprioceptive observation together with a latent representation produced by either the privileged encoder (teacher) or the proprioceptive encoder (student). The privileged encoder is trained via the policy gradient, while the proprioceptive encoder undergoes supervised learning to minimize a reconstruction loss against the privileged latent.
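The update described above can be sketched in PyTorch. This is a minimal illustration, not the paper's released code: all network sizes, dimension names (`OBS`, `PRIV`, `LATENT`, `ACT`), the fixed action standard deviation, and the loss weights are illustrative assumptions. It shows the three ingredients of one concurrent update: a PPO clipped surrogate whose gradient flows into the privileged encoder, an asymmetric critic that sees privileged information, and a supervised reconstruction loss pulling the proprioceptive (student) latent toward the detached privileged (teacher) latent.

```python
# Hypothetical sketch of a concurrent teacher-student PPO update.
# Dimensions, architectures, and loss weights are illustrative, not from the paper.
import torch
import torch.nn as nn

OBS, PRIV, LATENT, ACT = 48, 17, 16, 12  # assumed sizes

privileged_encoder = nn.Sequential(nn.Linear(PRIV, 64), nn.ELU(), nn.Linear(64, LATENT))
proprio_encoder = nn.Sequential(nn.Linear(OBS, 64), nn.ELU(), nn.Linear(64, LATENT))
# Shared policy head: consumes proprioception plus a latent from either encoder.
actor = nn.Sequential(nn.Linear(OBS + LATENT, 128), nn.ELU(), nn.Linear(128, ACT))
# Asymmetric critic: additionally sees privileged information during training.
critic = nn.Sequential(nn.Linear(OBS + PRIV, 128), nn.ELU(), nn.Linear(128, 1))

params = (list(privileged_encoder.parameters()) + list(proprio_encoder.parameters())
          + list(actor.parameters()) + list(critic.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)

def update(obs, priv, actions, old_logp, advantages, returns, clip=0.2):
    """One gradient step on a batch of (teacher + student) rollout samples."""
    z_teacher = privileged_encoder(priv)
    z_student = proprio_encoder(obs)

    # Teacher branch: PPO clipped surrogate; gradients reach the privileged encoder.
    mean = actor(torch.cat([obs, z_teacher], dim=-1))
    dist = torch.distributions.Normal(mean, 0.5)  # fixed std for brevity
    logp = dist.log_prob(actions).sum(dim=-1)
    ratio = (logp - old_logp).exp()
    surrogate = -torch.min(ratio * advantages,
                           ratio.clamp(1 - clip, 1 + clip) * advantages).mean()

    # Shared critic regression on returns.
    value = critic(torch.cat([obs, priv], dim=-1)).squeeze(-1)
    value_loss = (value - returns).pow(2).mean()

    # Student branch: supervised reconstruction of the (detached) teacher latent.
    recon_loss = (z_student - z_teacher.detach()).pow(2).mean()

    loss = surrogate + 0.5 * value_loss + recon_loss  # weights are placeholders
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Detaching `z_teacher` in the reconstruction term keeps the supervised student loss from distorting the privileged encoder, which is shaped only by the policy gradient, consistent with the division of labor described above.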

Quadruped Experiments

Biped Experiments

BibTeX
