Adaptive Tensegrity Locomotion on Rough Terrain via Reinforcement Learning
The dynamical properties of tensegrity robots give them appealing ruggedness
and adaptability, but present major challenges with respect to locomotion
control. Due to the system's high dimensionality and complex contact responses,
data-driven approaches are apt for producing viable feedback policies. Guided Policy Search
(GPS), a sample-efficient and model-free hybrid framework for optimization and
reinforcement learning, has recently been used to produce periodic locomotion
for a spherical 6-bar tensegrity robot on flat or slightly varied surfaces.
This work provides an extension to non-periodic locomotion and achieves rough
terrain traversal, which requires more broadly varied, adaptive, and
non-periodic rover behavior. The contribution alters the control optimization
step of GPS, which locally fits and exploits surrogate models of the dynamics,
and employs the existing supervised learning step. The proposed solution
incorporates new processes to ensure effective local modeling despite the
disorganized nature of sample data in rough terrain locomotion. Demonstrations
in simulation reveal that the resulting controller sustains the highly adaptive
behavior necessary to reliably traverse rough terrain.

Comment: submitted to ICRA 201