Collective Robot Reinforcement Learning with Distributed Asynchronous Guided Policy Search
In principle, reinforcement learning and policy search methods can enable
robots to learn highly complex and general skills that may allow them to
function amid the complexity and diversity of the real world. However, training
a policy that generalizes well across a wide range of real-world conditions
requires far greater quantity and diversity of experience than is practical to
collect with a single robot. Fortunately, it is possible for multiple robots to
share their experience with one another, and thereby, learn a policy
collectively. In this work, we explore distributed and asynchronous policy
learning as a means to achieve generalization and improved training times on
challenging, real-world manipulation tasks. We propose a distributed and
asynchronous version of Guided Policy Search and use it to demonstrate
collective policy learning on a vision-based door opening task using four
robots. We show that it achieves better generalization, utilization, and
training times than the single-robot alternative.
Comment: Submitted to the IEEE International Conference on Robotics and Automation 201
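The collective-learning idea above can be illustrated with a toy sketch: several worker threads (standing in for robots) asynchronously push locally collected experience into a shared buffer, and a single shared policy is then fit on the pooled data. This is only an illustrative sketch, not the paper's Guided Policy Search algorithm; the linear policy, the toy reward, and all function names here are assumptions made for demonstration.

```python
import threading
import queue
import random

def worker(worker_id, shared_buffer, n_samples):
    # Each simulated robot collects experience locally and pushes
    # (state, action, reward) tuples into the shared buffer.
    rng = random.Random(worker_id)  # per-robot seed for reproducibility
    for _ in range(n_samples):
        state = rng.random()
        action = rng.random()
        reward = -abs(state - action)  # toy reward: act close to the state
        shared_buffer.put((state, action, reward))

def collective_learning(n_workers=4, n_samples=50, lr=0.1, epochs=20):
    # Pool experience from all workers asynchronously, then fit one
    # shared linear policy a = w * s by gradient descent on squared error.
    shared_buffer = queue.Queue()  # thread-safe experience buffer
    threads = [threading.Thread(target=worker, args=(i, shared_buffer, n_samples))
               for i in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    data = []
    while not shared_buffer.empty():
        data.append(shared_buffer.get())

    w = 0.0
    for _ in range(epochs):
        grad = sum(2.0 * (w * s - a) * s for s, a, _ in data) / len(data)
        w -= lr * grad
    return w, len(data)
```

With four workers each contributing 50 samples, the shared policy is trained on 200 pooled transitions; a single robot would have had to collect all of them itself, which is the utilization argument the abstract makes.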
A Discrete Geometric Optimal Control Framework for Systems with Symmetries
This paper studies the optimal motion control of
mechanical systems through a discrete geometric approach. At
the core of our formulation is a discrete Lagrange-d’Alembert-
Pontryagin variational principle, from which are derived discrete
equations of motion that serve as constraints in our optimization
framework. We apply this discrete mechanical approach to
holonomic systems with symmetries and, as a result, geometric
structure and motion invariants are preserved. We illustrate our
method by computing optimal trajectories for a simple model of
an air vehicle flying through a digital terrain elevation map, and
point out some of the numerical benefits that ensue.
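For readers unfamiliar with the discrete-mechanics setting, the general flavor of a forced discrete variational principle (in the style of Marsden and West) can be sketched as follows; the notation here is an assumption for illustration, and the paper's full Lagrange-d'Alembert-Pontryagin formulation additionally incorporates Pontryagin-type multipliers and symmetry reduction. One takes variations of a discrete action sum plus discrete virtual work of the control forces,

\[
\delta \sum_{k=0}^{N-1} L_d(q_k, q_{k+1})
+ \sum_{k=0}^{N-1} \Big[ F_d^-(q_k, q_{k+1}, u_k) \cdot \delta q_k
+ F_d^+(q_k, q_{k+1}, u_k) \cdot \delta q_{k+1} \Big] = 0,
\]

which yields the forced discrete Euler-Lagrange equations

\[
D_2 L_d(q_{k-1}, q_k) + D_1 L_d(q_k, q_{k+1})
+ F_d^+(q_{k-1}, q_k, u_{k-1}) + F_d^-(q_k, q_{k+1}, u_k) = 0 .
\]

These discrete equations of motion are what serve as equality constraints in the optimization framework, and because they derive from a variational principle they preserve the geometric structure and motion invariants the abstract mentions.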