Combining Model-Based and Model-Free Updates for Trajectory-Centric Reinforcement Learning
Reinforcement learning (RL) algorithms for real-world robotic applications
need a data-efficient learning process and the ability to handle complex,
unknown dynamical systems. These requirements are handled well by model-based
and model-free RL approaches, respectively. In this work, we aim to combine the
advantages of these two types of methods in a principled manner. By focusing on
time-varying linear-Gaussian policies, we enable a model-based algorithm based
on the linear quadratic regulator (LQR) that can be integrated into the
model-free framework of path integral policy improvement (PI2). We can further
combine our method with guided policy search (GPS) to train arbitrary
parameterized policies such as deep neural networks. Our simulation and
real-world experiments demonstrate that this method can solve challenging
manipulation tasks with comparable or better performance than model-free
methods while maintaining the sample efficiency of model-based methods. A video
presenting our results is available at
https://sites.google.com/site/icml17pilqr
Comment: Paper accepted to the International Conference on Machine Learning (ICML) 2017
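To make the combination concrete, here is a minimal sketch pairing a model-based LQR backward pass on known linear dynamics with a PI2-style reweighting of sampled feedforward terms for a time-varying linear-Gaussian controller. The toy dynamics, cost, and the simple two-stage combination are illustrative assumptions, not the paper's exact PILQR equations.

```python
# Sketch: model-based LQR gains + model-free PI2 feedforward correction
# for a time-varying linear-Gaussian policy u_t ~ N(K_t x_t + k_t, Sigma).
# All dynamics/cost values below are toy placeholders.
import numpy as np

T, dx, du, N = 20, 2, 1, 40              # horizon, state dim, action dim, rollouts
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy linear dynamics x' = Ax + Bu
B = np.array([[0.0], [0.1]])
Q, R = np.eye(dx), 0.1 * np.eye(du)      # quadratic state and action costs

def lqr_backward():
    """Model-based step: Riccati recursion yields feedback gains K_t."""
    K, V = [None] * T, Q.copy()
    for t in reversed(range(T)):
        Quu = R + B.T @ V @ B
        K[t] = -np.linalg.solve(Quu, B.T @ V @ A)
        V = Q + A.T @ V @ (A + B @ K[t])
    return K

def pi2_correction(K, k, sigma=0.1, temperature=1.0):
    """Model-free step: perturb the feedforward terms k_t, reweight
    rollouts by exponentiated cost-to-go, and average the noise back in."""
    eps = sigma * np.random.randn(N, T, du)
    costs = np.zeros((N, T))
    for n in range(N):
        x = np.array([1.0, 0.0])
        for t in range(T):
            u = K[t] @ x + k[t] + eps[n, t]
            costs[n, t] = x @ Q @ x + u @ R @ u
            x = A @ x + B @ u
    k_new = [kt.copy() for kt in k]
    for t in range(T):
        s = costs[:, t:].sum(axis=1)                  # cost-to-go per rollout
        w = np.exp(-(s - s.min()) / temperature)
        k_new[t] += (w[:, None] * eps[:, t]).sum(axis=0) / w.sum()
    return k_new

K = lqr_backward()                                        # model-based gains
k = pi2_correction(K, [np.zeros(du) for _ in range(T)])   # model-free offsets
```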
State-regularized policy search for linearized dynamical systems
Trajectory-Centric Reinforcement Learning and Trajectory
Optimization methods optimize a sequence of feedback controllers
by taking advantage of local approximations of
model dynamics and cost functions. Stability of the policy update
is a major issue for these methods, rendering them hard
to apply for highly nonlinear systems. Recent approaches
combine classical Stochastic Optimal Control methods with
information-theoretic bounds to control the step-size of the
policy update and could even be used to train nonlinear deep
control policies. These methods bound the relative entropy
between the new and the old policy to ensure a stable policy
update. However, despite the bound in policy space, the
state distributions of two consecutive policies can still differ
significantly, rendering the used local approximate models invalid.
To alleviate this issue we propose enforcing a relative
entropy constraint not only on the policy update, but also on
the update of the state distribution, around which the dynamics
and cost are being approximated. We present a derivation
of the closed-form policy update and show that our approach
outperforms related methods on two nonlinear and highly dynamic
simulated systems.
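In assumed notation (the abstract does not give the formulation), the proposed update bounds the relative entropy of both the policy and the state distribution it induces, which can be sketched as the constrained objective

```latex
% Assumed notation: \pi is the policy, \mu_\pi(s) the state
% distribution it induces; \epsilon and \kappa bound the step size.
\begin{aligned}
\max_{\pi} \quad & \mathbb{E}_{\mu_\pi(s),\,\pi(a \mid s)}\big[r(s,a)\big] \\
\text{s.t.} \quad & \mathbb{E}_{\mu_{\pi_{\text{old}}}(s)}\Big[\mathrm{KL}\big(\pi(\cdot \mid s)\,\big\|\,\pi_{\text{old}}(\cdot \mid s)\big)\Big] \le \epsilon, \\
& \mathrm{KL}\big(\mu_{\pi}(s)\,\big\|\,\mu_{\pi_{\text{old}}}(s)\big) \le \kappa,
\end{aligned}
```

where the second constraint is the addition over prior step-size-controlled methods: it keeps the new state distribution close to the region where the local dynamics and cost approximations were fitted.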
Collective Robot Reinforcement Learning with Distributed Asynchronous Guided Policy Search
In principle, reinforcement learning and policy search methods can enable
robots to learn highly complex and general skills that may allow them to
function amid the complexity and diversity of the real world. However, training
a policy that generalizes well across a wide range of real-world conditions
requires far greater quantity and diversity of experience than is practical to
collect with a single robot. Fortunately, it is possible for multiple robots to
share their experience with one another and thereby learn a policy
collectively. In this work, we explore distributed and asynchronous policy
learning as a means to achieve generalization and improved training times on
challenging, real-world manipulation tasks. We propose a distributed and
asynchronous version of Guided Policy Search and use it to demonstrate
collective policy learning on a vision-based door opening task using four
robots. We show that it achieves better generalization, utilization, and
training times than the single-robot alternative.
Comment: Submitted to the IEEE International Conference on Robotics and Automation 2017
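A minimal sketch of the asynchronous split this describes: several worker "robots" push rollouts into a shared queue while one trainer consumes them without blocking collection. The toy reward signal and update rule are stand-ins, not the paper's guided policy search inner loop.

```python
# Sketch: asynchronous experience collection feeding a shared trainer.
import queue, threading, time
import numpy as np

rollouts = queue.Queue()
policy = {"w": np.zeros(4), "lock": threading.Lock()}
stop = threading.Event()

def robot(robot_id):
    """Each robot collects experience with its own copy of the policy."""
    rng = np.random.default_rng(robot_id)
    while not stop.is_set():
        with policy["lock"]:
            w = policy["w"].copy()
        states = rng.normal(size=(10, 4))        # fake trajectory features
        rewards = -np.abs(states @ w - 1.0)      # fake reward signal
        rollouts.put((states, rewards))
        time.sleep(0.01)                         # robots run at their own pace

def trainer():
    """The trainer updates the shared policy as rollouts arrive."""
    while not stop.is_set():
        try:
            states, rewards = rollouts.get(timeout=0.1)
        except queue.Empty:
            continue
        grad = (states * rewards[:, None]).mean(axis=0)  # toy update direction
        with policy["lock"]:
            policy["w"] += 0.01 * grad

threads = [threading.Thread(target=robot, args=(i,)) for i in range(4)]
threads.append(threading.Thread(target=trainer))
for th in threads: th.start()
time.sleep(1.0); stop.set()
for th in threads: th.join()
```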
Deep Object-Centric Representations for Generalizable Robot Learning
Robotic manipulation in complex open-world scenarios requires both reliable
physical manipulation skills and effective and generalizable perception. In
this paper, we propose a method where general purpose pretrained visual models
serve as an object-centric prior for the perception system of a learned policy.
We devise an object-level attentional mechanism that can be used to determine
relevant objects from a few trajectories or demonstrations, and then
immediately incorporate those objects into a learned policy. A task-independent
meta-attention locates possible objects in the scene, and a task-specific
attention identifies which objects are predictive of the trajectories. The
scope of the task-specific attention is easily adjusted by showing
demonstrations with distractor objects or with diverse relevant objects. Our
results indicate that this approach exhibits good generalization across object
instances using very few samples, and can be used to learn a variety of
manipulation tasks using reinforcement learning.
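A minimal sketch of the two-level attention idea, with random vectors standing in for pretrained visual features: a task-independent stage proposes candidate object features, and a task-specific score selects objects whose features are consistent across demonstrations. The feature generator and the consistency-based scoring rule are assumptions for illustration.

```python
# Sketch: picking task-relevant objects from a few demonstrations.
import numpy as np

rng = np.random.default_rng(0)
n_objects, feat_dim, n_demos = 5, 8, 6

# Stage 1 (meta-attention stand-in): candidate object features per scene.
object_feats = rng.normal(size=(n_demos, n_objects, feat_dim))

# Pretend object 2 is task-relevant: it appears consistently across
# demonstrations, while the distractors vary from scene to scene.
object_feats[:, 2, :] = rng.normal(size=feat_dim) \
    + 0.05 * rng.normal(size=(n_demos, feat_dim))

def task_attention(feats):
    """Score each object by how consistent its feature is across the
    demonstrations, then softmax the scores into attention weights."""
    mean = feats.mean(axis=0, keepdims=True)                 # (1, n_objects, d)
    consistency = -((feats - mean) ** 2).mean(axis=(0, 2))   # higher = stabler
    w = np.exp(consistency - consistency.max())
    return w / w.sum()

weights = task_attention(object_feats)
print("attention over objects:", np.round(weights, 3))
print("selected object:", int(weights.argmax()))   # should pick object 2
```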