34,597 research outputs found
One-Shot Learning of Manipulation Skills with Online Dynamics Adaptation and Neural Network Priors
One of the key challenges in applying reinforcement learning to complex
robotic control tasks is the need to gather large amounts of experience in
order to find an effective policy for the task at hand. Model-based
reinforcement learning can achieve good sample efficiency, but requires the
ability to learn a model of the dynamics that is good enough to learn an
effective policy. In this work, we develop a model-based reinforcement learning
algorithm that combines prior knowledge from previous tasks with online
adaptation of the dynamics model. These two ingredients enable highly
sample-efficient learning even in regimes where estimating the true dynamics is
very difficult, since the online model adaptation allows the method to locally
compensate for unmodeled variation in the dynamics. We encode the prior
experience into a neural network dynamics model, adapt it online by
progressively refitting a local linear model of the dynamics, and use model
predictive control to plan under these dynamics. Our experimental results show
that this approach can be used to solve a variety of complex robotic
manipulation tasks in just a single attempt, using prior data from other
manipulation behaviors.
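The core mechanism described above — refitting a local linear dynamics model online from recent transitions — can be sketched as a recency-windowed, ridge-regularized least-squares fit. This is a simplified stand-in (the function name, the plain ridge regularizer, and the omission of the neural-network prior are all my assumptions, not the paper's exact formulation):

```python
import numpy as np

def refit_local_dynamics(xs, us, xs_next, reg=1e-6):
    """Least-squares fit of a local linear model x' ~= A x + B u + c
    from a short window of recent transitions. A simplified sketch:
    the paper regularizes toward a neural-network prior; here we use
    a plain ridge term `reg` instead."""
    # Design matrix: each row is [x, u, 1]
    X = np.hstack([xs, us, np.ones((len(xs), 1))])
    # Ridge-regularized normal equations
    W = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ xs_next)
    dx, du = xs.shape[1], us.shape[1]
    A = W[:dx].T            # state transition
    B = W[dx:dx + du].T     # control matrix
    c = W[-1]               # offset
    return A, B, c
```

In the full method, a model-predictive controller would then plan short action sequences under the fitted (A, B, c) and refit after each new transition.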
Propagation Networks for Model-Based Control Under Partial Observation
There has been an increasing interest in learning dynamics simulators for
model-based control. Compared with off-the-shelf physics engines, a learnable
simulator can quickly adapt to unseen objects, scenes, and tasks. However,
existing models like interaction networks only work for fully observable
systems; they also only consider pairwise interactions within a single time
step, both restricting their use in practical systems. We introduce Propagation
Networks (PropNet), a differentiable, learnable dynamics model that handles
partially observable scenarios and enables instantaneous propagation of signals
beyond pairwise interactions. Experiments show that our propagation networks
not only outperform current learnable physics engines in forward simulation,
but also achieve superior performance on various control tasks. Compared with
existing model-free deep reinforcement learning algorithms, model-based control
with propagation networks is more accurate, efficient, and generalizable to
new, partially observable scenes and tasks.
Comment: Accepted to ICRA 2019. Project Page: http://propnet.csail.mit.edu
Video: https://youtu.be/ZAxHXegkz4
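The key idea above — letting signals propagate beyond a single pairwise hop within one simulation step — can be illustrated with a toy multi-hop message pass over a graph. This is only a sketch of the idea (the fixed 0.5 update weight replaces the learned update networks of the actual model):

```python
import numpy as np

def propagate(node_states, edges, steps=3):
    """Toy multi-hop propagation within a single time step: effects
    travel along directed edges for `steps` hops, instead of only one
    pairwise hop as in interaction networks."""
    h = node_states.copy()
    for _ in range(steps):
        msgs = np.zeros_like(h)
        for (i, j) in edges:
            msgs[j] += h[i]      # sender i -> receiver j
        h = h + 0.5 * msgs       # fixed update in place of a learned one
    return h
```

On a chain 0 -> 1 -> 2, a signal at node 0 reaches node 2 only with two or more propagation steps, which is exactly the effect a single pairwise pass cannot capture.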
Combining Physical Simulators and Object-Based Networks for Control
Physics engines play an important role in robot planning and control;
however, many real-world control problems involve complex contact dynamics that
cannot be characterized analytically. Most physics engines therefore employ
approximations that lead to a loss in precision. In this paper, we propose a
hybrid dynamics model, simulator-augmented interaction networks (SAIN),
combining a physics engine with an object-based neural network for dynamics
modeling. Compared with existing models that are purely analytical or purely
data-driven, our hybrid model captures the dynamics of interacting objects in a
more accurate and data-efficient manner. Experiments both in simulation and on a
real robot suggest that it also leads to better performance when used in
complex control tasks. Finally, we show that our model generalizes to novel
environments with varying object shapes and materials.
Comment: ICRA 2019; Project page: http://sain.csail.mit.ed
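The hybrid structure described above — an analytical physics step corrected by a learned, object-based residual — reduces to a simple composition. A minimal sketch, where `physics_step` and `residual_fn` are placeholder callables standing in for the physics engine and the trained network:

```python
import numpy as np

def hybrid_step(x, u, physics_step, residual_fn):
    """One step of a hybrid dynamics model: the analytical engine's
    prediction plus a learned residual correction. `physics_step` and
    `residual_fn` are placeholders for the engine and the network."""
    x_phys = physics_step(x, u)              # analytical prediction
    return x_phys + residual_fn(x, u, x_phys)  # learned correction
```

Because the residual only has to model what the engine gets wrong (e.g. unmodeled contact effects), it can be learned from far less data than a fully data-driven dynamics model.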