Learning Sensor Feedback Models from Demonstrations via Phase-Modulated Neural Networks
In order to robustly execute a task under environmental uncertainty, a robot
needs to be able to reactively adapt to changes arising in its environment. The
environment changes are usually reflected in deviation from expected sensory
traces. These deviations in sensory traces can be used to drive the motion
adaptation, and for this purpose, a feedback model is required. The feedback
model maps the deviations in sensory traces to the motion plan adaptation. In
this paper, we develop a general data-driven framework for learning a feedback
model from demonstrations. We utilize a variant of a radial basis function
network structure --with movement phases as kernel centers-- which can
generally be applied to represent any feedback models for movement primitives.
To demonstrate the effectiveness of our framework, we test it on the task of
scraping on a tilt board. In this task, we learn a reactive policy in
the form of orientation adaptation, based on deviations of tactile sensor
traces. As a proof of concept of our method, we provide evaluations on an
anthropomorphic robot. A video demonstrating our approach and its results can
be seen at https://youtu.be/7Dx5imy1Kcw
Comment: 8 pages, accepted to be published at the International Conference on Robotics and Automation (ICRA) 201
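The phase-modulated kernel idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name, the linear per-kernel readout, and all parameters are assumptions. Kernels are centered at fixed movement phases, and the network maps a sensory-trace deviation to a motion-plan adaptation.

```python
import numpy as np

class PhaseRBFFeedbackModel:
    """Hypothetical sketch of a feedback model with movement phases as RBF kernel centers."""

    def __init__(self, n_kernels, sensor_dim, adapt_dim, width=0.05):
        # Kernel centers spread evenly over the movement phase in [0, 1].
        self.centers = np.linspace(0.0, 1.0, n_kernels)
        self.width = width
        # One weight matrix per kernel (linear readout is an assumption).
        self.weights = np.zeros((n_kernels, adapt_dim, sensor_dim))

    def _phase_activations(self, phase):
        # Gaussian kernel activations at the current phase, normalized.
        psi = np.exp(-((phase - self.centers) ** 2) / (2.0 * self.width ** 2))
        return psi / (psi.sum() + 1e-10)

    def predict(self, phase, sensor_deviation):
        # Adaptation = phase-weighted mixture of per-kernel linear maps.
        psi = self._phase_activations(phase)            # (n_kernels,)
        per_kernel = self.weights @ sensor_deviation    # (n_kernels, adapt_dim)
        return psi @ per_kernel                          # (adapt_dim,)
```

With zero weights the model outputs a zero adaptation, which matches the intuition that no learned feedback means no deviation-driven correction; training would fit `weights` to demonstration data.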
Deep Reinforcement Learning with Feedback-based Exploration
Deep Reinforcement Learning has enabled the control of increasingly complex
and high-dimensional problems. However, the need for vast amounts of data before
reasonable performance is attained prevents its widespread application. We
employ binary corrective feedback as a general and intuitive means of
incorporating human insight and domain knowledge into model-free machine
learning. The uncertainty in the policy and the corrective feedback is combined
directly in the action space as probabilistic conditional exploration. As a
result, most of the otherwise uninformed learning process can be
avoided. We demonstrate the proposed method, Predictive Probabilistic Merging
of Policies (PPMP), in combination with DDPG. In experiments on continuous
control problems of the OpenAI Gym, we achieve drastic improvements in sample
efficiency, final performance, and robustness to erroneous feedback, both for
human and synthetic feedback. Additionally, we show solutions beyond the
demonstrated knowledge.
Comment: 6 page
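The core merging step can be illustrated with a simple precision-weighted combination in the action space. This is an assumed form for illustration only, not PPMP's exact estimator; the function name, the fixed correction scale, and the feedback noise level are all hypothetical.

```python
import numpy as np

def merge_policy_and_feedback(policy_action, policy_std, feedback,
                              correction_scale=0.5, feedback_std=0.1):
    """Merge a policy action with binary corrective feedback (illustrative sketch).

    feedback is -1, 0, or +1. The correction proposes a shifted action; the two
    estimates are combined by inverse-variance weighting, and the merged std
    can drive probabilistic conditional exploration.
    """
    if feedback == 0:
        # No correction given: act on the policy estimate alone.
        return policy_action, policy_std
    corrected = policy_action + feedback * correction_scale
    # Precision (inverse-variance) weights for the two action estimates.
    w_p = 1.0 / policy_std ** 2
    w_f = 1.0 / feedback_std ** 2
    merged_mean = (w_p * policy_action + w_f * corrected) / (w_p + w_f)
    merged_std = np.sqrt(1.0 / (w_p + w_f))
    return merged_mean, merged_std
```

The key property is that an uncertain policy defers strongly to the correction, while a confident policy is only nudged, so exploration concentrates where human feedback and policy uncertainty agree it is needed.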
A New Data Source for Inverse Dynamics Learning
Modern robotics is gravitating toward increasingly collaborative human-robot
interaction. Tools such as acceleration policies can naturally support the
realization of reactive, adaptive, and compliant robots. These tools require us
to model the system dynamics accurately -- a difficult task. The fundamental
problem remains that simulation and reality diverge--we do not know how to
accurately change a robot's state. Thus, recent research on improving inverse
dynamics models has been focused on making use of machine learning techniques.
Traditional learning techniques train on the actual realized accelerations,
instead of the policy's desired accelerations, which is an indirect data
source. Here we show how an additional training signal -- measured at the
desired accelerations -- can be derived from a feedback control signal. This
effectively creates a second data source for learning inverse dynamics models.
Furthermore, we show how both the traditional and this new data source can be
used to train task-specific models of the inverse dynamics, whether used
independently or combined. We analyze the use of both data sources in
simulation and demonstrate their effectiveness on a real-world robotic platform.
We show that our system incrementally improves the learned inverse dynamics
model, and when using both data sources combined converges more consistently
and faster.
Comment: IROS 201
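The two data sources can be sketched as follows. This is an assumed formulation for illustration, not the paper's code: the function name and argument layout are hypothetical. The executed command is the sum of the feedforward and feedback terms; it is the torque that actually produced the measured acceleration (traditional source), and, when the feedback controller tracks well, it is also approximately the torque needed to realize the desired acceleration (new source).

```python
def training_pairs(q, qdot, qddot_desired, qddot_actual, u_ff, u_fb):
    """Build two inverse-dynamics training pairs (q, qdot, qddot) -> u (sketch).

    u_ff: feedforward command, u_fb: feedback controller's correction.
    """
    u_total = u_ff + u_fb
    # Traditional source: the applied command produced the *measured* acceleration.
    traditional = ((q, qdot, qddot_actual), u_total)
    # New source: the same command labels the *desired* acceleration, since the
    # feedback term compensates for the feedforward model's error there.
    new_source = ((q, qdot, qddot_desired), u_total)
    return traditional, new_source
```

Both pairs share the same target command but differ in the acceleration input, which is exactly what lets the new source supervise the model at the accelerations the policy actually requests.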