From Human Physical Interaction To Online Motion Adaptation Using Parameterized Dynamical Systems
In this work, we present an adaptive motion planning approach for impedance-controlled robots to modify their tasks based on human physical interactions. We use a class of parameterized time-independent dynamical systems for motion generation, where the modulation of such parameters allows for motion flexibility. To adapt to human interactions, we update the parameters of our dynamical system in order to reduce the tracking error (i.e., between the desired trajectory generated by the dynamical system and the real trajectory influenced by the human interaction). We provide an analytical analysis and several simulations of our method. Finally, we investigate our approach through real-world experiments with a 7-DOF KUKA LWR 4+ robot performing tasks such as polishing and pick-and-place.
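The adaptation loop described in this abstract can be sketched as follows; the linear attractor form of the DS, the scalar gain parameter, and the gradient-style update are illustrative assumptions, not the paper's actual parameterization:

```python
import numpy as np

def ds_velocity(x, target, theta):
    """Time-independent parameterized DS: a linear attractor toward the
    target, with theta as the adaptable gain (an illustrative choice)."""
    return -theta * (x - target)

def adapt_theta(theta, x, x_real, target, dt=0.1, lr=1.0):
    """One adaptation step: gradient descent on the squared tracking error
    between the DS-predicted next state and the state actually reached
    under human physical interaction."""
    x_desired = x + dt * ds_velocity(x, target, theta)
    error = x_real - x_desired
    # d(x_desired)/d(theta) = -dt * (x - target), so the error gradient is:
    grad = 2.0 * dt * np.dot(error, x - target)
    return theta - lr * grad
```

When the human pushes the robot toward the target faster than the DS predicts, the gain increases; resisting the motion decreases it.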
Learning Human-Robot Collaboration Insights through the Integration of Muscle Activity in Interaction Motion Models
Recent progress in human-robot collaboration makes fast and fluid
interactions possible, even when human observations are partial and occluded.
Methods like Interaction Probabilistic Movement Primitives (ProMP) model human
trajectories through motion capture systems. However, such representation does
not properly model tasks where similar motions handle different objects. Under
current approaches, a robot would not adapt its pose and dynamics for proper
handling. We integrate the use of Electromyography (EMG) into the Interaction
ProMP framework and utilize muscular signals to augment the human observation
representation. The contribution of our paper is increased task discernment
when trajectories are similar but tools are different and require the robot to
adjust its pose for proper handling. Interaction ProMPs are used with an
augmented vector that integrates muscle activity. Augmented time-normalized
trajectories are used in training to learn correlation parameters and robot
motions are predicted by finding the best weight combination and temporal
scaling for a task. Collaborative single task scenarios with similar motions
but different objects were used and compared. For one experiment, only joint
angles were recorded; for the other, EMG signals were additionally integrated.
Task recognition was computed for both tasks. Observation state vectors with
augmented EMG signals were able to completely identify differences across
tasks, while the baseline method failed every time. Integrating EMG signals
into collaborative tasks significantly increases the ability of the system to
recognize nuances in the tasks that are otherwise imperceptible, by up to 74.6% in
our studies. Furthermore, the integration of EMG signals for collaboration also
opens the door to a wide class of human-robot physical interactions based on
haptic communication that has been largely unexploited in the field.

Comment: 7 pages, 2 figures, 2 tables. As submitted to Humanoids 201
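The augmented observation construction described above can be sketched roughly as below; the resampling length and the plain concatenation of joint and EMG channels are assumptions, and the actual Interaction ProMP training is omitted:

```python
import numpy as np

def time_normalize(traj, n_samples=50):
    """Resample a (T, D) trajectory to a fixed number of samples so that
    demonstrations of different durations share one time base."""
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n_samples)
    return np.column_stack([np.interp(t_new, t_old, traj[:, d])
                            for d in range(traj.shape[1])])

def augment_observation(joint_traj, emg_traj, n_samples=50):
    """Stack joint angles and muscle activity into one augmented state
    vector per time step, ready for training an interaction model."""
    return np.hstack([time_normalize(joint_traj, n_samples),
                      time_normalize(emg_traj, n_samples)])
```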
Evolutionary Robotics: a new scientific tool for studying cognition
We survey developments in Artificial Neural Networks, in Behaviour-based Robotics and Evolutionary Algorithms that set the stage for Evolutionary Robotics in the 1990s. We examine the motivations for using ER as a scientific tool for studying minimal models of cognition, with the advantage of being capable of generating integrated sensorimotor systems with minimal (or controllable) prejudices. These systems must act as a whole in close coupling with their environments, which is an essential aspect of real cognition that is often either bypassed or modelled poorly in other disciplines. We demonstrate this with three example studies: homeostasis under visual inversion; the origins of learning; and the ontogenetic acquisition of entrainment.
Using Parameterized Black-Box Priors to Scale Up Model-Based Policy Search for Robotics
The most data-efficient algorithms for reinforcement learning in robotics are
model-based policy search algorithms, which alternate between learning a
dynamical model of the robot and optimizing a policy to maximize the expected
return given the model and its uncertainties. Among the few proposed
approaches, the recently introduced Black-DROPS algorithm exploits a black-box
optimization algorithm to achieve both high data-efficiency and good
computation times when several cores are used; nevertheless, like all
model-based policy search approaches, Black-DROPS does not scale to high
dimensional state/action spaces. In this paper, we introduce a new model
learning procedure in Black-DROPS that leverages parameterized black-box priors
to (1) scale up to high-dimensional systems, and (2) be robust to large
inaccuracies of the prior information. We demonstrate the effectiveness of our
approach with the "pendubot" swing-up task in simulation and with a physical
hexapod robot (48D state space, 18D action space) that has to walk forward as
fast as possible. The results show that our new algorithm is more
data-efficient than previous model-based policy search algorithms (with and
without priors) and that it can allow a physical 6-legged robot to learn new
gaits in only 16 to 30 seconds of interaction time.

Comment: Accepted at ICRA 2018; 8 pages, 4 figures, 2 algorithms, 1 table;
Video at https://youtu.be/HFkZkhGGzTo ; Spotlight ICRA presentation at
https://youtu.be/_MZYDhfWeL
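The "parameterized black-box prior plus learned correction" idea can be sketched as follows, with an ordinary least-squares residual standing in for the probabilistic model used in practice:

```python
import numpy as np

def fit_residual(prior, X, U, X_next):
    """Learn f(x, u) = prior(x, u) + g(x, u), where g corrects large
    inaccuracies in the prior.  A linear residual model stands in for
    the probabilistic model used in practice."""
    Z = np.hstack([X, U, np.ones((len(X), 1))])       # features [x, u, 1]
    R = X_next - np.array([prior(x, u) for x, u in zip(X, U)])
    W, *_ = np.linalg.lstsq(Z, R, rcond=None)
    return lambda x, u: prior(x, u) + np.hstack([x, u, 1.0]) @ W
```

Even a crude prior reduces the amount of data needed, since the residual only has to capture the mismatch rather than the full dynamics.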
Task Generalization with Stability Guarantees via Elastic Dynamical System Motion Policies
Dynamical System (DS) based Learning from Demonstration (LfD) allows learning
of reactive motion policies with stability and convergence guarantees from a
few trajectories. Yet, current DS learning techniques lack the flexibility to
generalize to new task instances as they ignore explicit task parameters that
inherently change the underlying trajectories. In this work, we propose
Elastic-DS, a novel DS learning and generalization approach that embeds task
parameters into the Gaussian Mixture Model (GMM) based Linear Parameter Varying
(LPV) DS formulation. Central to our approach is the Elastic-GMM, a GMM
constrained to SE(3) task-relevant frames. Given a new task instance/context,
the Elastic-GMM is transformed with Laplacian Editing and used to re-estimate
the LPV-DS policy. Elastic-DS is compositional in nature and can be used to
construct flexible multi-step tasks. We showcase its strength on a myriad of
simulated and real-robot experiments while preserving desirable
control-theoretic guarantees. Supplementary videos can be found at
https://sites.google.com/view/elastic-ds

Comment: Accepted to CoRL 202
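The core re-framing step, transforming GMM components into a new task frame, can be illustrated in the plane; Elastic-DS itself works with SE(3) frames and additionally applies Laplacian editing, which this sketch omits:

```python
import numpy as np

def transform_gaussians(means, covs, R, t):
    """Map GMM components into a new task frame: mu' = R mu + t,
    Sigma' = R Sigma R^T.  Shown in 2D for brevity; the same formula
    applies componentwise in SE(3)."""
    new_means = means @ R.T + t
    new_covs = np.array([R @ S @ R.T for S in covs])
    return new_means, new_covs
```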
Learning Task Priorities from Demonstrations
Bimanual operations in humanoids offer the possibility to carry out more than
one manipulation task at the same time, which in turn introduces the problem of
task prioritization. We address this problem from a learning from demonstration
perspective, by extending the Task-Parameterized Gaussian Mixture Model
(TP-GMM) to Jacobian and null space structures. The proposed approach is tested
on bimanual skills but can be applied in any scenario where the prioritization
between potentially conflicting tasks needs to be learned. We evaluate the
proposed framework in two different humanoid tasks that require learning
priorities and in a loco-manipulation scenario, showing that the approach can
be exploited to learn the prioritization of multiple tasks in parallel.

Comment: Accepted for publication at the IEEE Transactions on Robotic
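The null-space structure mentioned above follows the classic strict-priority resolution, which can be sketched as below; the learned model components that would supply the task-space velocities are replaced here by fixed inputs:

```python
import numpy as np

def prioritized_velocities(J1, J2, dx1, dx2):
    """Strict two-task priority resolution: task 1 is tracked exactly,
    task 2 only within the null space of task 1."""
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1          # null-space projector
    dq1 = J1_pinv @ dx1
    dq2 = N1 @ np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq1)
    return dq1 + dq2
```

Learning the priorities, as the paper proposes, amounts to inferring from demonstrations which task occupies the primary slot in this hierarchy.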
Learning Barrier Functions for Constrained Motion Planning with Dynamical Systems
Stable dynamical systems are a flexible tool to plan robotic motions in
real-time. In the robotic literature, dynamical system motions are typically
planned without considering possible limitations in the robot's workspace. This
work presents a novel approach to learn workspace constraints from human
demonstrations and to generate motion trajectories for the robot that lie in
the constrained workspace. Training data are incrementally clustered into
different linear subspaces and used to fit a low dimensional representation of
each subspace. By considering the learned constraint subspaces as zeroing
barrier functions, we are able to design a control input that keeps the system
trajectory within the learned bounds. This control input is effectively
combined with the original system dynamics, preserving any asymptotic
properties of the unconstrained system. Simulations and experiments on a real
robot show the effectiveness of the proposed approach.
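The constraint-enforcement mechanism resembles a zeroing-barrier condition dh/dt >= -alpha*h(x); a minimal single-integrator sketch with a hand-written barrier (rather than one learned from demonstrations) is:

```python
import numpy as np

def barrier_filter(x, u_nom, h, grad_h, alpha=1.0):
    """Minimally modify the nominal DS velocity u_nom so that
    dh/dt >= -alpha * h(x), keeping the set {h(x) >= 0} invariant.
    Closed-form solution of the single-constraint QP for a single
    integrator (an illustrative simplification)."""
    g = grad_h(x)
    slack = g @ u_nom + alpha * h(x)
    if slack >= 0:                        # nominal input already safe
        return u_nom
    return u_nom - (slack / (g @ g)) * g  # minimal correction
```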
One-Shot Learning of Manipulation Skills with Online Dynamics Adaptation and Neural Network Priors
One of the key challenges in applying reinforcement learning to complex
robotic control tasks is the need to gather large amounts of experience in
order to find an effective policy for the task at hand. Model-based
reinforcement learning can achieve good sample efficiency, but requires the
ability to learn a model of the dynamics that is good enough to learn an
effective policy. In this work, we develop a model-based reinforcement learning
algorithm that combines prior knowledge from previous tasks with online
adaptation of the dynamics model. These two ingredients enable highly
sample-efficient learning even in regimes where estimating the true dynamics is
very difficult, since the online model adaptation allows the method to locally
compensate for unmodeled variation in the dynamics. We encode the prior
experience into a neural network dynamics model, adapt it online by
progressively refitting a local linear model of the dynamics, and use model
predictive control to plan under these dynamics. Our experimental results show
that this approach can be used to solve a variety of complex robotic
manipulation tasks in just a single attempt, using prior data from other
manipulation behaviors.
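The online adaptation step, refitting a local linear dynamics model from recent transitions, can be sketched as below; the sliding-window least-squares fit is an illustrative simplification, and the neural-network prior and the MPC planner are omitted:

```python
import numpy as np
from collections import deque

class OnlineLocalModel:
    """Keep a sliding window of recent transitions and refit a local
    linear dynamics model x' ~ A x + B u + c by least squares.  In the
    approach described above, this local refit refines a neural-network
    prior; the prior is omitted in this sketch."""
    def __init__(self, window=20):
        self.buf = deque(maxlen=window)

    def observe(self, x, u, x_next):
        self.buf.append((x, u, x_next))

    def fit(self):
        Z = np.array([np.hstack([x, u, 1.0]) for x, u, _ in self.buf])
        Y = np.array([xn for _, _, xn in self.buf])
        W, *_ = np.linalg.lstsq(Z, Y, rcond=None)
        return lambda x, u: np.hstack([x, u, 1.0]) @ W
```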