Learning Human-Robot Collaboration Insights through the Integration of Muscle Activity in Interaction Motion Models
Recent progress in human-robot collaboration makes fast and fluid
interactions possible, even when human observations are partial and occluded.
Methods like Interaction Probabilistic Movement Primitives (ProMP) model human
trajectories through motion capture systems. However, such a representation does not properly model tasks in which similar motions handle different objects. Under
current approaches, a robot would not adapt its pose and dynamics for proper
handling. We integrate the use of Electromyography (EMG) into the Interaction
ProMP framework and utilize muscular signals to augment the human observation
representation. The contribution of our paper is increased task discernment
when trajectories are similar but tools are different and require the robot to
adjust its pose for proper handling. Interaction ProMPs are used with an
augmented vector that integrates muscle activity. Augmented time-normalized
trajectories are used in training to learn correlation parameters, and robot motions are predicted by finding the best weight combination and temporal scaling for a task. Collaborative single-task scenarios with similar motions but different objects were used and compared. In one experiment, only joint angles were recorded; in the other, EMG signals were additionally integrated.
Task recognition was computed for both tasks. Observation state vectors with
augmented EMG signals were able to completely identify differences across
tasks, while the baseline method failed every time. Integrating EMG signals
into collaborative tasks significantly increases the ability of the system to
recognize nuances in the tasks that are otherwise imperceptible, up to 74.6% in
our studies. Furthermore, the integration of EMG signals for collaboration also
opens the door to a wide class of human-robot physical interactions based on
haptic communication that has been largely unexploited in the field.
Comment: 7 pages, 2 figures, 2 tables. As submitted to Humanoids 201
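To make the mechanism concrete, the following is a minimal sketch of the conditioning step an Interaction ProMP performs once the human observation vector is augmented with EMG channels: a joint Gaussian over human and robot basis-function weights is conditioned on partial human observations, and the robot block of the posterior supplies the predicted motion. All dimensions, feature choices, and names below are illustrative assumptions, not the paper's implementation.

    import numpy as np

    # Illustrative sizes (assumed): basis functions per DoF; human DoFs
    # are joint angles plus EMG channels, followed by the robot's DoFs.
    N_BASIS = 10
    N_JOINTS, N_EMG, N_ROBOT = 7, 4, 7
    D_ALL = N_JOINTS + N_EMG + N_ROBOT

    def rbf_features(t, width=0.02):
        """Normalized radial basis features at phase t in [0, 1]."""
        centers = np.linspace(0.0, 1.0, N_BASIS)
        phi = np.exp(-((t - centers) ** 2) / (2.0 * width))
        return phi / phi.sum()

    def observation_matrix(t, observed_dofs):
        """Basis matrix with zero blocks for the unobserved robot DoFs,
        so conditioning uses only joint-angle and EMG observations."""
        phi = rbf_features(t)
        H = np.zeros((len(observed_dofs), D_ALL * N_BASIS))
        for row, d in enumerate(observed_dofs):
            H[row, d * N_BASIS:(d + 1) * N_BASIS] = phi
        return H

    def condition(mu_w, Sigma_w, H, y, sigma_y=1e-4):
        """Condition the joint weight distribution N(mu_w, Sigma_w) on
        a partial observation y = H @ w + noise."""
        S = H @ Sigma_w @ H.T + sigma_y * np.eye(len(y))
        K = Sigma_w @ H.T @ np.linalg.inv(S)
        return mu_w + K @ (y - H @ mu_w), Sigma_w - K @ H @ Sigma_w

Because the learned covariance couples EMG channels to robot weights, two tools with near-identical joint-angle trajectories but different muscle activity condition to different posterior robot motions; the predicted robot trajectory is the robot block of the posterior mean weights multiplied by the basis features.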
Active Learning of Probabilistic Movement Primitives
A Probabilistic Movement Primitive (ProMP) defines a distribution over
trajectories with an associated feedback policy. ProMPs are typically
initialized from human demonstrations and achieve task generalization through
probabilistic operations. However, there is currently no principled guidance in
the literature to determine how many demonstrations a teacher should provide
and what constitutes a "good" demonstration for promoting generalization. In
this paper, we present an active learning approach to learning a library of
ProMPs capable of task generalization over a given space. We utilize
uncertainty sampling techniques to generate a task instance for which a teacher
should provide a demonstration. The provided demonstration is incorporated into
an existing ProMP if possible; if the demonstration is too dissimilar from the existing ones, a new ProMP is created from it. We
provide a qualitative comparison between common active learning metrics;
motivated by this comparison, we present a novel uncertainty sampling approach named "Greatest Mahalanobis Distance." We perform grasping experiments on a
real KUKA robot and show our novel active learning measure achieves better task
generalization with fewer demonstrations than random sampling over the space.
Comment: Under review
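As a rough illustration of how the sampling criterion and the library update described above could fit together (the threshold value, data structures, and names are assumptions, not the authors' implementation):

    import numpy as np

    def mahalanobis(x, mean, cov):
        """Mahalanobis distance of point x from a Gaussian (mean, cov)."""
        d = x - mean
        return float(np.sqrt(d @ np.linalg.solve(cov, d)))

    def next_query(candidates, library):
        """Greatest-Mahalanobis-distance sampling: request a demonstration
        for the candidate task instance farthest, in Mahalanobis distance,
        from every ProMP already in the library."""
        return max(candidates,
                   key=lambda x: min(mahalanobis(x, p["mean"], p["cov"])
                                     for p in library))

    def incorporate(demo, library, threshold=3.0):
        """Fold a demonstration into the nearest ProMP, or start a new
        ProMP when it is too dissimilar from every existing one."""
        if library:
            nearest = min(library,
                          key=lambda p: mahalanobis(demo, p["mean"], p["cov"]))
            if mahalanobis(demo, nearest["mean"], nearest["cov"]) < threshold:
                nearest["demos"].append(demo)
                W = np.stack(nearest["demos"])
                nearest["mean"] = W.mean(axis=0)
                nearest["cov"] = np.cov(W, rowvar=False) + 1e-6 * np.eye(len(demo))
                return
        library.append({"demos": [demo], "mean": demo.copy(),
                        "cov": np.eye(len(demo))})

Taking the minimum distance over the library before maximizing over candidates means a candidate is considered informative only if no existing ProMP already covers it.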
Multimodal Uncertainty Reduction for Intention Recognition in Human-Robot Interaction
Assistive robots can potentially improve the quality of life and personal
independence of elderly people by supporting everyday life activities. To
guarantee a safe and intuitive interaction between human and robot, human
intentions need to be recognized automatically. As humans communicate their intentions multimodally, using multiple modalities for intention recognition may not only increase robustness against the failure of individual modalities but, more importantly, reduce the uncertainty about the intention to be predicted. This is desirable because, particularly in direct interaction between robots and potentially vulnerable humans, both minimal uncertainty about the situation and knowledge of that actual uncertainty are necessary.
Thus, in contrast to existing methods, this work introduces a new approach to multimodal intention recognition that focuses on uncertainty reduction through classifier fusion. For each of the four considered modalities (speech, gestures, gaze direction, and scene objects), an individual intention classifier is trained, each outputting a probability distribution over all possible intentions. Combining these output distributions with the Bayesian method Independent Opinion Pool decreases the uncertainty about the intention to be recognized. The approach is evaluated in a collaborative human-robot
interaction task with a 7-DoF robot arm. The results show that fused classifiers combining multiple modalities outperform the respective individual base classifiers in terms of accuracy, robustness, and reduced uncertainty.
Comment: Submitted to IROS 201
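A minimal sketch of the fusion step, assuming the modalities are conditionally independent given the intention (the classifier outputs below are invented for illustration): each base classifier's distribution is multiplied elementwise and renormalized.

    import numpy as np

    def independent_opinion_pool(posteriors):
        """Fuse per-modality distributions P(intention | modality) by
        elementwise product and renormalization."""
        fused = np.prod(np.stack(posteriors), axis=0)
        return fused / fused.sum()

    def entropy(p):
        """Shannon entropy in nats, a simple uncertainty measure."""
        return float(-(p * np.log(p + 1e-12)).sum())

    # Invented outputs of four base classifiers over three intentions.
    speech  = np.array([0.60, 0.30, 0.10])
    gesture = np.array([0.50, 0.40, 0.10])
    gaze    = np.array([0.40, 0.20, 0.40])
    objects = np.array([0.70, 0.20, 0.10])

    fused = independent_opinion_pool([speech, gesture, gaze, objects])
    print(fused, entropy(fused))  # sharper (lower entropy) here than
                                  # any single modality's output

When the modalities agree, the product sharpens the fused distribution; when they conflict, it flattens, so the fused entropy can double as the knowledge about the actual uncertainty that the abstract calls for.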