Advances in Human-Robot Handshaking
The use of social, anthropomorphic robots to support humans in various
industries has been on the rise. During Human-Robot Interaction (HRI),
physically interactive non-verbal behaviour is key for more natural
interactions. Handshaking is one such natural interaction used commonly in many
social contexts. It is one of the first non-verbal interactions to take
place and should, therefore, be part of the repertoire of a social robot. In
this paper, we explore the existing state of Human-Robot Handshaking and
discuss possible ways forward for such physically interactive behaviours.

Comment: Accepted at the 12th International Conference on Social Robotics (ICSR 2020), 12 pages, 1 figure
MILD: Multimodal Interactive Latent Dynamics for Learning Human-Robot Interaction
Modeling interaction dynamics to generate robot trajectories that enable a
robot to adapt and react to a human's actions and intentions is critical for
efficient and effective collaborative Human-Robot Interactions (HRI). Learning
from Demonstration (LfD) methods from Human-Human Interactions (HHI) have shown
promising results, especially when coupled with representation learning
techniques. However, such methods for learning HRI either do not scale well to
high dimensional data or cannot accurately adapt to changing via-poses of the
interacting partner. We propose Multimodal Interactive Latent Dynamics (MILD),
a method that couples deep representation learning and probabilistic machine
learning to address the problem of two-party physical HRIs. We learn the
interaction dynamics from demonstrations, using Hidden Semi-Markov Models
(HSMMs) to model the joint distribution of the interacting agents in the latent
space of a Variational Autoencoder (VAE). Our experimental evaluations for
learning HRI from HHI demonstrations show that MILD effectively captures the
multimodality in the latent representations of HRI tasks, allowing us to decode
the varying dynamics occurring in such tasks. Compared to related work, MILD
generates more accurate trajectories for the controlled agent (robot) when
conditioned on the observed agent's (human) trajectory. Notably, MILD can learn
directly from camera-based pose estimations to generate trajectories, which we
then map to a humanoid robot without the need for any additional training.

Comment: Accepted at the IEEE-RAS International Conference on Humanoid Robots (Humanoids) 2022
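To make the abstract's core mechanism concrete, the sketch below illustrates one plausible inference step of a MILD-style pipeline: encode the observed human pose into a shared latent space, condition the robot's latent on it through the joint Gaussian of a single HSMM segment, and decode the result into a robot trajectory step. This is a minimal sketch under stated assumptions, not the authors' implementation: the network sizes, the dimensions LATENT_DIM and OBS_DIM, the untrained MLPs, and the single-Gaussian stand-in for a full HSMM are all hypothetical.

import torch
import torch.nn as nn

LATENT_DIM = 8    # assumed per-agent latent size
OBS_DIM = 15      # assumed observation size, e.g. flattened joint positions

class Encoder(nn.Module):
    """Maps an agent's observation to the mean of its VAE latent posterior."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, LATENT_DIM))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Maps a latent vector back to an observation (robot joint targets here)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, OBS_DIM))
    def forward(self, z):
        return self.net(z)

def condition_robot_latent(z_h, mu, sigma):
    """Gaussian conditioning within one HSMM segment.

    The segment models the joint latent [z_human; z_robot] as N(mu, sigma);
    given the observed human latent z_h, the robot latent is Gaussian with
    mean  mu_r + S_rh S_hh^-1 (z_h - mu_h)
    cov   S_rr - S_rh S_hh^-1 S_hr.
    """
    d = LATENT_DIM
    mu_h, mu_r = mu[:d], mu[d:]
    s_hh, s_hr = sigma[:d, :d], sigma[:d, d:]
    s_rh, s_rr = sigma[d:, :d], sigma[d:, d:]
    gain = s_rh @ torch.linalg.inv(s_hh)
    return mu_r + gain @ (z_h - mu_h), s_rr - gain @ s_hr

if __name__ == "__main__":
    enc_h, dec_r = Encoder(), Decoder()  # stand-ins for trained networks
    # Placeholder segment parameters; in MILD these would come from an HSMM
    # fitted on the latent trajectories of human-human demonstrations.
    mu = torch.zeros(2 * LATENT_DIM)
    sigma = torch.eye(2 * LATENT_DIM) \
        + 0.1 * torch.ones(2 * LATENT_DIM, 2 * LATENT_DIM)
    with torch.no_grad():
        human_obs = torch.randn(OBS_DIM)      # e.g. camera-based pose estimate
        z_h = enc_h(human_obs)                # observed agent's latent
        mu_r, _ = condition_robot_latent(z_h, mu, sigma)
        robot_step = dec_r(mu_r)              # one step of the robot trajectory
    print(robot_step.shape)                   # torch.Size([15])

In the full model, the HSMM contributes several such Gaussian segments together with duration and transition dynamics, which is what lets the approach capture the multimodality the abstract mentions; the single segment above only shows the conditioning rule.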