Efficient Model Learning for Human-Robot Collaborative Tasks
We present a framework for learning human user models from joint-action
demonstrations that enables the robot to compute a robust policy for a
collaborative task with a human. The learning takes place completely
automatically, without any human intervention. First, we describe the
clustering of demonstrated action sequences into different human types using an
unsupervised learning algorithm. The robot also uses these demonstrated
sequences to learn a reward function representative of each type through an
inverse reinforcement learning algorithm. The
learned model is then used as part of a Mixed Observability Markov Decision
Process formulation, wherein the human type is a partially observable variable.
With this framework, we can infer, either offline or online, the human type of
a new user who was not included in the training set, and can compute a robot
policy that is aligned with the preferences of this new user and robust to
deviations of the human's actions from the prior demonstrations. Finally, we
validate the approach using data collected in human subject experiments, and we
conduct proof-of-concept demonstrations in which a person performs a
collaborative task with a small industrial robot.
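The belief update at the core of such a formulation can be illustrated with a short sketch. The following is a minimal, hypothetical example (not the authors' implementation) of tracking a posterior over latent human types from observed actions, assuming per-type action-likelihood models have already been learned from the clustered demonstrations:

```python
import numpy as np

# Hypothetical sketch: Bayesian update of the belief over latent human types.
# Each entry of `action_likelihoods` plays the role of a per-type model
# P(human_action | state, type), assumed learned offline (e.g., by clustering
# demonstrations and applying inverse reinforcement learning).

def update_type_belief(belief, state, human_action, action_likelihoods):
    """belief: P(type) array; action_likelihoods[t](state, action) -> float."""
    likelihood = np.array([f(state, human_action) for f in action_likelihoods])
    posterior = belief * likelihood
    total = posterior.sum()
    if total == 0.0:  # action unexplained by every type: keep the prior
        return belief
    return posterior / total

# Toy example with two hypothetical human types:
models = [lambda s, a: 0.9 if a == "reach_left" else 0.1,
          lambda s, a: 0.2 if a == "reach_left" else 0.8]
belief = update_type_belief(np.array([0.5, 0.5]), None, "reach_left", models)
print(belief)  # mass shifts toward type 0, which favours "reach_left"
```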
Prediction of intent in robotics and multi-agent systems
Moving beyond the stimulus contained in observable agent behaviour to understand the underlying intent of the observed agent is of immense interest in a variety of domains that involve collaborative and competitive scenarios, for example assistive robotics, computer games, robot-human interaction, decision support and intelligent tutoring. This review examines approaches for performing action recognition and prediction of intent from a multi-disciplinary perspective, in both single-robot and multi-agent scenarios, and analyses the underlying challenges, focusing mainly on generative approaches.
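As a concrete illustration of the generative approaches the review focuses on: posit a model of how each candidate intent generates behaviour, then invert it with Bayes' rule. The sketch below is a toy example under the assumption of noisy straight-line motion toward a goal; the goals, noise model, and trajectory are all hypothetical:

```python
import numpy as np

# Toy generative intent inference: each candidate goal defines a model of how
# motion is generated (head straight toward the goal with Gaussian noise);
# Bayes' rule then turns observed motion into a posterior over goals.

def intent_posterior(positions, goals, sigma=0.1):
    """P(goal | observed positions), assuming motion heads toward the goal."""
    log_post = np.zeros(len(goals))
    for g, goal in enumerate(goals):
        for p_prev, p_next in zip(positions[:-1], positions[1:]):
            direction = (goal - p_prev) / (np.linalg.norm(goal - p_prev) + 1e-9)
            predicted = p_prev + direction * np.linalg.norm(p_next - p_prev)
            log_post[g] += -np.sum((p_next - predicted) ** 2) / (2 * sigma**2)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

goals = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
trajectory = [np.array([0.0, 0.0]), np.array([0.3, 0.05]), np.array([0.6, 0.1])]
print(intent_posterior(trajectory, goals))  # mass concentrates on goal 0
```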
An assisted navigation method for telepresence robots
Telepresence robots have emerged as a new means of interaction in remote
environments. However, the use of such robots is still limited due to safety
and usability issues when operating in human-like environments. This work addresses
these issues by enhancing the robot navigation through a collaborative
control method that assists the user in negotiating obstacles. The method has been
implemented on a commercial telepresence robot, and a user study has been conducted
in order to test the suitability of our approach.
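One common way such a collaborative control method can be realised (a sketch under assumed parameters, not necessarily the paper's exact scheme) is to blend the operator's velocity command with a repulsive obstacle-avoidance term, shifting authority to the robot as obstacles get closer:

```python
import numpy as np

# Hypothetical shared-control blend: the teleoperator's velocity command is
# mixed with a repulsive obstacle-avoidance term, and the robot takes more
# authority as the nearest obstacle gets closer.

def assisted_command(user_vel, obstacle_dirs, obstacle_dists,
                     influence_radius=1.0, max_gain=1.0):
    """user_vel: (vx, vy) from the operator; obstacles as unit dirs + distances."""
    avoid = np.zeros(2)
    for direction, dist in zip(obstacle_dirs, obstacle_dists):
        if dist < influence_radius:
            # Repulsion grows as the robot approaches the obstacle.
            avoid -= direction * max_gain * (1.0 - dist / influence_radius)
    # Blend: the closer the nearest obstacle, the less authority the user has.
    nearest = min(obstacle_dists, default=np.inf)
    alpha = np.clip(nearest / influence_radius, 0.0, 1.0)  # user authority
    return alpha * np.asarray(user_vel) + (1.0 - alpha) * avoid

# Operator drives forward while a wall lies 0.4 m ahead:
cmd = assisted_command(user_vel=(0.5, 0.0),
                       obstacle_dirs=[np.array([1.0, 0.0])],
                       obstacle_dists=[0.4])
print(cmd)  # user command attenuated and pushed back from the wall
```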
Fast human motion prediction for human-robot collaboration with wearable interfaces
In this paper, we aim at improving human motion prediction during human-robot
collaboration in industrial facilities by exploiting contributions from both
physical and physiological signals. Improved human-machine collaboration could
prove useful in several areas, and it is crucial for interacting robots to
understand human movement as soon as possible to avoid accidents and injuries.
In this context, we propose a novel human-robot interface capable of
anticipating the user's intention during reaching movements on a workbench,
in order to plan the actions of a collaborative robot. The proposed
interface can find many applications in the Industry 4.0 framework, where
autonomous and collaborative robots will be an essential part of innovative
facilities. Two prediction levels, one for motion intention and one for motion
direction, have been developed to improve detection speed and accuracy. A Gaussian
Mixture Model (GMM) has been trained with IMU and EMG data following an
evidence accumulation approach to predict reaching direction. Novel dynamic
stopping criteria have been proposed to flexibly adjust the trade-off between
early anticipation and accuracy according to the application. The output of the
two predictors has been used as external inputs to a Finite State Machine (FSM)
to control the behaviour of a physical robot according to the user's action or
inaction. Results show that our system outperforms previous methods, achieving
high real-time classification accuracy shortly after movement onset.
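The evidence-accumulation and dynamic-stopping ideas can be sketched as follows. This is a minimal, hypothetical example (toy GMM parameters, not the trained models from the paper): per-class GMMs score each incoming IMU/EMG feature frame, log-evidence accumulates over time, and a decision is emitted once the leading class posterior crosses a confidence threshold, which is the knob that trades earliness against accuracy:

```python
import numpy as np
from scipy.stats import multivariate_normal

class DirectionAccumulator:
    """Accumulate per-frame GMM evidence for each reaching direction."""

    def __init__(self, class_gmms, threshold=0.95):
        # class_gmms: one list per direction of (weight, mean, cov) components.
        self.class_gmms = class_gmms
        self.threshold = threshold  # dynamic-stopping confidence level
        self.log_evidence = np.zeros(len(class_gmms))

    def step(self, frame):
        """Consume one feature frame; return predicted class or None."""
        for c, components in enumerate(self.class_gmms):
            like = sum(w * multivariate_normal.pdf(frame, mean=m, cov=S)
                       for w, m, S in components)
            self.log_evidence[c] += np.log(like + 1e-300)
        # Posterior over classes under equal priors.
        post = np.exp(self.log_evidence - self.log_evidence.max())
        post /= post.sum()
        best = int(np.argmax(post))
        return best if post[best] >= self.threshold else None

# Two toy 1-D "directions"; raising `threshold` delays but firms up decisions.
gmms = [[(1.0, [0.0], [[1.0]])], [(1.0, [3.0], [[1.0]])]]
acc = DirectionAccumulator(gmms, threshold=0.95)
for frame in ([0.2], [0.1], [-0.3]):
    decision = acc.step(np.array(frame))
print(decision)  # 0 once the evidence for direction 0 dominates
```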
