Learning assistive teleoperation behaviors from demonstration
Emergency response in hostile environments often involves remotely operated vehicles (ROVs) that are teleoperated, as interaction with the environment is typically required. Many ROV tasks are common to such scenarios and are often recurrent. We show how a probabilistic approach can be used to learn a task behavior model from data. Such a model can then be used to assist an operator performing the same task in future missions. We show how this approach can capture behaviors (constraints) that are present in the training data, and how this model can be combined with the operator's input online. We present an illustrative planar example and elaborate with a non-destructive testing (NDT) scanning task on a teleoperation mock-up using a two-armed Baxter robot. We demonstrate how our approach can learn task-specific behaviors from examples and automatically control the overall system, combining the operator's input and the learned model online, in an assistive teleoperation manner. This can potentially reduce the time and effort required to perform teleoperation tasks that are commonplace to ROV missions in the context of security, maintenance and rescue robotics.
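As an illustration of how a learned model's prediction might be combined with an operator's command online, one common scheme in the probabilistic learning-from-demonstration literature is precision-weighted fusion of Gaussian estimates. The sketch below is a generic product-of-Gaussians blend, not necessarily the paper's actual arbitration rule; all names are illustrative:

```python
import numpy as np

def blend(u_op, Sigma_op, u_model, Sigma_model):
    """Fuse operator command and model prediction, each a Gaussian.

    Directions in which the learned model is confident (low variance)
    are corrected strongly toward the model; elsewhere the operator's
    input dominates.
    """
    P_op = np.linalg.inv(Sigma_op)        # operator precision
    P_model = np.linalg.inv(Sigma_model)  # model precision
    Sigma = np.linalg.inv(P_op + P_model)
    u = Sigma @ (P_op @ u_op + P_model @ u_model)
    return u, Sigma
```

With equal covariances the blend reduces to a simple average of the two commands; as the model's covariance shrinks along a constrained direction, the fused command follows the learned behavior along that direction.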
PATO: Policy Assisted TeleOperation for Scalable Robot Data Collection
Large-scale data is an essential component of machine learning as
demonstrated in recent advances in natural language processing and computer
vision research. However, collecting large-scale robotic data is much more
expensive and slower as each operator can control only a single robot at a
time. To make this costly data collection process efficient and scalable, we
propose Policy Assisted TeleOperation (PATO), a system which automates part of
the demonstration collection process using a learned assistive policy. PATO
autonomously executes repetitive behaviors in data collection and asks for
human input only when it is uncertain about which subtask or behavior to
execute. We conduct teleoperation user studies both with a real robot and a
simulated robot fleet and demonstrate that our assisted teleoperation system
reduces human operators' mental load while improving data collection
efficiency. Further, it enables a single operator to control multiple robots in
parallel, which is a first step towards scalable robotic data collection. For
code and video results, see https://clvrai.com/pato
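One way such an uncertainty-triggered handoff can be realized is with an ensemble of policies whose disagreement serves as an uncertainty proxy. This is a hedged sketch of the general idea, not PATO's actual criterion; `policies` and the threshold are illustrative:

```python
import numpy as np

def step(policies, obs, uncertainty_threshold=0.1):
    """Act autonomously when the policy ensemble agrees; otherwise
    request human input.

    Ensemble disagreement is one common uncertainty proxy; the
    system's real criterion may differ.
    """
    actions = np.stack([p(obs) for p in policies])
    mean_action = actions.mean(axis=0)
    disagreement = actions.var(axis=0).mean()
    if disagreement > uncertainty_threshold:
        return None  # hand control back to the human operator
    return mean_action  # execute the repetitive behavior autonomously
```

Because the assistive policy only interrupts the operator at uncertain decision points, one operator can supervise several robots, each running this loop independently.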
Autonomy Infused Teleoperation with Application to BCI Manipulation
Robot teleoperation systems face a common set of challenges including
latency, low-dimensional user commands, and asymmetric control inputs. User
control with Brain-Computer Interfaces (BCIs) exacerbates these problems
through especially noisy and erratic low-dimensional motion commands due to the
difficulty in decoding neural activity. We introduce a general framework to
address these challenges through a combination of computer vision, user intent
inference, and arbitration between the human input and autonomous control
schemes. Adjustable levels of assistance allow the system to balance the
operator's capabilities and feelings of comfort and control while compensating
for a task's difficulty. We present experimental results demonstrating
significant performance improvement using the shared-control assistance
framework on adapted rehabilitation benchmarks with two subjects implanted with
intracortical brain-computer interfaces controlling a seven degree-of-freedom
robotic manipulator as a prosthetic. Our results further indicate that shared
assistance mitigates perceived user difficulty and even enables successful
performance on previously infeasible tasks. We showcase the extensibility of
our architecture with applications to quality-of-life tasks such as opening a
door, pouring liquids from containers, and manipulation with novel objects in
densely cluttered environments.
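Adjustable levels of assistance are often realized as an arbitration between the user's command and the autonomous controller's command. A minimal sketch, assuming a scalar assistance level (illustrative only; the paper's framework also infers user intent to choose the autonomous command):

```python
import numpy as np

def arbitrate(u_human, u_auto, assist_level):
    """Linearly arbitrate between user and autonomous commands.

    assist_level in [0, 1] sets how much assistance the operator
    receives: 0 is pure teleoperation, 1 is full autonomy.
    """
    assist_level = float(np.clip(assist_level, 0.0, 1.0))
    return (1.0 - assist_level) * np.asarray(u_human, dtype=float) \
        + assist_level * np.asarray(u_auto, dtype=float)
```

Raising the assistance level for noisy, low-dimensional inputs such as BCI commands compensates for task difficulty while preserving the user's sense of control at lower levels.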
Conducting neuropsychological tests with a humanoid robot: design and evaluation
Socially assistive robots with interactive behavioral capabilities have been improving quality of life for a wide range of users by caring for the elderly, training individuals with cognitive disabilities, assisting physical rehabilitation, etc. While the interactive behavioral policies of most systems are scripted, we discuss here key features of a new methodology that enables professional caregivers to teach a socially assistive robot (SAR) how to perform assistive tasks while giving proper instructions, demonstrations and feedback. We describe how socio-communicative gesture controllers – which control the speech, facial displays and hand gestures of our iCub robot – are driven by multimodal events captured from a professional human demonstrator performing a neuropsychological interview. Furthermore, we propose an original online evaluation method for rating the multimodal interactive behaviors of the SAR and show how such a method can help designers identify faulty events.
The Assistive Multi-Armed Bandit
Learning preferences implicit in the choices humans make is a well studied
problem in both economics and computer science. However, most work makes the
assumption that humans are acting (noisily) optimally with respect to their
preferences. Such approaches can fail when people are themselves learning about
what they want. In this work, we introduce the assistive multi-armed bandit,
where a robot assists a human playing a bandit task to maximize cumulative
reward. In this problem, the human does not know the reward function but can
learn it through the rewards received from arm pulls; the robot only observes
which arms the human pulls but not the reward associated with each pull. We
offer sufficient and necessary conditions for successfully assisting the human
in this framework. Surprisingly, better human performance in isolation does not
necessarily lead to better performance when assisted by the robot: a human
policy can do better by effectively communicating its observed rewards to the
robot. We conduct proof-of-concept experiments that support these results. We
see this work as contributing towards a theory behind algorithms for
human-robot interaction.
Comment: Accepted to HRI 201
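To make the information asymmetry concrete: in one round the human pulls an arm and observes a reward, while the robot observes only which arm was pulled. A minimal simulation sketch, assuming an epsilon-greedy human policy (purely illustrative; the paper analyzes a range of human policies):

```python
import random

def assistive_bandit_round(true_means, human_estimates, counts, eps=0.1):
    """One round of the assistive bandit's information structure.

    The human chooses an arm and sees the reward; the robot sees
    only the arm choice and must infer preferences from choices.
    """
    # Human side: epsilon-greedy on its running reward estimates
    # (a stand-in for whatever learning policy the human follows).
    if random.random() < eps:
        arm = random.randrange(len(true_means))
    else:
        arm = max(range(len(true_means)), key=lambda a: human_estimates[a])
    reward = random.gauss(true_means[arm], 1.0)  # observed by human only
    counts[arm] += 1
    human_estimates[arm] += (reward - human_estimates[arm]) / counts[arm]
    # Robot side: observes only `arm`, never `reward`.
    robot_observation = arm
    return robot_observation, reward
```

This structure shows why a human policy that communicates its observations through its choices (rather than one that is merely near-optimal in isolation) can yield better assisted performance.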
Learning to Assist Bimanual Teleoperation using Interval Type-2 Polynomial Fuzzy Inference
Assisting humans in collaborative tasks is a promising application for robots; however, effective assistance remains challenging. In this paper, we propose a method for providing intuitive robotic assistance based on learning from natural human limb coordination. To encode coupling between multiple-limb motions, we use a novel interval type-2 (IT2) polynomial fuzzy inference for modeling trajectory adaptation. The associated polynomial coefficients are estimated using a modified recursive least-squares with a dynamic forgetting factor. We propose to employ a Gaussian process to produce robust human motion predictions, and thus address the uncertainty and measurement noise of the system caused by interactive environments. Experimental results on two types of interaction tasks demonstrate the effectiveness of this approach, which achieves high accuracy in predicting assistive limb motion and enables humans to perform bimanual tasks using only one limb.
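Recursive least-squares with a forgetting factor, the estimator family the paper builds on, can be sketched generically as below. A fixed forgetting factor `lam` stands in for the paper's dynamic one; all names are illustrative:

```python
import numpy as np

class ForgettingRLS:
    """Recursive least-squares with exponential forgetting.

    The forgetting factor lam (0 < lam <= 1) discounts old data so
    the estimate can track time-varying coefficients.
    """

    def __init__(self, dim, lam=0.98, delta=1e3):
        self.lam = lam
        self.theta = np.zeros(dim)    # coefficient estimate
        self.P = delta * np.eye(dim)  # covariance-like matrix

    def update(self, phi, y):
        """Incorporate one regressor vector phi and observation y."""
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)          # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta
```

A smaller `lam` forgets faster and adapts more quickly at the cost of noisier estimates; the paper's modification adjusts this trade-off dynamically.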