Enabling Intuitive Human-Robot Teaming Using Augmented Reality and Gesture Control
Human-robot teaming offers great potential because it combines the
strengths of heterogeneous agents. However, one of the critical
challenges in realizing an effective human-robot team is efficient information
exchange - both from the human to the robot as well as from the robot to the
human. In this work, we present and analyze an augmented reality-enabled,
gesture-based system that supports intuitive human-robot teaming through
improved information exchange. Our proposed system requires no external
instrumentation aside from human-wearable devices and shows promise of
real-world applicability for service-oriented missions. Additionally, we
present preliminary results from a pilot study with human participants, and
highlight lessons learned and open research questions that may help direct
future development, fielding, and experimentation of autonomous HRI systems.

Comment: Proceedings of the Artificial Intelligence for Human-Robot
Interaction AAAI Symposium Series (AI-HRI 2019)

Learning User-Preferred Mappings for Intuitive Robot Control
When humans control drones, cars, and robots, we often have some preconceived
notion of how our inputs should make the system behave. Existing approaches to
teleoperation are typically one-size-fits-all: the designers pre-define a
mapping between human inputs and robot actions, and every user must adapt to
this mapping over repeated interactions. Instead, we
propose a personalized method for learning the human's preferred or
preconceived mapping from a few robot queries. Given a robot controller, we
identify an alignment model that transforms the human's inputs so that the
controller's output matches their expectations. We make this approach
data-efficient by recognizing that human mappings have strong priors: we expect
the input space to be proportional, reversible, and consistent. Incorporating
these priors ensures that the robot learns an intuitive mapping from few
examples. We test our learning approach in robot manipulation tasks inspired by
assistive settings, where each user has different personal preferences and
physical capabilities for teleoperating the robot arm. Our simulated and
experimental results suggest that learning the mapping between inputs and robot
actions improves objective and subjective performance when compared to manually
defined alignments or learned alignments without intuitive priors. The
supplementary video showing these user studies can be found at:
https://youtu.be/rKHka0_48-Q

Comment: 8 pages, 7 figures, Proceedings of the IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS), October 2020
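
To make the priors concrete, below is a minimal sketch in Python/PyTorch of how an alignment model might be fit to a few query pairs while being regularized toward proportionality and reversibility. This is an illustration under assumed settings (a 2-D joystick controlling a 6-DoF arm), not the authors' implementation; the names AlignmentModel, prior_loss, and learn_mapping, along with all dimensions and loss weights, are hypothetical.

import torch
import torch.nn as nn

class AlignmentModel(nn.Module):
    """Hypothetical alignment: maps a low-dimensional human input z
    (e.g., a 2-D joystick) to a robot action (e.g., a 6-DoF arm command)."""
    def __init__(self, input_dim=2, action_dim=6, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, z):
        return self.net(z)

def prior_loss(model, z, alpha=0.5):
    """Regularizers encoding the intuitive priors, evaluated on probe inputs z."""
    a = model(z)
    # Proportionality: scaling the input should proportionally scale the action.
    proportional = ((model(alpha * z) - alpha * a) ** 2).mean()
    # Reversibility: negating the input should undo the action.
    reversible = ((model(-z) + a) ** 2).mean()
    # Consistency (same input -> same action regardless of robot state) holds
    # by construction here, since this model does not take state as input.
    return proportional + reversible

def learn_mapping(queries, responses, epochs=500, lr=1e-2, weight=0.1):
    """Fit the alignment to a few (human input, intended action) pairs,
    regularized by the priors so that few examples suffice."""
    model = AlignmentModel(input_dim=queries.shape[1],
                           action_dim=responses.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        fit = ((model(queries) - responses) ** 2).mean()  # match user's examples
        probes = torch.randn(64, queries.shape[1])        # inputs for the priors
        loss = fit + weight * prior_loss(model, probes)
        loss.backward()
        opt.step()
    return model

The point of the prior terms is data efficiency: they constrain the space of plausible mappings, so a handful of robot queries can pin down an alignment that would otherwise require many demonstrations.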