599 research outputs found
Overview of some Command Modes for Human-Robot Interaction Systems
Interaction and command modes, as well as their combinations, are essential features of modern and future robotic systems that interact with human beings in various dynamic environments. This paper presents a synthetic overview of the main command modes used in Human-Robot Interaction Systems (HRIS). It covers the earliest command modes, namely tele-manipulation, off-line robot programming, and traditional elementary teaching by demonstration. It then introduces more recent command modes, fostered by artificial intelligence techniques implemented on more powerful computers. In this context, we specifically consider the following modes: interactive programming based on graphical user interfaces, and voice-based, pointing-on-image-based, gesture-based, and brain-based commands.
Bootstrapping Robotic Skill Learning With Intuitive Teleoperation: Initial Feasibility Study
Robotic skill learning has been increasingly studied, but collecting demonstrations is more challenging than collecting images/videos in computer vision or text in natural language processing. This paper presents a skill learning paradigm that uses intuitive teleoperation devices to generate high-quality human demonstrations efficiently for data-driven robotic skill learning. Building on a reliable teleoperation interface, the da Vinci Research Kit (dVRK) master, we propose a system called dVRK-Simulator-for-Demonstration (dS4D). Various manipulation tasks show the system's effectiveness and its efficiency advantages over other interfaces. We also investigate policy learning on the collected data, which verifies initial feasibility. We believe the proposed paradigm can facilitate robot learning driven by high-quality demonstrations that are efficient to generate.
Comment: 10 pages, 4 figures, accepted by ISER202
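The collection paradigm described above amounts to a record loop that logs (observation, action) pairs while an operator teleoperates the robot. A minimal sketch follows; the `teleop` and `robot` interfaces are hypothetical placeholders, not the actual dVRK/dS4D API:

```python
import time

def collect_demonstration(teleop, robot, hz=20, max_steps=200):
    """Record one teleoperated demonstration as (observation, action) pairs.

    `teleop` and `robot` are assumed objects exposing read_command(),
    get_observation(), and apply_action(); real systems differ.
    """
    trajectory = []
    dt = 1.0 / hz
    for _ in range(max_steps):
        obs = robot.get_observation()    # e.g. joint states, camera frames
        action = teleop.read_command()   # operator input from the master device
        robot.apply_action(action)
        trajectory.append({"obs": obs, "action": action})
        time.sleep(dt)                   # hold the target collection rate
    return trajectory
```

Each recorded trajectory can then be fed directly to an imitation-learning pipeline as a supervised dataset of state-action pairs.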
GELLO: A General, Low-Cost, and Intuitive Teleoperation Framework for Robot Manipulators
Imitation learning from human demonstrations is a powerful framework to teach
robots new skills. However, the performance of the learned policies is
bottlenecked by the quality, scale, and variety of the demonstration data. In
this paper, we aim to lower the barrier to collecting large and high-quality
human demonstration data by proposing GELLO, a general framework for building
low-cost and intuitive teleoperation systems for robotic manipulation. Given a
target robot arm, we build a GELLO controller that has the same kinematic
structure as the target arm, leveraging 3D-printed parts and off-the-shelf
motors. GELLO is easy to build and intuitive to use. Through an extensive user
study, we show that GELLO enables more reliable and efficient demonstration
collection compared to commonly used teleoperation devices in the imitation
learning literature such as VR controllers and 3D spacemouses. We further
demonstrate the capabilities of GELLO for performing complex bi-manual and
contact-rich manipulation tasks. To make GELLO accessible to everyone, we have
designed and built GELLO systems for 3 commonly used robotic arms: Franka, UR5,
and xArm. All software and hardware are open-sourced and can be found on our
website: https://wuphilipp.github.io/gello/
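Because a GELLO controller shares the target arm's kinematic structure, teleoperation essentially reduces to mirroring joint angles from the leader device onto the follower robot. A minimal sketch of one mirroring step, with a hypothetical smoothing parameter (not GELLO's actual code):

```python
import numpy as np

def mirror_joints(leader_q, follower_q, alpha=0.2):
    """One leader-follower joint-mirroring step.

    Since leader and follower have matching kinematics, no inverse
    kinematics is needed: the follower tracks the leader's joint
    angles directly. `alpha` is an assumed low-pass gain that smooths
    operator jitter (alpha=1.0 would copy angles exactly).
    """
    leader_q = np.asarray(leader_q, dtype=float)
    follower_q = np.asarray(follower_q, dtype=float)
    return follower_q + alpha * (leader_q - follower_q)
```

Running this at the control loop rate drives the follower's joints toward the leader's pose, which is what makes kinematically matched controllers intuitive to operate.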
Deep Imitation Learning for Humanoid Loco-manipulation through Human Teleoperation
We tackle the problem of developing humanoid loco-manipulation skills with
deep imitation learning. The difficulty of collecting task demonstrations and
training policies for humanoids with a high degree of freedom presents
substantial challenges. We introduce TRILL, a data-efficient framework for
training humanoid loco-manipulation policies from human demonstrations. In this
framework, we collect human demonstration data through an intuitive Virtual
Reality (VR) interface. We employ the whole-body control formulation to
transform task-space commands by human operators into the robot's joint-torque
actuation while stabilizing its dynamics. By employing high-level action
abstractions tailored for humanoid loco-manipulation, our method can
efficiently learn complex sensorimotor skills. We demonstrate the effectiveness
of TRILL in simulation and on a real-world robot for performing various
loco-manipulation tasks. Videos and additional materials can be found on the
project page: https://ut-austin-rpl.github.io/TRILL
Comment: Submitted to Humanoids 202
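The mapping from operator task-space commands to joint-torque actuation can be illustrated with a Jacobian-transpose sketch. This is a generic simplification with assumed interfaces and a simple damping term, not TRILL's actual whole-body control formulation:

```python
import numpy as np

def task_space_to_torque(jacobian, wrench, q_dot, damping=0.5):
    """Map a desired task-space wrench to joint torques.

    tau = J^T f - d * q_dot
    The Jacobian transpose projects the task-space wrench `f` into
    joint space; the joint-velocity damping term stabilizes the
    motion. A full whole-body controller additionally handles
    contact constraints and balancing, which are omitted here.
    """
    J = np.asarray(jacobian, dtype=float)
    f = np.asarray(wrench, dtype=float)
    qd = np.asarray(q_dot, dtype=float)
    return J.T @ f - damping * qd
```

In a VR teleoperation setup, `wrench` would come from a task-space controller tracking the operator's hand pose, and the resulting torques would be sent to the robot's joint actuators each control cycle.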