Autonomy Infused Teleoperation with Application to BCI Manipulation
Robot teleoperation systems face a common set of challenges including
latency, low-dimensional user commands, and asymmetric control inputs. User
control with Brain-Computer Interfaces (BCIs) exacerbates these problems
through especially noisy and erratic low-dimensional motion commands due to the
difficulty in decoding neural activity. We introduce a general framework to
address these challenges through a combination of computer vision, user intent
inference, and arbitration between the human input and autonomous control
schemes. Adjustable levels of assistance allow the system to balance the
operator's capabilities and feelings of comfort and control while compensating
for a task's difficulty. We present experimental results demonstrating
significant performance improvement using the shared-control assistance
framework on adapted rehabilitation benchmarks with two subjects implanted with
intracortical brain-computer interfaces controlling a seven degree-of-freedom
robotic manipulator as a prosthetic. Our results further indicate that shared
assistance mitigates perceived user difficulty and even enables successful
performance on previously infeasible tasks. We showcase the extensibility of
our architecture with applications to quality-of-life tasks such as opening a
door, pouring liquids from containers, and manipulating novel objects in
densely cluttered environments.
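A common way to realize this kind of arbitration (a minimal sketch, not the paper's implementation; the confidence signal, blending rule, and assistance cap are illustrative assumptions) is a confidence-weighted linear blend of the user's command and the autonomous policy's command:

```python
import numpy as np

def arbitrate(user_cmd, auto_cmd, confidence, max_assist=0.8):
    """Blend a noisy user command with an autonomous policy's command.

    confidence: intent-inference confidence in [0, 1]. Higher confidence
    shifts authority toward the autonomous command, capped at max_assist
    so the operator always retains some direct control.
    """
    alpha = min(confidence, max_assist)  # assistance level in [0, max_assist]
    return alpha * np.asarray(auto_cmd) + (1.0 - alpha) * np.asarray(user_cmd)

# Example: an erratic decoded velocity is pulled toward the goal direction.
user = [0.9, -0.2, 0.1]   # noisy end-effector velocity decoded from the BCI
auto = [0.5, 0.5, 0.0]    # autonomous policy's velocity toward the inferred goal
blended = arbitrate(user, auto, confidence=0.5)
```

Raising `max_assist` trades operator authority for task reliability, which matches the adjustable assistance levels the abstract describes.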
Interest of the dual hybrid control scheme for teleoperation with time delays
A new teleoperation scheme called "dual hybrid control" is described. It is shown to increase telepresence compared to traditional force-feedback schemes, and it is particularly well suited to teleoperation with time delays.
Intuitive Hand Teleoperation by Novice Operators Using a Continuous Teleoperation Subspace
Human-in-the-loop manipulation is useful when autonomous grasping cannot
deal sufficiently well with corner cases or cannot operate fast enough.
Using the teleoperator's hand as an input device can provide an intuitive
control method but requires mapping between pose spaces which may not be
similar. We propose a low-dimensional and continuous teleoperation subspace
which can be used as an intermediary for mapping between different hand pose
spaces. We present an algorithm to project between pose space and teleoperation
subspace. We use a non-anthropomorphic robot to experimentally prove that it is
possible for teleoperation subspaces to effectively and intuitively enable
teleoperation. In experiments, novice users completed pick and place tasks
significantly faster using teleoperation subspace mapping than they did using
state-of-the-art teleoperation methods.
Comment: ICRA 2018, 7 pages, 7 figures, 2 tables.
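The mapping idea can be sketched with linear projections into and out of a shared low-dimensional subspace; the random basis matrices and dimensions below are illustrative placeholders, not the paper's learned subspace:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each hand's pose space relates to a shared low-dimensional teleoperation
# subspace through a pair of linear maps. A real system would fit these
# maps from data; random matrices stand in for them here.
human_dim, robot_dim, sub_dim = 20, 7, 3
P_human = rng.standard_normal((sub_dim, human_dim))  # human pose -> subspace
R_robot = rng.standard_normal((robot_dim, sub_dim))  # subspace -> robot pose

def retarget(human_pose):
    """Project a human hand pose into the subspace, then lift to the robot."""
    z = P_human @ human_pose  # low-dimensional teleoperation coordinates
    return R_robot @ z        # robot hand pose target

robot_pose = retarget(rng.standard_normal(human_dim))
```

Because the subspace is the only shared representation, the same operator-side projection can drive hands with very different kinematics, including non-anthropomorphic ones.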
Graphics simulation and training aids for advanced teleoperation
Graphics displays can be of significant aid in accomplishing a teleoperation task throughout all three phases of off-line task analysis and planning, operator training, and online operation. In the first phase, graphics displays provide substantial aid to investigate work cell layout, motion planning with collision detection and with possible redundancy resolution, and planning for camera views. In the second phase, graphics displays can serve as very useful tools for introductory training of operators before training them on actual hardware. In the third phase, graphics displays can be used for previewing planned motions and monitoring actual motions in any desired viewing angle, or, when communication time delay prevails, for providing predictive graphics overlay on the actual camera view of the remote site to show the non-time-delayed consequences of commanded motions in real time. This paper addresses potential space applications of graphics displays in all three operational phases of advanced teleoperation. Possible applications are illustrated with techniques developed and demonstrated in the Advanced Teleoperation Laboratory at JPL. The examples described include task analysis and planning of a simulated Solar Maximum Satellite Repair task, a novel force-reflecting teleoperation simulator for operator training, and preview and predictive displays for on-line operations.
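The predictive-overlay idea, showing the operator the non-time-delayed result of commands already sent, can be sketched as forward integration of the pending velocity commands (a simplified kinematic preview; the 3-DOF pose and control period are illustrative assumptions, not the JPL system's model):

```python
def predict_pose(current_pose, pending_commands, dt):
    """Integrate velocity commands that were sent but whose effect has not
    yet appeared in the delayed camera view, previewing where the commanded
    motion will actually end up."""
    x, y, z = current_pose
    for vx, vy, vz in pending_commands:
        x, y, z = x + dt * vx, y + dt * vy, z + dt * vz
    return (x, y, z)

# Preview after two pending commands at a 0.1 s control period.
preview = predict_pose((0.0, 0.0, 0.5),
                       [(1.0, 0.0, 0.0), (1.0, 0.5, 0.0)],
                       dt=0.1)
```

The predicted pose would then be rendered as a graphics overlay on the delayed camera image, so the operator steers the prediction rather than the lagging video.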
RUR53: an Unmanned Ground Vehicle for Navigation, Recognition and Manipulation
This paper proposes RUR53: an Unmanned Ground Vehicle able to autonomously
navigate through, identify, and reach areas of interest and, once there, recognize,
localize, and manipulate work tools to perform complex manipulation tasks. The
proposed contribution includes a modular software architecture in which each
module solves a specific sub-task and which can easily be extended to satisfy
new requirements. Indoor and outdoor tests demonstrate the capability of
the proposed system to autonomously detect a target object (a panel) and
precisely dock in front of it while avoiding obstacles. They show it can
autonomously recognize and manipulate target work tools (i.e., wrenches and
valve stems) to accomplish complex tasks (i.e., use a wrench to rotate a valve
stem). A specific case study is described in which the proposed modular
architecture allows an easy switch to a semi-teleoperated mode. The paper
exhaustively describes both the hardware and software setup of
RUR53, its performance when tested at the 2017 Mohamed Bin Zayed International
Robotics Challenge, and the lessons learned at this
competition, where we ranked third in the Grand Challenge in collaboration with
the Czech Technical University in Prague, the University of Pennsylvania, and
the University of Lincoln (UK).
Comment: This article has been accepted for publication in Advanced Robotics, published by Taylor & Francis.
Supervised Autonomous Locomotion and Manipulation for Disaster Response with a Centaur-like Robot
Mobile manipulation tasks are one of the key challenges in the field of
search and rescue (SAR) robotics requiring robots with flexible locomotion and
manipulation abilities. Since the tasks are mostly unknown in advance, the
robot has to adapt to a wide variety of terrains and workspaces during a
mission. The centaur-like robot Centauro has a hybrid legged-wheeled base and
an anthropomorphic upper body to carry out complex tasks in environments too
dangerous for humans. Due to its high number of degrees of freedom, controlling
the robot with direct teleoperation approaches is challenging and exhausting.
Supervised autonomy approaches are promising to increase quality and speed of
control while keeping the flexibility to solve unknown tasks. We developed a
set of operator assistance functionalities with different levels of autonomy to
control the robot for challenging locomotion and manipulation tasks. The
integrated system was evaluated in disaster response scenarios and showed
promising performance.
Comment: In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 2018.
An 8-DOF dual-arm system for advanced teleoperation performance experiments
This paper describes the electro-mechanical and control features of an 8-DOF manipulator manufactured by AAI Corporation and installed at the Jet Propulsion Laboratory (JPL) in a dual-arm setting. The 8-DOF arm incorporates a variety of features not found in other laboratory or industrial manipulators. Some of the unique features are: an 8-DOF revolute configuration with no lateral offsets at the joint axes; a 1-to-5 payload-to-weight ratio with a 20 kg (44 lb) payload at a 1.75 m (68.5 in.) reach; joint position measurement with dual relative encoders and a potentiometer; infinite roll of joint 8 with electrical and fiber-optic slip rings; an internal fiber-optic link to 'smart' end effectors; a four-axis wrist; graphite-epoxy links; high link and joint stiffness; and use of an upgraded JPL Universal Motor Controller (UMC) capable of driving up to 16 joints. The 8-DOF arm is equipped with a 'smart' end effector which incorporates a 6-DOF force-moment sensor at the end-effector base and grasp force sensors at the base of the parallel jaws. The 8-DOF arm is interfaced to a 6-DOF force-reflecting hand controller. The same system is duplicated for, and installed at, NASA Langley.
An Open-Source 7-Axis, Robotic Platform to Enable Dexterous Procedures within CT Scanners
This paper describes the design, manufacture, and performance of a highly
dexterous, low-profile, 7 Degree-of-Freedom (DOF) robotic arm for CT-guided
percutaneous needle biopsy. Direct CT guidance allows physicians to localize
tumours quickly; however, needle insertion is still performed by hand. This
system is mounted to a fully active gantry superior to the patient's head and
teleoperated by a radiologist. Unlike other similar robots, this robot's fully
serial-link approach uses a unique combination of belt and cable drives for
high transparency and minimal backlash, allowing for an expansive working area
and numerous approach angles to targets all while maintaining a small in-bore
cross-section of less than . Simulations verified the system's
expansive collision-free workspace and ability to hit targets across the
entire chest, as required for lung cancer biopsy. Targeting error is on average
on a teleoperated accuracy task, illustrating the system's sufficient
accuracy to perform biopsy procedures. The system is designed for lung biopsies
due to the large working volume that is required for reaching peripheral lung
lesions, though, with its large working volume and small in-bore
cross-sectional area, the robotic system is effectively a general-purpose
CT-compatible manipulation device for percutaneous procedures. Finally, with
the considerable development time undertaken in designing a precise and
flexible-use system and with the desire to reduce the burden of other
researchers in developing algorithms for image-guided surgery, this system
provides open-access, and to the best of our knowledge, is the first
open-hardware image-guided biopsy robot of its kind.
Comment: 8 pages, 9 figures, final submission to IROS 2019.