Learning to Navigate Cloth using Haptics
We present a controller that allows an arm-like manipulator to navigate
deformable cloth garments in simulation through the use of haptic information.
The main challenge of such a controller is to avoid getting tangled in, tearing
or punching through the deforming cloth. Our controller aggregates force
information from a number of haptic-sensing spheres all along the manipulator
for guidance. Based on haptic forces, each individual sphere updates its target
location, and the conflicts that arise among this set of desired positions are
resolved by solving an inverse kinematics problem with constraints.
Reinforcement learning is used to train the controller for a single
haptic-sensing sphere, where a training run is terminated (and thus penalized)
when large forces are detected due to contact between the sphere and a
simplified model of the cloth. In simulation, we demonstrate successful
navigation of a robotic arm through a variety of garments, including an
isolated sleeve, a jacket, a shirt, and shorts. Our controller outperforms two
baseline controllers: one without haptics and another trained on large forces
between the sphere and cloth but without early termination.
Comment: Supplementary video available at https://youtu.be/iHqwZPKVd4A.
Related publications http://www.cc.gatech.edu/~karenliu/Robotic_dressing.htm
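The per-sphere update and conflict resolution described in the abstract can be illustrated with a small sketch. This is hypothetical code (the function names, the retreat-along-force rule, and the weighted-average stand-in for the constrained inverse kinematics solve are all my own assumptions, not the paper's implementation):

```python
import numpy as np

def update_targets(targets, forces, step=0.01):
    """Move each haptic-sensing sphere's target away from its sensed
    contact force; spheres feeling no contact keep their targets.
    targets: (N, 3) positions; forces: (N, 3) sensed contact forces."""
    norms = np.linalg.norm(forces, axis=1, keepdims=True)
    push = np.where(norms > 1e-6, forces / np.maximum(norms, 1e-6), 0.0)
    return targets + step * push

def resolve_conflicts(targets, weights):
    """Stand-in for the constrained IK solve: pick one compromise
    displacement that best satisfies all (possibly conflicting) sphere
    targets, here as a simple weighted least-squares average."""
    w = np.asarray(weights, dtype=float)[:, None]
    return (w * targets).sum(axis=0) / w.sum()
```

In the real controller the compromise is found by a constrained inverse kinematics solver over the arm's joints; the weighted average above only conveys the "aggregate conflicting desires into one motion" idea.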
Approaches to Safety in Inverse Reinforcement Learning
As the capabilities of robotic systems increase, we move closer to the vision of ubiquitous robotic assistance throughout our everyday lives. In transitioning robots and autonomous systems from traditional factory and industrial settings, it is critical that these systems are able to adapt to uncertain environments and the humans who populate them. In order to better understand and predict the behavior of these humans, Inverse Reinforcement Learning (IRL) uses demonstrations to infer the underlying motivations driving human actions. The information gained from IRL can be used to improve a robot's understanding of the environment as well as to allow the robot to better interact with or assist humans.
In this dissertation, we address the challenge of incorporating safety into the application of IRL. We first consider safety in the context of using IRL for assisting humans in shared-control tasks. Through a user study, we show how incorporating haptic feedback into human assistance can increase humans' sense of control while improving safety in the presence of imperfect learning. Further, we present our method for using IRL to automatically create such haptic feedback policies from task demonstrations.
We further address safety in IRL by incorporating notions of safety directly into the learning process. Currently, most work on IRL focuses on learning explanatory rewards that humans are modeled as optimizing. However, pure reward optimization can fail to effectively capture hard requirements, such as safety constraints. We draw on the definition of safety from Hamilton-Jacobi reachability analysis to infer human perceptions of safety and to modify robot behavior to respect these learned safety constraints.
We also extend this work on learning constraints by adapting the framework of Maximum Entropy IRL in order to learn hard constraints given nominal task rewards, and we show how this technique infers the most likely constraints to align expected behavior with observed demonstrations.
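The core intuition behind constraint inference can be sketched very simply. This is a hypothetical toy (my own simplification, not the dissertation's Maximum Entropy formulation): states that the nominal-reward-optimal plan would visit, but that no demonstration ever enters, are strong candidates for hard constraints explaining the mismatch between expected and observed behavior.

```python
def infer_constraints(nominal_path, demos):
    """Toy constraint inference: return states on the nominal-reward-optimal
    path that no demonstration visits. In the full method, a Maximum Entropy
    model scores which such states most increase demo likelihood when
    forbidden; here we only flag the candidates."""
    visited = set()
    for demo in demos:
        visited.update(demo)
    return [s for s in nominal_path if s not in visited]
```

For example, if the shortest path under the nominal reward crosses a cell that every demonstrator detours around, that cell is inferred as a likely constraint.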
Assisted Teleoperation in Changing Environments with a Mixture of Virtual Guides
Haptic guidance is a powerful technique to combine the strengths of humans
and autonomous systems for teleoperation. The autonomous system can provide
haptic cues to enable the operator to perform precise movements; the operator
can interfere with the plan of the autonomous system leveraging his/her
superior cognitive capabilities. However, providing haptic cues such that the
individual strengths are not impaired is challenging because low forces provide
little guidance, whereas strong forces can hinder the operator in realizing
his/her plan. Based on variational inference, we learn a Gaussian mixture model
(GMM) over trajectories to accomplish a given task. The learned GMM is used to
construct a potential field which determines the haptic cues. The potential
field smoothly changes during teleoperation based on our updated belief over
the plans and their respective phases. Furthermore, new plans are learned
online when the operator does not follow any of the proposed plans, or after
changes in the environment. User studies confirm that our framework helps users
perform teleoperation tasks more accurately than without haptic cues and, in
some cases, faster. Moreover, we demonstrate the use of our framework to help a
subject teleoperate a 7-DoF manipulator in a pick-and-place task.
Comment: 19 pages, 9 figures
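The haptic-cue construction described above can be sketched as follows. This is a hypothetical illustration under my own assumptions (a fitted GMM is given; the cue is taken as the gradient of the mixture's log-density, pulling the operator toward high-probability regions of the learned plans):

```python
import numpy as np

def gmm_haptic_force(x, means, covs, weights, gain=1.0):
    """Haptic cue from a Gaussian mixture over demonstrated trajectories:
    grad log p(x) = sum_k r_k(x) * inv(Sigma_k) @ (mu_k - x),
    where r_k are the posterior responsibilities of the components."""
    x = np.asarray(x, dtype=float)
    dens = []
    for mu, cov, w in zip(means, covs, weights):
        diff = x - mu
        inv = np.linalg.inv(cov)
        e = np.exp(-0.5 * diff @ inv @ diff)
        norm = np.sqrt(np.linalg.det(2 * np.pi * cov))
        dens.append(w * e / norm)  # weighted component density at x
    dens = np.array(dens)
    r = dens / dens.sum()  # responsibilities
    grad = sum(rk * np.linalg.inv(cov) @ (mu - x)
               for rk, mu, cov in zip(r, means, covs))
    return gain * grad
```

The force vanishes at the density's modes and grows with distance from the learned plans, which matches the "potential field" behavior the abstract describes; the paper additionally blends fields over inferred plans and phases.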
AR3n: A Reinforcement Learning-based Assist-As-Needed Controller for Robotic Rehabilitation
In this paper, we present AR3n (pronounced as Aaron), an assist-as-needed
(AAN) controller that utilizes reinforcement learning to supply adaptive
assistance during a robot-assisted handwriting rehabilitation task. Unlike
previous AAN controllers, our method does not rely on patient-specific
controller parameters or physical models. We propose the use of a virtual
patient model to generalize AR3n across multiple subjects. The system modulates
robotic assistance in real time based on a subject's tracking error, while
minimizing the amount of robotic assistance. The controller is experimentally
validated through a set of simulations and human subject experiments. Finally,
a comparative study with a traditional rule-based controller is conducted to
analyze differences in the assistance mechanisms of the two controllers.
Comment: 8 pages, 9 figures, IEEE RA-
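The assist-as-needed principle, modulating assistance with tracking error while keeping total assistance minimal, can be sketched in a few lines. This is a hypothetical rule-of-thumb illustration (thresholds, growth, and decay rates are my own placeholders; AR3n itself learns this modulation with reinforcement learning rather than fixed rules):

```python
def update_assistance(gain, tracking_error, threshold=0.02,
                      grow=1.1, decay=0.95, max_gain=1.0):
    """Raise the robot's assistance gain when the subject's tracking error
    exceeds a tolerance; otherwise let it decay so that the total amount
    of assistance stays as small as the task allows."""
    if tracking_error > threshold:
        return min(gain * grow, max_gain)
    return gain * decay
```

A traditional rule-based AAN controller behaves roughly like this loop; the learned controller replaces the fixed update with a policy trained against a virtual patient model.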
Autonomy Infused Teleoperation with Application to BCI Manipulation
Robot teleoperation systems face a common set of challenges including
latency, low-dimensional user commands, and asymmetric control inputs. User
control with Brain-Computer Interfaces (BCIs) exacerbates these problems
through especially noisy and erratic low-dimensional motion commands due to the
difficulty in decoding neural activity. We introduce a general framework to
address these challenges through a combination of computer vision, user intent
inference, and arbitration between the human input and autonomous control
schemes. Adjustable levels of assistance allow the system to balance the
operator's capabilities and feelings of comfort and control while compensating
for a task's difficulty. We present experimental results demonstrating
significant performance improvement using the shared-control assistance
framework on adapted rehabilitation benchmarks with two subjects implanted with
intracortical brain-computer interfaces controlling a seven degree-of-freedom
robotic manipulator as a prosthetic. Our results further indicate that shared
assistance mitigates perceived user difficulty and even enables successful
performance on previously infeasible tasks. We showcase the extensibility of
our architecture with applications to quality-of-life tasks such as opening a
door, pouring liquids from containers, and manipulation with novel objects in
densely cluttered environments.
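The arbitration step described above, blending the human's input with the autonomous controller, is commonly a linear blend. The following is a minimal sketch under my own assumptions (the blending weight here is simply confidence times an adjustable assistance level; the paper's arbitration scheme may differ):

```python
def arbitrate(user_cmd, auto_cmd, confidence, assist_level=1.0):
    """Blend the (noisy) user command with the autonomous policy's command.
    alpha grows with the system's confidence in its inferred user intent,
    scaled by an adjustable level of assistance."""
    alpha = max(0.0, min(1.0, confidence * assist_level))
    return [alpha * a + (1.0 - alpha) * u
            for u, a in zip(user_cmd, auto_cmd)]
```

With zero confidence the user retains full control; with full confidence the robot executes its own plan, which is how adjustable assistance trades off feelings of control against task difficulty.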
Context-aware learning for robot-assisted endovascular catheterization
Endovascular intervention has become a mainstream treatment for cardiovascular diseases. However, multiple challenges remain, such as unwanted radiation exposure, limited two-dimensional image guidance, and insufficient force perception and haptic cues. Fast-evolving robot-assisted platforms improve the stability and accuracy of instrument manipulation, and the master-slave design removes radiation exposure for the operator. However, the integration of robotic systems into the current surgical workflow is still debatable, since there is little value in delegating repetitive, easy tasks to robotic teleoperation. Current systems offer very low autonomy; autonomous features could bring further benefits, such as reduced cognitive workload and human error, safer and more consistent instrument manipulation, and the ability to incorporate various medical imaging and sensing modalities.
This research proposes frameworks for automated catheterisation based on machine learning algorithms, including Learning-from-Demonstration, Reinforcement Learning, and Imitation Learning. These frameworks focus on integrating task context into the skill-learning process, achieving better adaptation to different situations and safer tool-tissue interactions. Furthermore, the autonomous features were applied to a next-generation, MR-safe robotic catheterisation platform. The results provide important insights into improving catheter navigation through autonomous task planning and self-optimization with clinically relevant factors, and they motivate the design of intelligent, intuitive, and collaborative robots under non-ionizing imaging modalities.