Deep Haptic Model Predictive Control for Robot-Assisted Dressing
Robot-assisted dressing offers an opportunity to benefit the lives of many
people with disabilities, such as some older adults. However, robots currently
lack common sense about the physical implications of their actions on people.
The physical implications of dressing are complicated by non-rigid garments,
which can result in a robot indirectly applying high forces to a person's body.
We present a deep recurrent model that, when given a proposed action by the
robot, predicts the forces a garment will apply to a person's body. We also
show that a robot can provide better dressing assistance by using this model
with model predictive control. The predictions made by our model only use
haptic and kinematic observations from the robot's end effector, which are
readily attainable. Collecting training data from real world physical
human-robot interaction can be time consuming, costly, and put people at risk.
Instead, we train our predictive model using data collected in an entirely
self-supervised fashion from a physics-based simulation. We evaluated our
approach with a PR2 robot that attempted to pull a hospital gown onto the arms
of 10 human participants. With a 0.2s prediction horizon, our controller
succeeded at high rates and lowered applied force while navigating the garment
around a person's fist and elbow without getting caught. Shorter prediction
horizons resulted in significantly reduced performance with the sleeve catching
on the participants' fists and elbows, demonstrating the value of our model's
predictions. These behaviors of mitigating catches emerged from our deep
predictive model and the controller objective function, which primarily
penalizes high forces.
Comment: 8 pages, 12 figures, 1 table, 2018 IEEE International Conference on Robotics and Automation (ICRA)
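The controller described above can be sketched as a sampling-based MPC step that scores candidate end-effector actions with a learned force predictor. The snippet below is a minimal illustration of that idea only: `predict_forces` is a hypothetical stand-in for the paper's deep recurrent model, and the sampling scheme and cost weights are assumptions, not the authors' exact controller.

```python
import numpy as np

def predict_forces(obs_history, action):
    """Toy surrogate for the learned force model: predicted force grows
    with how far the action drags the garment from its current state."""
    return float(np.linalg.norm(obs_history[-1, :3] + action) ** 2)

def mpc_step(obs_history, goal, n_samples=128, step=0.02,
             w_force=1.0, w_progress=0.1, seed=0):
    """Sample candidate end-effector displacements and pick the one that
    minimizes predicted force plus a progress term toward the goal."""
    rng = np.random.default_rng(seed)
    ee_pos = obs_history[-1, :3]          # current end-effector position
    best_action, best_cost = np.zeros(3), np.inf
    for _ in range(n_samples):
        action = rng.normal(scale=step, size=3)
        cost = (w_force * predict_forces(obs_history, action)
                + w_progress * np.linalg.norm(goal - (ee_pos + action)))
        if cost < best_cost:
            best_cost, best_action = cost, action
    return best_action
```

Because the force term dominates the cost, the selected action stays small whenever the predictor expects high contact forces, which mirrors the paper's force-penalizing objective.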
Active Inference for Integrated State-Estimation, Control, and Learning
This work presents an approach for control, state-estimation and learning
model (hyper)parameters for robotic manipulators. It is based on the active
inference framework, prominent in computational neuroscience as a theory of the
brain, where behaviour arises from minimizing variational free-energy. The
robotic manipulator shows adaptive and robust behaviour compared to
state-of-the-art methods. Additionally, we show the exact relationship to
classic methods such as PID control. Finally, we show that by learning a
temporal parameter and model variances, our approach can deal with unmodelled
dynamics, damps oscillations, and is robust against disturbances and poor
initial parameters. The approach is validated on the `Franka Emika Panda' 7 DoF
manipulator.
Comment: 7 pages, 6 figures, accepted for presentation at the International Conference on Robotics and Automation (ICRA) 202
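The core mechanism in active inference — beliefs and actions both descending the gradient of variational free energy — can be shown on a one-degree-of-freedom toy system. The sketch below is an illustrative simplification, not the authors' Franka Panda implementation: it assumes a first-order plant, folds the model variances into the gains, and uses a single belief state.

```python
def simulate(goal=1.0, steps=5000, dt=0.001, k_mu=10.0, k_a=5.0):
    """Minimal 1-DoF active-inference loop: the belief mu descends the
    free-energy gradient, and the action suppresses sensory prediction
    error (for this plant, the action law reduces to proportional
    feedback on the error, cf. the PID relationship in the paper)."""
    y, mu = 0.0, 0.0              # sensed position and belief about it
    for _ in range(steps):
        eps_y = y - mu            # sensory prediction error
        eps_mu = mu - goal        # prior error pulling beliefs to the goal
        mu += dt * k_mu * (eps_y - eps_mu)  # gradient descent on free energy
        u = -k_a * eps_y                    # action suppresses sensory error
        y += dt * u                         # trivial first-order plant
    return y
```

At equilibrium the sensory error vanishes (y = mu) and the prior error vanishes (mu = goal), so the joint settles at the goal without an explicit reference-tracking controller.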
Trajectory Optimization Through Contacts and Automatic Gait Discovery for Quadrupeds
In this work we present a trajectory optimization framework for whole-body
motion planning through contacts. We demonstrate how the proposed approach can
be applied to automatically discover different gaits and dynamic motions on a
quadruped robot. In contrast to most previous methods, we do not pre-specify
contact switches, timings, points or gait patterns, but they are a direct
outcome of the optimization. Furthermore, we optimize over the entire dynamics
of the robot, which enables the optimizer to fully leverage the capabilities of
the robot. To illustrate the spectrum of achievable motions, here we show eight
different tasks, which would require very different control structures when
solved with state-of-the-art methods. Using our trajectory optimization
approach, we solve each task with a simple, high-level cost function and
without any changes to the control structure. Furthermore, we fully integrated
our approach with the robot's control and estimation framework such that
optimization can be run online. By demonstrating a rough manipulation task with
multiple dynamic contact switches, we exemplarily show how optimized
trajectories and control inputs can be directly applied to hardware.
Comment: Video: https://youtu.be/sILuqJBsyK
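The abstract's key point — that contact timing is an outcome of the optimization rather than an input — can be illustrated on a toy 1-D hopper with a soft contact model. The following sketch is an assumption-laden miniature (unit mass, spring-damper ground, random local search instead of the authors' solver): the optimizer is given only a terminal cost, and any ground pushes it uses emerge on their own.

```python
import random

DT, STEPS, G = 0.01, 100, 9.81  # step size, horizon length, gravity

def rollout(controls):
    """Integrate a unit-mass 1-D hopper with a soft (spring-damper) ground."""
    z, v = 0.0, 0.0
    for u in controls:
        f_ground = (-1000.0 * z - 50.0 * v) if z < 0.0 else 0.0
        f_leg = u if z <= 0.05 else 0.0   # leg can only push near the ground
        v += DT * (f_ground + f_leg - G)  # semi-implicit Euler
        z += DT * v
    return z

def optimize(target=0.5, iters=2000, seed=0):
    """Random local search over the leg-force trajectory. No contact
    switches or timings are pre-specified; they emerge from minimizing
    the terminal cost |z(T) - target|."""
    rng = random.Random(seed)
    controls = [0.0] * STEPS
    best = abs(rollout(controls) - target)
    for _ in range(iters):
        cand = list(controls)
        cand[rng.randrange(STEPS)] += rng.gauss(0.0, 5.0)
        cost = abs(rollout(cand) - target)
        if cost < best:
            best, controls = cost, cand
    return best, controls
```

Because the leg can only act near the ground, the optimizer implicitly schedules contact phases to build up velocity, which is the (much simplified) analogue of gaits emerging from whole-body trajectory optimization.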
A Vision for 2050 - Context-Based Image Understanding for a Human-Robot Soccer Match
We believe it is possible to create the visual subsystem needed for the RoboCup 2050 challenge - a soccer match between humans and robots - within the next decade. In this position paper, we argue that the basic techniques are available, but the main challenge will be to achieve the necessary robustness. We propose to address this challenge through the use of probabilistically modeled context, so that, for instance, a visually indistinct circle is accepted as the ball if it fits well with the ball's motion model, and vice versa. Our vision is accompanied by a sequence of (partially already conducted) experiments for its verification. In these experiments, a human soccer player wears a helmet with a camera and an inertial sensor, and the vision system has to extract from that data all the information a humanoid robot would need to take the human's place.
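The "accept a visually indistinct circle if it fits the motion model" idea amounts to fusing a detection score with a motion-model likelihood before thresholding. The sketch below is an illustrative assumption (a Gaussian motion likelihood and a product fusion rule), not the paper's actual vision pipeline.

```python
import math

def ball_posterior(detection_score, observed_pos, predicted_pos, sigma=0.3):
    """Combine a bottom-up circle-detection score with how well the
    observation fits a Gaussian motion-model prediction."""
    dx = observed_pos[0] - predicted_pos[0]
    dy = observed_pos[1] - predicted_pos[1]
    motion_likelihood = math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))
    return detection_score * motion_likelihood

def accept_ball(detection_score, observed_pos, predicted_pos, threshold=0.2):
    """A weak detection on the predicted track can pass, while a strong
    detection far from the track is rejected -- context gates perception."""
    return ball_posterior(detection_score, observed_pos, predicted_pos) >= threshold
```
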