Learning Task Constraints from Demonstration for Hybrid Force/Position Control
We present a novel method for learning hybrid force/position control from
demonstration. We learn a dynamic constraint frame aligned to the direction of
desired force using Cartesian Dynamic Movement Primitives. In contrast to
approaches that utilize a fixed constraint frame, our approach easily
accommodates tasks with rapidly changing task constraints over time. We
activate only one degree of freedom for force control at any given time,
ensuring motion is always possible orthogonal to the direction of desired
force. Since we utilize demonstrated forces to learn the constraint frame, we
are able to compensate for forces not detected by methods that learn only from
the demonstrated kinematic motion, such as frictional forces between the
end-effector and the contact surface. We additionally propose novel extensions
to the Dynamic Movement Primitive (DMP) framework that encourage robust
transition from free-space motion to in-contact motion in spite of environment
uncertainty. We incorporate force feedback and a dynamically shifting goal to
reduce forces applied to the environment and retain stable contact while
enabling force control. Our methods exhibit low impact forces on contact and
low steady-state tracking error.
Comment: Under review
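The DMP extensions described in the abstract (force feedback and a dynamically shifting goal) can be illustrated with a minimal 1-D sketch. This is not the paper's implementation: the Cartesian DMP, the learned constraint frame, and the exact form of the goal-shifting term are omitted, and all parameter names (`alpha`, `beta`, `k_goal`) are assumptions.

```python
import numpy as np

def dmp_rollout(y0, g, tau=1.0, dt=0.01, steps=200,
                alpha=25.0, beta=6.25, f_des=None, f_meas=None, k_goal=0.0):
    """Minimal 1-D DMP transformation system (illustrative sketch only).

    Integrates tau^2 * y'' = alpha * (beta * (g - y) - tau * y') with
    forward Euler. Optionally shifts the goal g in proportion to the
    force error, a stand-in for the paper's dynamically shifting goal.
    """
    y, dy = y0, 0.0
    traj = []
    for _ in range(steps):
        if f_des is not None and f_meas is not None:
            # assumed form: nudge the goal to reduce the force error
            g = g + k_goal * (f_des - f_meas(y)) * dt
        ddy = alpha * (beta * (g - y) - tau * dy) / tau**2
        dy += ddy * dt
        y += dy * dt
        traj.append(y)
    return np.array(traj)
```

With `alpha = 25` and `beta = alpha / 4` the spring-damper is critically damped, so the rollout converges to the goal without overshoot; the force-error term then moderates that goal on contact.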
Simultaneously encoding movement and sEMG-based stiffness for robotic skill learning
Transferring human stiffness regulation strategies to robots enables them to effectively and efficiently acquire adaptive impedance control policies to deal with uncertainties during the accomplishment of physical contact tasks in an unstructured environment. In this work, we develop such a physical human-robot interaction (pHRI) system, which allows robots to learn variable impedance skills from human demonstrations. Specifically, biological signals, i.e., surface electromyography (sEMG), are utilized for the extraction of human arm stiffness features during the task demonstration. The estimated human arm stiffness is then mapped into a robot impedance controller. The dynamics of both movement and stiffness are simultaneously modeled by combining a hidden semi-Markov model (HSMM) with Gaussian mixture regression (GMR). More importantly, the correlation between the movement information and the stiffness information is encoded in a systematic manner. This approach enables capturing uncertainties over time and space and allows the robot to satisfy both position and stiffness requirements in a task with modulation of the impedance controller. The experimental study validated the proposed approach.
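The GMR step used above (conditioning a joint Gaussian mixture over input and output on a given input) can be sketched as follows. This is the generic regression step only; the HSMM state and duration modeling, and the joint encoding of movement and stiffness from the paper, are not reproduced, and the function names are illustrative.

```python
import numpy as np

def gauss(x, mu, var):
    """Scalar Gaussian density."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def gmr(x, priors, means, covs):
    """Gaussian mixture regression for a GMM over (input t, output y).

    Each component k has means[k] = [mu_t, mu_y] and a 2x2 covariance
    covs[k]. The output is the responsibility-weighted sum of the
    per-component conditional means E[y | t = x].
    """
    # responsibilities of each component for input x
    h = np.array([p * gauss(x, m[0], c[0, 0])
                  for p, m, c in zip(priors, means, covs)])
    h /= h.sum()
    # mix the conditional means mu_y + cov_yt / cov_tt * (x - mu_t)
    yhat = 0.0
    for k, (m, c) in enumerate(zip(means, covs)):
        yhat += h[k] * (m[1] + c[1, 0] / c[0, 0] * (x - m[0]))
    return yhat
```

In the paper's setting the output would be the demonstrated pose together with the sEMG-derived stiffness, so conditioning on time (or state) yields both a reference trajectory and a stiffness profile for the impedance controller.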
Learning Dynamic Robot-to-Human Object Handover from Human Feedback
Object handover is a basic, but essential capability for robots interacting
with humans in many applications, e.g., caring for the elderly and assisting
workers in manufacturing workshops. It appears deceptively simple, as humans
perform object handover almost flawlessly. The success of humans, however,
belies the complexity of object handover as collaborative physical interaction
between two agents with limited communication. This paper presents a learning
algorithm for dynamic object handover, for example, when a robot hands over
water bottles to marathon runners passing by the water station. We formulate
the problem as contextual policy search, in which the robot learns object
handover by interacting with the human. A key challenge here is to learn the
latent reward of the handover task under noisy human feedback. Preliminary
experiments show that the robot learns to hand over a water bottle naturally
and that it adapts to the dynamics of human motion. One challenge for the
future is to combine the model-free learning algorithm with a model-based
planning approach and enable the robot to adapt over human preferences and
object characteristics, such as shape, weight, and surface texture.
Comment: Appears in the Proceedings of the International Symposium on Robotics Research (ISRR) 201
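Contextual policy search, as the abstract formulates it, can be sketched with a toy reward-weighted regression loop over a linear-Gaussian policy conditioned on the context. This is a generic stand-in, not the paper's algorithm or its latent-reward model for noisy human feedback; all names and hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def contextual_rwr(reward_fn, context_dim=1, action_dim=1,
                   iters=30, samples=50, sigma=0.5, beta=5.0):
    """Toy contextual policy search via reward-weighted regression.

    The policy is linear-Gaussian in the context: a = W @ [s, 1] + noise.
    Each iteration samples contexts and actions, scores them with
    reward_fn(s, a), and refits W by least squares weighted by
    exp(beta * reward), so high-reward samples dominate the fit.
    """
    W = np.zeros((action_dim, context_dim + 1))
    for _ in range(iters):
        S = rng.uniform(-1, 1, size=(samples, context_dim))
        Phi = np.hstack([S, np.ones((samples, 1))])  # contexts with bias
        A = Phi @ W.T + sigma * rng.standard_normal((samples, action_dim))
        R = np.array([reward_fn(s, a) for s, a in zip(S, A)])
        w = np.exp(beta * (R - R.max()))  # shift for numerical stability
        WPhi = Phi * w[:, None]
        W = np.linalg.solve(Phi.T @ WPhi + 1e-6 * np.eye(context_dim + 1),
                            WPhi.T @ A).T
    return W
```

In the handover setting the context would describe the human's motion and the action the robot's handover parameters; the paper's added difficulty is that the reward is latent and must be inferred from noisy human feedback rather than evaluated directly as here.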