
    Learning Task Constraints from Demonstration for Hybrid Force/Position Control

    We present a novel method for learning hybrid force/position control from demonstration. We learn a dynamic constraint frame aligned to the direction of desired force using Cartesian Dynamic Movement Primitives. In contrast to approaches that utilize a fixed constraint frame, our approach easily accommodates tasks with rapidly changing task constraints over time. We activate only one degree of freedom for force control at any given time, ensuring motion is always possible orthogonal to the direction of desired force. Since we utilize demonstrated forces to learn the constraint frame, we are able to compensate for forces not detected by methods that learn only from the demonstrated kinematic motion, such as frictional forces between the end-effector and the contact surface. We additionally propose novel extensions to the Dynamic Movement Primitive (DMP) framework that encourage robust transition from free-space motion to in-contact motion in spite of environment uncertainty. We incorporate force feedback and a dynamically shifting goal to reduce forces applied to the environment and retain stable contact while enabling force control. Our methods exhibit low impact forces on contact and low steady-state tracking error.
    Comment: Under review.
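    The DMP extension described above couples force feedback to a shifting goal. A minimal one-dimensional sketch of that idea is below; the function name `dmp_rollout`, the gains, and the simple linear goal-shift law are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def dmp_rollout(x0, g, f_des, f_meas_fn, n_steps=500, dt=0.002,
                alpha=25.0, beta=6.25, tau=1.0, k_goal=0.001):
    """Roll out a 1-D discrete DMP whose goal is shifted by the force error.

    The goal-shift term is a simplified stand-in for a dynamically
    shifting goal: when the measured contact force exceeds the desired
    force, the goal retreats, reducing the force applied to the
    environment (k_goal is an illustrative gain).
    """
    x, v = x0, 0.0
    traj = []
    for _ in range(n_steps):
        # Force feedback: shift the attractor by the force-tracking error.
        g_shifted = g - k_goal * (f_meas_fn(x) - f_des)
        # Critically damped transformation system (no forcing term here).
        dv = (alpha * (beta * (g_shifted - x) - v)) / tau
        v += dv * dt
        x += v * dt / tau
        traj.append(x)
    return np.array(traj)
```

    With zero force error the system reduces to a standard critically damped DMP attractor and converges to the goal; a learned forcing term would be added for shaped trajectories.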

    Gaussian-Process-based Robot Learning from Demonstration

    Endowed with higher levels of autonomy, robots are required to perform increasingly complex manipulation tasks. Learning from demonstration is emerging as a promising paradigm for transferring skills to robots. It allows task constraints to be learned implicitly by observing the motion executed by a human teacher, which can enable adaptive behavior. We present a novel Gaussian-Process-based learning from demonstration approach. This probabilistic representation makes it possible to generalize over multiple demonstrations and to encode variability along the different phases of the task. In this paper, we address how Gaussian Processes can be used to effectively learn a policy from trajectories in task space. We also present a method to efficiently adapt the policy to fulfill new requirements, and to modulate the robot behavior as a function of task variability. This approach is illustrated through a real-world application using the TIAGo robot.
    Comment: 8 pages, 10 figures.
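    The core of such an approach is standard GP regression over time-indexed trajectory data, where the posterior variance captures the demonstrated variability along the task. A minimal sketch is below; the fixed RBF hyperparameters are assumptions for illustration (the paper would learn them), and this covers a single output dimension only.

```python
import numpy as np

def gp_predict(t_train, y_train, t_query, ell=0.1, sf=1.0, noise=1e-4):
    """GP regression with an RBF kernel: posterior mean and variance.

    t_train, y_train: demonstrated time stamps and positions (1-D arrays).
    Returns the predicted trajectory and a per-point variance that can
    modulate robot behavior (e.g., stiffer where demonstrations agree).
    """
    def k(a, b):
        d = a[:, None] - b[None, :]
        return sf**2 * np.exp(-0.5 * (d / ell)**2)

    K = k(t_train, t_train) + noise * np.eye(len(t_train))
    Ks = k(t_query, t_train)
    # Cholesky-based solve for numerical stability.
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(k(t_query, t_query)) - np.sum(v**2, axis=0)
    return mean, var
```

    In practice, multiple demonstrations would be stacked into the training set so the posterior variance widens in phases where the teacher's motions disagree.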

    Simultaneously encoding movement and sEMG-based stiffness for robotic skill learning

    Transferring human stiffness regulation strategies to robots enables them to effectively and efficiently acquire adaptive impedance control policies for dealing with uncertainties during physical contact tasks in an unstructured environment. In this work, we develop such a physical human-robot interaction (pHRI) system, which allows robots to learn variable impedance skills from human demonstrations. Specifically, biological signals, i.e., surface electromyography (sEMG), are utilized to extract human arm stiffness features during the task demonstration. The estimated human arm stiffness is then mapped into a robot impedance controller. The dynamics of both movement and stiffness are simultaneously modeled by a model combining a hidden semi-Markov model (HSMM) with Gaussian mixture regression (GMR). More importantly, the correlation between the movement information and the stiffness information is encoded in a systematic manner. This approach captures uncertainties over time and space and allows the robot to satisfy both position and stiffness requirements in a task by modulating the impedance controller. An experimental study validated the proposed approach.
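    The retrieval step of such a model conditions a joint density over (time, position, stiffness) on the current time, so position and stiffness are regressed together and their correlation is preserved. The sketch below uses plain GMM responsibilities for the temporal weighting rather than HSMM forward probabilities, and the mixture parameters are passed in as assumed inputs rather than learned from sEMG data.

```python
import numpy as np

def gmr(t, priors, mu, sigma):
    """Gaussian mixture regression: condition [position, stiffness] on time t.

    mu[k] is a length-3 array [t_k, pos_k, stiff_k]; sigma[k] is the
    3x3 component covariance. Simplified stand-in for an HSMM+GMR model:
    component weights here come from GMM responsibilities over t.
    """
    K = len(priors)
    # Responsibility of each component for the query time t.
    h = np.array([priors[k]
                  * np.exp(-0.5 * (t - mu[k][0])**2 / sigma[k][0, 0])
                  / np.sqrt(2 * np.pi * sigma[k][0, 0]) for k in range(K)])
    h /= h.sum()
    out = np.zeros(2)
    for k in range(K):
        # Conditional mean of the output dims given the input dim.
        out += h[k] * (mu[k][1:]
                       + sigma[k][1:, 0] / sigma[k][0, 0] * (t - mu[k][0]))
    return out  # [position, stiffness] at time t
```

    The off-diagonal blocks sigma[k][1:, 0] are what encode the movement-stiffness correlation: with diagonal covariances the regression degenerates to a weighted average of component means.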