
    Trajectory Deformations from Physical Human-Robot Interaction

    Robots are finding new applications where physical interaction with a human is necessary: manufacturing, healthcare, and social tasks. Accordingly, the field of physical human-robot interaction (pHRI) has leveraged impedance control approaches, which support compliant interactions between human and robot. However, a limitation of traditional impedance control is that, despite provisions for the human to modify the robot's current trajectory, the human cannot affect the robot's future desired trajectory through pHRI. In this paper, we present an algorithm for physically interactive trajectory deformations which, when combined with impedance control, allows the human to modulate both the actual and desired trajectories of the robot. Unlike related works, our method explicitly deforms the future desired trajectory based on forces applied during pHRI, but does not require constant human guidance. We present our approach and verify that this method is compatible with traditional impedance control. Next, we use constrained optimization to derive the deformation shape. Finally, we describe an algorithm for real-time implementation and perform simulations to test the arbitration parameters. Experimental results demonstrate a reduction in the human's effort and an improvement in movement quality when compared to pHRI with impedance control alone.
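    The core idea, deforming future waypoints of the desired trajectory in proportion to the applied human force, can be sketched as follows. The half-cosine deformation shape, the `mu` arbitration gain, and all names are illustrative stand-ins for the paper's optimized deformation shape:

```python
import numpy as np

def deform_trajectory(waypoints, force, start_idx, horizon, mu=0.1):
    """Deform the future desired trajectory in response to a human force.

    waypoints: (N, d) array of desired positions.
    force:     (d,) force applied at the current waypoint.
    start_idx: index of the current waypoint.
    horizon:   number of future waypoints to deform.
    mu:        arbitration parameter trading human input against the plan.
    """
    deformed = waypoints.copy()
    end = min(start_idx + horizon, len(waypoints))
    n = end - start_idx
    # Smooth shape: full effect now, decaying to zero at the horizon,
    # so the deformed trajectory rejoins the original plan downstream.
    shape = 0.5 * (1.0 + np.cos(np.linspace(0.0, np.pi, n)))
    deformed[start_idx:end] += mu * np.outer(shape, force)
    return deformed

# Straight-line desired trajectory in 2-D, pushed sideways at waypoint 2.
traj = np.stack([np.linspace(0, 1, 10), np.zeros(10)], axis=1)
new_traj = deform_trajectory(traj, np.array([0.0, 5.0]), start_idx=2, horizon=5)
```

    An impedance controller would then track `new_traj`, so the push shifts the robot's plan rather than merely displacing it against a fixed reference.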

    Deep Model Predictive Variable Impedance Control

    The capability to adapt compliance by varying muscle stiffness is crucial for dexterous manipulation skills in humans. Incorporating compliance in robot motor control is likewise crucial for performing real-world force-interaction tasks with human-level dexterity. This work presents a Deep Model Predictive Variable Impedance Controller for compliant robotic manipulation, which combines Variable Impedance Control with Model Predictive Control (MPC). A generalized Cartesian impedance model of a robot manipulator is learned using an exploration strategy that maximizes information gain. This model is used within an MPC framework to adapt the impedance parameters of a low-level variable impedance controller, achieving the desired compliance behavior for different manipulation tasks without any retraining or fine-tuning. The approach is evaluated on a Franka Emika Panda manipulator across different manipulation tasks in both simulation and real experiments, and is compared with model-free and model-based reinforcement learning approaches to variable impedance control in terms of transferability between tasks and performance. (Preprint submitted to the journal Robotics and Autonomous Systems.)
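    The MPC-over-impedance-parameters idea can be sketched with a simple analytic model standing in for the learned deep impedance model. The 1-D mass-spring-damper rollout, the candidate stiffness grid, and the cost weights below are all illustrative assumptions, not the paper's learned model or cost:

```python
import numpy as np

def mpc_select_stiffness(x0, v0, x_ref, candidates, m=1.0, dt=0.01,
                         steps=50, w_track=1.0, w_force=0.001):
    """Pick the Cartesian stiffness minimizing a finite-horizon cost.

    For each candidate stiffness, roll out a 1-D mass-spring-damper
    impedance model (critical damping assumed) and score tracking
    error plus commanded force over the horizon.
    """
    best_k, best_cost = None, np.inf
    for k in candidates:
        d = 2.0 * np.sqrt(m * k)            # critically damped
        x, v, cost = x0, v0, 0.0
        for _ in range(steps):
            f = k * (x_ref - x) - d * v     # impedance control law
            v += (f / m) * dt
            x += v * dt
            cost += w_track * (x_ref - x) ** 2 + w_force * f ** 2
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Reach a 10 cm setpoint: trade tracking accuracy against exerted force.
k = mpc_select_stiffness(x0=0.0, v0=0.0, x_ref=0.1,
                         candidates=[50.0, 200.0, 800.0])
```

    In the paper's setting, the analytic rollout is replaced by the learned generalized impedance model, so the same selection loop transfers across tasks without retraining the low-level controller.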

    A survey of robot manipulation in contact

    In this survey, we present the current status of robots performing manipulation tasks that require varying contact with the environment, such that the robot must either implicitly or explicitly control the contact force to complete the task. Robots can now perform a growing share of manipulation tasks still done by humans, and there is a growing number of publications on (1) performing tasks that always require contact and (2) mitigating uncertainty by leveraging the environment in tasks that, under perfect information, could be performed without contact. Recent trends have seen robots take on tasks previously left to humans, such as massage, while in classical tasks such as peg-in-hole there is more efficient generalization to similar tasks, better error tolerance, and faster planning or learning. Thus, in this survey we cover the current state of robots performing such tasks: we first survey the different in-contact tasks robots can perform, then observe how these tasks are controlled and represented, and finally present the learning and planning of the skills required to complete them.
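    Explicit contact-force control, one of the two strategies the survey distinguishes, is often realized as an outer force loop that shifts the reference of an inner position or impedance loop. A minimal sketch, assuming the environment is a stiff spring and using illustrative PI gains:

```python
def force_control_step(f_desired, f_measured, x_cmd, integral,
                       kp=1e-4, ki=1e-3, dt=0.001):
    """One step of an explicit (outer-loop) force controller.

    The force error is converted into a position-reference offset
    that an inner position/impedance loop would then track.
    """
    e = f_desired - f_measured
    integral += e * dt
    x_cmd += kp * e + ki * integral
    return x_cmd, integral

# Pressing against a wall modeled as a stiff spring (k_env illustrative).
k_env, x_wall = 1e4, 0.0
x_cmd, integral = 0.0, 0.0        # start just touching the wall
for _ in range(2000):
    f_meas = k_env * max(x_cmd - x_wall, 0.0)   # contact force
    x_cmd, integral = force_control_step(10.0, f_meas, x_cmd, integral)
f_final = k_env * max(x_cmd - x_wall, 0.0)      # converges near 10 N
```

    Implicit force control, by contrast, would achieve a bounded contact force purely through the compliance of the inner impedance loop, without measuring force at all.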

    Human Like Adaptation of Force and Impedance in Stable and Unstable Tasks

    This paper presents a novel human-like learning controller for interacting with unknown environments. Strictly derived from the minimization of instability, motion error, and effort, the controller compensates for disturbances in the environment during interaction tasks by adapting feedforward force and impedance. In contrast with conventional learning controllers, the new controller can deal with unstable situations that are typical of tool use and gradually acquires a desired stability margin. Simulations show that this controller is a good model of human motor adaptation. Robotic implementations further demonstrate its capability to optimally adapt interaction with dynamic environments and humans on joint-torque-controlled robots and variable impedance actuators, without requiring interaction force sensing. Index Terms: feedforward force, human motor control, impedance, robotic control.
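    Trial-to-trial adaptation of feedforward force and impedance can be sketched as below. The update gains, forgetting factor, and constant-disturbance environment are illustrative assumptions, not the paper's derived adaptation law (which is obtained from the stated minimization of instability, error, and effort):

```python
import numpy as np

def adapt_trial(u_ff, k, env_force, x_ref=0.0, m=1.0, dt=0.01, steps=200,
                alpha=20.0, beta=5.0, gamma=0.01):
    """Run one movement trial, then adapt feedforward force and stiffness.

    Feedforward grows against the residual error, stiffness grows with
    error magnitude, and both decay (forgetting) once error is small.
    """
    x, v, err_acc = 0.0, 0.0, 0.0
    d = 2.0 * np.sqrt(m * max(k, 1.0))      # critically damped
    for _ in range(steps):
        e = x_ref - x
        f = u_ff + k * e - d * v + env_force   # control + disturbance
        v += (f / m) * dt
        x += v * dt
        err_acc += abs(e) * dt
    e_final = x_ref - x
    u_ff = (1.0 - gamma) * u_ff + alpha * e_final
    k = max((1.0 - gamma) * k + beta * abs(e_final), 0.0)
    return u_ff, k, err_acc

# Repeated trials against a constant -2 N disturbance: the feedforward
# term learns to cancel it, and the tracking error shrinks across trials.
u_ff, k = 0.0, 100.0
errs = []
for _ in range(30):
    u_ff, k, err = adapt_trial(u_ff, k, env_force=-2.0)
    errs.append(err)
```

    The key property mirrored here is that force and impedance adapt concurrently from experienced error, with forgetting pulling effort back down once the task is stable, so no interaction force sensing is needed.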