
    Neural control for constrained human-robot interaction with human motion intention estimation and impedance learning

    In this paper, an impedance control strategy is proposed for a rigid robot collaborating with a human, combining impedance learning with human motion intention estimation. The least squares method is used for human impedance identification, and the robot adjusts its impedance parameters according to the identified human impedance model to guarantee compliant collaboration. Neural networks (NNs) are employed for human motion intention estimation, so that the robot follows the human actively and the human partner expends less control effort. In addition, full-state constraints are imposed for operational safety during human-robot interaction. Neural control is incorporated into the control strategy to handle dynamic uncertainties and improve system robustness. Simulation results demonstrate the effectiveness of the proposed control design.
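
    The abstract leaves the identification step at a high level. As a concrete illustration, here is a minimal sketch of least-squares impedance identification for a single degree of freedom, assuming a mass-damper-stiffness model f_h = m*x'' + d*x' + k*(x - x_d) and synchronized samples of motion and measured human force; the model, function names, and numbers below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def identify_impedance(x_ddot, x_dot, x_err, f_h):
    """Fit (m, d, k) minimizing ||Phi @ theta - f_h||^2 by ordinary least squares.

    x_ddot, x_dot, x_err, f_h: 1-D arrays of N synchronized samples
    (acceleration, velocity, position error x - x_d, measured human force).
    """
    Phi = np.column_stack([x_ddot, x_dot, x_err])   # N x 3 regressor matrix
    theta, *_ = np.linalg.lstsq(Phi, f_h, rcond=None)
    return theta                                    # [m, d, k]

# Synthetic check with true parameters m=1.2, d=8.0, k=150.0 and x_d = 0:
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 400)
x = 0.05 * np.sin(2.0 * np.pi * t)
x_dot = np.gradient(x, t)
x_ddot = np.gradient(x_dot, t)
f = 1.2 * x_ddot + 8.0 * x_dot + 150.0 * x + rng.normal(0.0, 0.05, t.size)
print(identify_impedance(x_ddot, x_dot, x, f))      # approx. [1.2, 8.0, 150.0]
```

    In a controller like the one described, the fit would be re-run over a sliding window of recent samples so the robot's impedance parameters can track changes in the human's arm impedance during the task.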

    Trajectory Deformations from Physical Human-Robot Interaction

    Robots are finding new applications where physical interaction with a human is necessary: manufacturing, healthcare, and social tasks. Accordingly, the field of physical human-robot interaction (pHRI) has leveraged impedance control approaches, which support compliant interactions between human and robot. However, a limitation of traditional impedance control is that, despite provisions for the human to modify the robot's current trajectory, the human cannot affect the robot's future desired trajectory through pHRI. In this paper, we present an algorithm for physically interactive trajectory deformations which, when combined with impedance control, allows the human to modulate both the actual and desired trajectories of the robot. Unlike related works, our method explicitly deforms the future desired trajectory based on forces applied during pHRI, but does not require constant human guidance. We present our approach and verify that it is compatible with traditional impedance control. Next, we use constrained optimization to derive the deformation shape. Finally, we describe an algorithm for real-time implementation and perform simulations to test the arbitration parameters. Experimental results demonstrate a reduction in the human's effort and an improvement in movement quality compared to pHRI with impedance control alone.
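
    To make the idea concrete, the sketch below deforms the next n desired waypoints in the direction of the sensed human force, using a smooth shape obtained from a jerk-penalized quadratic objective with implicit zero boundary conditions, so the deformation blends into the unmodified trajectory at both ends. This is a simplified reading of the paper's optimization-based deformation; the matrix construction, the gain mu, and the function names are assumptions for illustration.

```python
import numpy as np

def deformation_shape(n):
    """Smooth bump from minimizing a finite-difference jerk penalty.

    A stacks shifted third-difference stencils with implicit zero padding,
    so H = (A^T A)^{-1} 1 is smooth and nearly zero at both window ends.
    """
    A = np.zeros((n + 3, n))
    stencil = np.array([1.0, -3.0, 3.0, -1.0])
    for i in range(n):
        A[i:i + 4, i] = stencil
    H = np.linalg.solve(A.T @ A, np.ones(n))
    return H / np.abs(H).max()          # normalize peak deformation to 1

def deform(waypoints, f_h, mu=0.1):
    """Shift the upcoming waypoints (n x d) along the human force f_h (d,)."""
    H = deformation_shape(len(waypoints))
    return waypoints + mu * np.outer(H, f_h)
```

    The gain mu plays the role the abstract attributes to the arbitration parameters: larger values let a given human force reshape more of the future desired trajectory, while mu = 0 recovers plain impedance control.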

    Human-robot co-carrying using visual and force sensing

    In this paper, we propose a hybrid framework using visual and force sensing for human-robot co-carrying tasks. Visual sensing is used to obtain human motion, and an observer is designed to estimate the human's control input, which generates the robot's desired motion toward the human's intended motion. An adaptive impedance-based control strategy is proposed for trajectory tracking, with neural networks (NNs) used to compensate for uncertainties in the robot's dynamics. Motion synchronization is achieved, and the approach yields stable and efficient interaction behavior between human and robot, decreases the human's control effort, and avoids interfering with the human during the interaction. The proposed framework is validated on a co-carrying task in simulations and experiments.
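
    The abstract does not specify the observer, so here is one minimal possibility: model the co-carried load along one axis as a double integrator driven by the unknown human input, and estimate that input as a slowly varying disturbance state with a Luenberger-style observer. The model, gains, and names are illustrative assumptions, not the paper's design.

```python
import numpy as np

def observer_step(z, y, dt, m=1.0, L=(30.0, 300.0, 1000.0)):
    """One Euler step of an extended observer for m*x'' = u_h, with u_h' ~ 0.

    z = [x_hat, v_hat, u_hat]: position, velocity, and human-input estimates;
    y = measured position (e.g., from the visual tracker).
    With m = 1 these gains place all observer poles at s = -10, since
    s^3 + 30 s^2 + 300 s + 1000 = (s + 10)^3.
    """
    x_hat, v_hat, u_hat = z
    e = y - x_hat                        # output estimation error
    x_hat += dt * (v_hat + L[0] * e)
    v_hat += dt * (u_hat / m + L[1] * e)
    u_hat += dt * (L[2] * e)
    return np.array([x_hat, v_hat, u_hat])
```

    The estimated input u_hat can then be mapped to the robot's desired motion, so the robot pushes toward where the human is steering the load rather than merely complying with it.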

    Bayesian estimation of human impedance and motion intention for human-robot collaboration

    This article proposes a Bayesian method for estimating human impedance and motion intention in a human-robot collaborative task. Combined with prior knowledge of human stiffness, a stiffness estimate following a Gaussian distribution is obtained by Bayesian estimation, and the human motion intention can also be estimated. An adaptive impedance control strategy is employed to track a target impedance model, and neural networks are used to compensate for uncertainties in the robot dynamics. Comparative simulations verify the effectiveness of the estimation method and highlight the advantages of the proposed control strategy. An experiment performed on the Baxter robot platform demonstrates good system performance.
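
    For a scalar stiffness with a Gaussian prior and Gaussian measurement noise, the Bayesian update has a closed form (standard conjugate Bayesian linear regression). The sketch below is that textbook update with illustrative prior values; the paper's exact formulation may differ.

```python
import numpy as np

def posterior_stiffness(x, f, mu0=200.0, var0=50.0**2, var_n=1.0):
    """Posterior N(mean, var) for k in f_i = k * x_i + eps, eps ~ N(0, var_n).

    x: displacement samples, f: measured force samples;
    (mu0, var0): Gaussian prior on the human stiffness k.
    """
    precision = 1.0 / var0 + np.dot(x, x) / var_n   # posterior precision
    mean = (mu0 / var0 + np.dot(x, f) / var_n) / precision
    return mean, 1.0 / precision
```

    As interaction data accumulate, the posterior variance shrinks: the estimate starts from the prior knowledge of human stiffness and progressively defers to the measurements.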

    Human-robot interaction for assistive robotics

    This dissertation presents an in-depth study of human-robot interaction (HRI) with application to assistive robotics, covering dexterous in-hand manipulation, assistive robots for sit-to-stand (STS) assistance, and human intention estimation. In Chapter 1, the background and issues of HRI are discussed. In Chapter 2, the literature review introduces recent state-of-the-art research on HRI, such as physical human-robot interaction (pHRI), robotic STS assistance, dexterous in-hand manipulation, and human intention estimation. In Chapter 3, the models and control algorithms are described in detail. Chapter 4 introduces the research equipment. Chapter 5 presents innovative theories and implementations of HRI in assistive robotics, including a general methodology of robotic assistance from the human perspective, novel hardware design, robotic STS assistance, human intention estimation, and control.

    Force-based Perception and Control Strategies for Human-Robot Shared Object Manipulation

    Physical Human-Robot Interaction (PHRI) is essential for the future integration of robots in human-centered environments. In these settings, robots are expected to share the same workspace, interact physically, and collaborate with humans to achieve a common task. One of the primary tasks that require human-robot collaboration is object manipulation. The main challenges that need to be addressed to achieve seamless cooperative object manipulation are related to uncertainties in human trajectory, grasp position, and intention. The object’s motion trajectory intended by the human is not always defined for the robot, and the human may grasp any part of the object depending on the desired trajectory. In addition, state-of-the-art object-manipulation control schemes suffer from the translation/rotation problem, where the human cannot move the object in all degrees of freedom independently and thus needs to exert extra effort to accomplish the task. To address these challenges, first, we propose an estimation method for identifying the human grasp position. We extend the conventional contact point estimation method by formulating a new identification model with the human-applied torque as an unknown parameter and employing empirical conditions to estimate the human grasp position. The proposed method is compared with conventional contact point estimation using experimental data collected for various collaboration scenarios. Second, given the human grasp position, a control strategy is proposed to transport the object in all degrees of freedom independently. We employ the concept of “the instantaneous center of zero velocity” to reduce the human effort by minimizing the exerted human force. The stability of the interaction is evaluated using a passivity-based analysis of the closed-loop system, including the object and the robotic manipulator. The performance of the proposed control scheme is validated through simulation of scenarios containing rotations and translations of the object. Our study indicates that the exerted torque of the human has a significant effect on the human grasp position estimation. Moreover, knowledge of the human grasp position can be used in the control scheme design to avoid the translation/rotation problem and reduce the human effort.
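
    The baseline the authors extend, conventional contact-point estimation from a measured wrench, has a simple closed form when the human is assumed to apply a pure force at the grasp. The sketch below computes the minimum-norm grasp position under that assumption; the dissertation's contribution is precisely to relax it by treating the human-applied torque as an additional unknown, which this sketch does not do.

```python
import numpy as np

def contact_point(f, tau):
    """Minimum-norm r solving tau = r x f, assuming a pure force at the grasp.

    f: measured force (3,), tau: measured torque (3,) in the same frame.
    The component of r along f is unobservable from the wrench alone, so
    only the perpendicular (minimum-norm) part is returned.
    """
    return np.cross(f, tau) / np.dot(f, f)
```

    Any torque the human actually applies about the grasp biases this estimate, which is consistent with the study's finding that the exerted human torque significantly affects grasp-position estimation.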

    Learning Dynamic Robot-to-Human Object Handover from Human Feedback

    Object handover is a basic but essential capability for robots interacting with humans in many applications, e.g., caring for the elderly and assisting workers in manufacturing workshops. It appears deceptively simple, as humans perform object handover almost flawlessly. The success of humans, however, belies the complexity of object handover as a collaborative physical interaction between two agents with limited communication. This paper presents a learning algorithm for dynamic object handover, for example, when a robot hands over water bottles to marathon runners passing by the water station. We formulate the problem as contextual policy search, in which the robot learns object handover by interacting with the human. A key challenge here is to learn the latent reward of the handover task under noisy human feedback. Preliminary experiments show that the robot learns to hand over a water bottle naturally and that it adapts to the dynamics of human motion. One challenge for the future is to combine the model-free learning algorithm with a model-based planning approach and enable the robot to adapt to human preferences and object characteristics, such as shape, weight, and surface texture.

    Comment: Appears in the Proceedings of the International Symposium on Robotics Research (ISRR) 201
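
    Contextual policy search can be illustrated with a generic toy loop: a linear-Gaussian policy maps a context (e.g., the runner's approach state) to handover parameters, and the context-to-parameter map is improved by reward-weighted regression over noisily rated rollouts. This stand-in omits the paper's latent-reward model and is not its actual algorithm; every name, dimension, and number below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d_s, d_w = 2, 3                 # context and policy-parameter dimensions
K = np.zeros((d_w, d_s))        # linear context-to-parameter map
sigma = 0.5                     # exploration noise

def human_feedback(s, w):
    """Noisy stand-in for a human rating of a handover rollout."""
    w_star = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]) @ s
    return -np.sum((w - w_star) ** 2) + rng.normal(0.0, 0.1)

for it in range(200):
    S = rng.uniform(-1, 1, (20, d_s))              # sampled contexts
    W = S @ K.T + rng.normal(0, sigma, (20, d_w))  # explored parameters
    R = np.array([human_feedback(s, w) for s, w in zip(S, W)])
    weights = np.exp(R - R.max())                  # softmax-style weights
    A = (S * weights[:, None]).T @ S + 1e-6 * np.eye(d_s)
    B = (S * weights[:, None]).T @ W
    K = np.linalg.solve(A, B).T                    # reward-weighted regression
    sigma = max(0.05, sigma * 0.99)                # anneal exploration
```

    Annealing the exploration noise mirrors the usual progression from broad exploration to a settled handover behavior as feedback accumulates.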