
    Impedance adaptation for optimal robot–environment interaction

    In this paper, impedance adaptation is investigated for robots interacting with unknown environments. Impedance control is employed for the physical interaction between robot and environment, which is subject to unknown and uncertain environment dynamics. The unknown environment is described as a linear system with unknown dynamics, based on which the desired impedance model is obtained. A cost function that measures the tracking error and the interaction force is defined, and the critical impedance parameters that minimize it are found. Because it does not require knowledge of the environment dynamics, the proposed impedance adaptation is applicable to a large number of applications in which robots physically interact with unknown environments. The validity of the proposed method is verified through simulation studies.
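
    As a concrete point of reference (a generic formulation with assumed notation, not necessarily the paper's exact model), the target impedance and the interaction cost described above are commonly written as

        M_d \ddot{e} + B_d \dot{e} + K_d e = -f, \qquad e = x - x_d,

        J = \int_0^{\infty} \left[ (x - x_d)^\top Q (x - x_d) + f^\top R f \right] dt,

    where x_d is the desired trajectory and f is the interaction force; the adaptation then searches over the impedance parameters (B_d, K_d) that minimize J without a model of the environment dynamics.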

    A survey of robot manipulation in contact

    In this survey, we present the current status of robots performing manipulation tasks that require varying contact with the environment, such that the robot must either implicitly or explicitly control the contact force with the environment to complete the task. Robots can perform more and more of the manipulation tasks that are still done by humans, and there is a growing number of publications on (1) performing tasks that always require contact and (2) mitigating uncertainty by leveraging the environment in tasks that, under perfect information, could be performed without contact. Recent trends have seen robots take on tasks previously left to humans, such as massage, while in classical tasks such as peg-in-hole there is more effective generalization to similar tasks, better error tolerance, and faster planning or learning. In this survey we therefore cover the current state of robots performing such tasks: we first survey the different in-contact tasks robots can perform, then examine how these tasks are controlled and represented, and finally present the learning and planning of the skills required to complete them.

    Learning Dynamic Robot-to-Human Object Handover from Human Feedback

    Object handover is a basic but essential capability for robots interacting with humans in many applications, e.g., caring for the elderly and assisting workers in manufacturing workshops. It appears deceptively simple, as humans perform object handover almost flawlessly. The success of humans, however, belies the complexity of object handover as a collaborative physical interaction between two agents with limited communication. This paper presents a learning algorithm for dynamic object handover, for example, when a robot hands over water bottles to marathon runners passing by the water station. We formulate the problem as contextual policy search, in which the robot learns object handover by interacting with the human. A key challenge is to learn the latent reward of the handover task under noisy human feedback. Preliminary experiments show that the robot learns to hand over a water bottle naturally and that it adapts to the dynamics of human motion. One challenge for the future is to combine the model-free learning algorithm with a model-based planning approach and to enable the robot to adapt to human preferences and object characteristics, such as shape, weight, and surface texture.
    Comment: Appears in the Proceedings of the International Symposium on Robotics Research (ISRR) 201
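
    As a minimal sketch of contextual policy search with noisy scalar feedback (the context, the handover parameters, and the reward stand-in below are illustrative assumptions, not the authors' implementation), one common variant maintains a linear-Gaussian mapping from context to policy parameters and refits it by reward-weighted regression:

        # Contextual policy search sketch: a linear-Gaussian policy maps a context
        # (e.g., runner speed) to handover parameters and is refit by reward-weighted
        # regression from noisy scalar feedback. Illustrative only, not the paper's code.
        import numpy as np

        rng = np.random.default_rng(0)
        dim_ctx, dim_param = 2, 3            # context and handover-parameter dimensions
        W = np.zeros((dim_param, dim_ctx))   # mean map: parameters = W @ context
        Sigma = np.eye(dim_param)            # exploration covariance

        def noisy_feedback(ctx, theta):
            # Stand-in for the latent handover reward observed through noisy human feedback.
            target = np.array([1.0, -0.5, 0.2]) + 0.3 * ctx[0]
            return -np.linalg.norm(theta - target) + 0.1 * rng.normal()

        for _ in range(100):
            C = rng.normal(size=(20, dim_ctx))                               # sampled contexts
            Theta = C @ W.T + rng.multivariate_normal(np.zeros(dim_param), Sigma, size=20)
            r = np.array([noisy_feedback(c, th) for c, th in zip(C, Theta)])
            w = np.exp((r - r.max()) / 0.5)                                  # exponential reward weights
            Cw = C * w[:, None]
            W = np.linalg.solve(C.T @ Cw + 1e-6 * np.eye(dim_ctx), Cw.T @ Theta).T
            resid = Theta - C @ W.T
            Sigma = resid.T @ (resid * w[:, None]) / w.sum() + 1e-6 * np.eye(dim_param)

        print("learned context-to-parameter map:\n", W)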

    Shared control of human and robot by approximate dynamic programming

    This paper proposes a general framework of human-robot shared control for a natural and effective interface. A typical human-robot collaboration scenario is investigated, and a framework of shared control is developed based on the solution to an optimization problem. Human dynamics are taken into account in the analysis of the coupled human-robot system, and the objectives of both the human and the robot are considered. Approximate dynamic programming is employed to solve the optimization problem in the presence of unknown human and robot dynamics. The validity of the proposed method is verified through simulation studies.
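
    For concreteness, a shared-control objective of this flavor can be posed as one optimization over the coupled system (the notation below is an assumed generic form, not the paper's exact model):

        \dot{x} = A x + B_h u_h + B_r u_r,

        J = \int_0^{\infty} \left( x^\top Q x + u_h^\top R_h u_h + u_r^\top R_r u_r \right) dt,

    where u_h and u_r are the human and robot inputs; approximate dynamic programming then learns the value function, and hence the robot input u_r, from measured data when A and the human behavior are unknown.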

    A framework of human–robot coordination based on game theory and policy iteration

    In this paper, we propose a framework to analyze the interactive behaviors of human and robot in physical interaction. Game theory is employed to describe the coupled system, and policy iteration is adopted to obtain the Nash equilibrium solution. The human’s control objective is estimated from the measured interaction force and used to adapt the robot’s objective so that human-robot coordination can be achieved. The validity of the proposed method is verified through a rigorous proof and experimental studies.
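
    A minimal sketch of the underlying idea, under assumed linear-quadratic dynamics and cost weights (not the paper's model): policy iteration alternates policy evaluation via Lyapunov equations with best-response gain updates until the pair of feedback gains approximates a Nash equilibrium. Convergence is not guaranteed in general, but the small example below converges.

        # Policy iteration for the feedback Nash equilibrium of a two-player
        # linear-quadratic game; all matrices below are illustrative assumptions.
        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        A = np.array([[0.0, 1.0], [-1.0, -0.5]])   # assumed coupled human-robot dynamics
        B1 = np.array([[0.0], [1.0]])              # human input channel
        B2 = np.array([[0.0], [0.8]])              # robot input channel
        Q1, R1 = np.diag([2.0, 0.1]), np.eye(1)    # human objective weights
        Q2, R2 = np.diag([1.0, 0.5]), np.eye(1)    # robot objective weights

        K1 = np.zeros((1, 2))                      # initial feedback gains (A itself is stable)
        K2 = np.zeros((1, 2))
        for _ in range(200):
            Ac = A - B1 @ K1 - B2 @ K2             # closed loop under the current gain pair
            # Policy evaluation: each player's cost-to-go matrix under the current policies.
            P1 = solve_continuous_lyapunov(Ac.T, -(Q1 + K1.T @ R1 @ K1))
            P2 = solve_continuous_lyapunov(Ac.T, -(Q2 + K2.T @ R2 @ K2))
            # Policy improvement: each player best-responds to the other's current gain.
            K1 = np.linalg.solve(R1, B1.T @ P1)
            K2 = np.linalg.solve(R2, B2.T @ P2)

        print("approximate Nash gains:", K1, K2)   # u1 = -K1 x, u2 = -K2 x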

    Language-Grounded Control for Coordinated Robot Motion and Speech

    Recent advancements have enabled human-robot collaboration through physical assistance and verbal guidance. However, limitations persist in coordinating robots' physical motions and speech in response to real-time changes in human behavior during collaborative contact tasks. We first derive principles from analyzing physical therapists' movements and speech during patient exercises. These principles are translated into control objectives to: 1) guide users through trajectories, 2) control motion and speech pace to align completion times with varying user cooperation, and 3) dynamically paraphrase speech along the trajectory. We then propose a Language Controller that synchronizes motion and speech, modulating both based on user cooperation. Experiments with 12 users show that the Language Controller aligns motion and speech more successfully than the baselines. This provides a framework for fluent human-robot collaboration.
    Comment: Under review in ICRA 202
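
    As a purely illustrative sketch of the second control objective above (pacing motion and speech to a common completion time; the cooperation signal and all names are assumptions, not the paper's controller), both channels can be driven from one progress variable that slows when estimated user cooperation drops:

        # Toy illustration: a single paced progress variable drives both the motion
        # waypoint and the speech phrase, so they stay aligned when the user slows down.
        def paced_progress(t, nominal_duration, cooperation):
            # Clamp cooperation into a usable pace factor, then map time to progress in [0, 1].
            pace = max(0.2, min(1.0, cooperation))
            return min(1.0, t * pace / nominal_duration)

        def synced_cues(t, nominal_duration, cooperation, waypoints, phrases):
            # Pick the motion waypoint and speech phrase for the same progress value.
            s = paced_progress(t, nominal_duration, cooperation)
            idx = min(int(s * len(phrases)), len(phrases) - 1)
            return waypoints[idx], phrases[idx]

        waypoints = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.4), (0.3, 0.5)]
        phrases = ["Let's start.", "Keep going.", "Almost there.", "Nicely done."]
        print(synced_cues(6.0, 10.0, 0.5, waypoints, phrases))   # low cooperation -> earlier cue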