Impedance learning for robots interacting with unknown environments
In this paper, impedance learning is investigated for robots interacting with unknown environments. A two-loop control framework is employed, and adaptive control is developed for the inner-loop position control. The environments are described as time-varying systems with unknown parameters in state-space form. The gradient-following scheme and the betterment scheme are employed to obtain a desired impedance model, subject to unknown environments. The desired interaction performance is achieved in the sense that a defined cost function is minimized. Simulation and experimental studies are carried out to verify the validity of the proposed method.
Impedance adaptation for optimal robot–environment interaction
In this paper, impedance adaptation is investigated for robots interacting with unknown environments. Impedance control is employed for the physical interaction between robots and environments, subject to unknown and uncertain environment dynamics. The unknown environments are described as linear systems with unknown dynamics, based on which the desired impedance model is obtained. A cost function that measures the tracking error and interaction force is defined, and the critical impedance parameters are found that minimize it. Without requiring knowledge of the environment dynamics, the proposed impedance adaptation is feasible in a large number of applications where robots physically interact with unknown environments. The validity of the proposed method is verified through simulation studies.
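The cost-minimizing impedance idea in the two abstracts above can be illustrated with a minimal sketch (not the authors' implementation; the environment stiffness/damping values, the apparent mass, and the force weight are arbitrary assumptions): a 1-DoF robot under an impedance control law interacts with an unknown spring-damper environment, and candidate stiffness/damping pairs are scored by a cost combining tracking error and interaction force.

```python
def simulate(K, D, k_e=50.0, d_e=5.0, x_d=1.0, dt=0.001, T=2.0):
    """Simulate a 1-DoF impedance-controlled robot pressing on a
    spring-damper environment; return the accumulated cost."""
    m = 1.0            # assumed apparent robot mass
    x, v = 0.0, 0.0
    cost = 0.0
    for _ in range(int(T / dt)):
        f_env = -k_e * x - d_e * v                  # unknown environment force
        # impedance control law: stiffness K, damping D around target x_d
        a = (K * (x_d - x) + D * (0.0 - v) + f_env) / m
        v += a * dt
        x += v * dt
        cost += ((x_d - x) ** 2 + 0.01 * f_env ** 2) * dt  # tracking + force
    return cost

# naive grid search over impedance parameters (a stand-in for the
# gradient-based adaptation described in the papers)
candidates = [(K, D) for K in (10, 50, 200) for D in (5, 20, 50)]
best = min(candidates, key=lambda p: simulate(*p))
print("best (K, D):", best)
```

With this particular force weight, stiffer gains track better but pay more in interaction force, so the search trades the two terms off rather than simply maximizing stiffness.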
Reinforcement learning for human-robot shared control
This paper aims at proposing a general framework of shared control for human-robot interaction. Human dynamics are considered in the analysis of the coupled human-robot system. Motion intentions of both human and robot are taken into account in the control objective of the robot. Reinforcement learning is developed to achieve the control objective subject to unknown dynamics of the human and the robot. The closed-loop system performance is discussed through a rigorous proof. Simulations are conducted to demonstrate the learning capability of the proposed method and its feasibility in handling various situations. Compared to existing works, the proposed framework combines the motion intentions of both human and robot in a human-robot shared control system, without requiring knowledge of the human and robot dynamics.
A survey of robot manipulation in contact
In this survey, we present the current status of robots performing manipulation tasks that require varying contact with the environment, such that the robot must either implicitly or explicitly control the contact force with the environment to complete the task. Robots can perform more and more manipulation tasks that were previously done by humans, and there is a growing number of publications on the topics of (1) performing tasks that always require contact and (2) mitigating uncertainty by leveraging the environment in tasks that, under perfect information, could be performed without contact. Recent trends have seen robots perform tasks previously left to humans, such as massage, while in classical tasks, such as peg-in-hole, there is more efficient generalization to other similar tasks, better error tolerance, and faster planning or learning. Thus, in this survey we cover the current state of robots performing such tasks, starting by surveying the different in-contact tasks robots can perform, then examining how these tasks are controlled and represented, and finally presenting the learning and planning of the skills required to complete them.
Learning Dynamic Robot-to-Human Object Handover from Human Feedback
Object handover is a basic, but essential capability for robots interacting with humans in many applications, e.g., caring for the elderly and assisting workers in manufacturing workshops. It appears deceptively simple, as humans perform object handover almost flawlessly. The success of humans, however, belies the complexity of object handover as collaborative physical interaction between two agents with limited communication. This paper presents a learning algorithm for dynamic object handover, for example, when a robot hands over water bottles to marathon runners passing by the water station. We formulate the problem as contextual policy search, in which the robot learns object handover by interacting with the human. A key challenge here is to learn the latent reward of the handover task under noisy human feedback. Preliminary experiments show that the robot learns to hand over a water bottle naturally and that it adapts to the dynamics of human motion. One challenge for the future is to combine the model-free learning algorithm with a model-based planning approach and enable the robot to adapt over human preferences and object characteristics, such as shape, weight, and surface texture.
Comment: Appears in the Proceedings of the International Symposium on Robotics Research (ISRR) 201
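The contextual-policy-search formulation above can be sketched with a generic reward-weighted-regression update. This is not the paper's algorithm, and the 1-D task (a release offset chosen as a linear function of runner speed), the latent reward, and all constants are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical 1-D handover: pick a release offset a as a linear
# function of the runner's speed s (the "context")
w = np.zeros(2)          # policy parameters: a = w @ [1, s]
sigma = 0.5              # exploration noise

def noisy_reward(s, a):
    """Latent reward: the best offset grows with speed; feedback is noisy."""
    a_star = 0.3 + 0.8 * s
    return -(a - a_star) ** 2 + rng.normal(0.0, 0.1)

for _ in range(200):
    # collect a batch of interactions under the current stochastic policy
    S = rng.uniform(0.5, 2.0, size=32)
    Phi = np.stack([np.ones_like(S), S], axis=1)
    A = Phi @ w + rng.normal(0.0, sigma, size=32)
    R = np.array([noisy_reward(s, a) for s, a in zip(S, A)])
    # reward-weighted regression: refit the policy toward well-rewarded actions
    weights = np.exp((R - R.max()) / 0.2)
    W = np.diag(weights)
    w = np.linalg.solve(Phi.T @ W @ Phi, Phi.T @ W @ A)
    sigma = max(0.05, sigma * 0.98)       # anneal exploration

print("learned linear mapping (true optimum is [0.3, 0.8]):", w)
```

The key point matching the abstract: the policy is improved from noisy scalar feedback alone, with no model of the human or of the true reward.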
Shared control of human and robot by approximate dynamic programming
This paper aims at proposing a general framework of human-robot shared control for a natural and effective interface. A typical human-robot collaboration scenario is investigated, and a framework of shared control is developed based on finding the solution to an optimization problem. Human dynamics are taken into account in the analysis of the coupled human-robot system, and the objectives of both human and robot are considered. Approximate dynamic programming is employed to solve the optimization problem in the presence of unknown human and robot dynamics. The validity of the proposed method is verified through simulation studies.
A framework of human–robot coordination based on game theory and policy iteration
In this paper, we propose a framework to analyze the interactive behaviors of human and robot in physical interactions. Game theory is employed to describe the system under study, and policy iteration is adopted to provide a solution of Nash equilibrium. The human's control objective is estimated based on the measured interaction force, and it is used to adapt the robot's objective such that human-robot coordination can be achieved. The validity of the proposed method is verified through a rigorous proof and experimental studies.
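The game-theoretic view can be illustrated with a scalar linear-quadratic sketch. Note that the paper's policy iteration works from measured interaction data; the alternating best-response iteration below is a model-based stand-in that converges to a feedback Nash equilibrium in this toy setting (the dynamics and cost weights are arbitrary assumptions):

```python
def riccati_gain(a, b, q, r, iters=1000):
    """Scalar infinite-horizon discrete-time LQR gain via value iteration."""
    p = q
    for _ in range(iters):
        k = a * b * p / (r + b * b * p)
        p = q + a * p * (a - b * k)
    return a * b * p / (r + b * b * p)

# scalar two-player game: x+ = a*x + b1*u1 + b2*u2, each player i pays
# sum of q_i*x^2 + r_i*u_i^2 under state feedback u_i = -k_i*x
a, b1, b2 = 1.1, 1.0, 1.0
q1 = q2 = 1.0
r1 = r2 = 1.0

k1 = k2 = 0.0
for _ in range(200):   # alternate best responses until neither gain changes
    k1 = riccati_gain(a - b2 * k2, b1, q1, r1)
    k2 = riccati_gain(a - b1 * k1, b2, q2, r2)

print("Nash feedback gains:", k1, k2)
print("closed-loop pole:", a - b1 * k1 - b2 * k2)
```

At the fixed point each gain is optimal against the other, which is exactly the Nash condition; the symmetric costs here yield identical gains and a stable closed loop.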
Language-Grounded Control for Coordinated Robot Motion and Speech
Recent advancements have enabled human-robot collaboration through physical assistance and verbal guidance. However, limitations persist in coordinating robots' physical motions and speech in response to real-time changes in human behavior during collaborative contact tasks. We first derive principles from analyzing physical therapists' movements and speech during patient exercises. These principles are translated into control objectives to: 1) guide users through trajectories, 2) control motion and speech pace to align completion times with varying user cooperation, and 3) dynamically paraphrase speech along the trajectory. We then propose a Language Controller that synchronizes motion and speech, modulating both based on user cooperation. Experiments with 12 users show the Language Controller successfully aligns motion and speech compared to baselines. This provides a framework for fluent human-robot collaboration.
Comment: Under review in ICRA 202