48 research outputs found
Feedback error learning at the muscle level: A unified model of human motor adaptation to stable and unstable dynamics
Master of Engineering thesis
Reinforcement learning control for coordinated manipulation of multi-robots
In this paper, coordination control is investigated for multiple robots manipulating an object along a common desired trajectory. Both trajectory tracking and control input minimization are considered for each individual robot manipulator, so that possible disagreement between different manipulators can be handled. Reinforcement learning is employed to cope with the unknown dynamics of both the robots and the manipulated object. It is rigorously proven that the proposed method guarantees coordination control of the multi-robot system under study. The validity of the proposed method is verified through simulation studies.
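The input-minimization side of this problem can be illustrated with a minimal sketch. Suppose the manipulators must jointly supply a net force F on the object, and each robot i penalizes its own effort with a weight r_i (the weights and the scalar-force setting are illustrative assumptions, not the paper's formulation); the minimum-effort split then follows from a Lagrange-multiplier argument.

```python
def allocate_force(F, effort_weights):
    """Split a required net force F among robots so that
    sum_i r_i * u_i^2 is minimized subject to sum_i u_i = F.
    The Lagrange conditions give u_i proportional to 1/r_i."""
    inv = [1.0 / r for r in effort_weights]
    scale = F / sum(inv)
    return [scale * w for w in inv]

# Two robots, the second penalizing its effort twice as much:
u = allocate_force(3.0, [1.0, 2.0])  # -> [2.0, 1.0]
```

The robot with the cheaper effort takes the larger share, while the commands always sum to the force the task demands.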
A framework of human–robot coordination based on game theory and policy iteration
In this paper, we propose a framework to analyze the interactive behaviors of the human and the robot in physical interaction. Game theory is employed to describe the system under study, and policy iteration is adopted to compute the Nash equilibrium solution. The human’s control objective is estimated from the measured interaction force and used to adapt the robot’s objective, so that human-robot coordination can be achieved. The validity of the proposed method is verified through a rigorous proof and experimental studies.
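How iterated policy updates reach a Nash equilibrium can be sketched with a static two-player quadratic game (a toy stand-in for the paper's dynamic game; the costs q_i, r_i and the target d are hypothetical):

```python
def nash_by_best_response(d, q1, r1, q2, r2, iters=100):
    """Each agent i minimizes q_i*(u1 + u2 - d)^2 + r_i*u_i^2.
    Setting the gradient to zero gives the best response
    u_i = q_i*(d - u_j)/(q_i + r_i); iterating these responses
    converges to the Nash equilibrium of this convex game."""
    u1 = u2 = 0.0
    for _ in range(iters):
        u1 = q1 * (d - u2) / (q1 + r1)
        u2 = q2 * (d - u1) / (q2 + r2)
    return u1, u2

u1, u2 = nash_by_best_response(d=1.0, q1=1.0, r1=1.0, q2=1.0, r2=1.0)
# in this symmetric game both efforts settle at d/3
```

At the fixed point neither agent can lower its own cost by changing its action unilaterally, which is exactly the equilibrium condition the iteration solves for.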
Continuous critic learning for robot control in physical human-robot interaction
In this paper, optimal impedance adaptation is investigated for interaction control in constrained motion. The external environment is modeled as a linear system with completely unknown parameter matrices, and continuous critic learning is adopted for interaction control. The desired impedance is obtained, leading to an optimal realization of trajectory tracking and force regulation. As no particular system information is required in the process, the proposed interaction control provides a feasible solution for a wide range of applications. The validity of the proposed method is verified through simulation studies.
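The model-free flavor of this can be sketched in one dimension: the environment is simulated as a linear spring of unknown stiffness ke (all numbers here are illustrative, not from the paper), and the robot adapts its impedance stiffness K by descending a measured cost Q*e^2 + R*f^2, never reading ke in the update.

```python
def measured_cost(K, ke=10.0, xd=1.0, xe=0.0, Q=1.0, R=0.01):
    """Steady-state contact of an impedance controller of stiffness K
    with a linear spring environment f = ke*(x - xe). Returns the
    measured cost Q*e^2 + R*f^2; the learner only observes this value."""
    x = (K * xd + ke * xe) / (K + ke)   # force balance K*(xd - x) = ke*(x - xe)
    e = xd - x                          # tracking error
    f = ke * (x - xe)                   # contact force
    return Q * e * e + R * f * f

# Model-free stiffness adaptation via finite-difference gradient descent.
K, lr, delta = 1.0, 100.0, 1e-3
for _ in range(300):
    grad = (measured_cost(K + delta) - measured_cost(K - delta)) / (2 * delta)
    K -= lr * grad
# for these values the analytic optimum is K* = Q/(R*ke) = 10
```

A stiffer K reduces tracking error but raises contact force, so the adapted stiffness settles at the trade-off point Q/(R*ke) even though ke itself is never identified.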
Continuous role adaptation for human-robot shared control
In this paper, we propose a role adaptation method for human-robot shared control. Game theory is employed for fundamental analysis of this two-agent system. An adaptation law is developed such that the robot is able to adjust its own role according to the human’s intention to lead or follow, which is inferred through the measured interaction force. In the absence of human interaction forces, the adaptive scheme allows the robot to take the lead and complete the task by itself. On the other hand, when the human persistently exerts strong forces that signal an unambiguous intent to lead, the robot yields and becomes the follower. Additionally, the full spectrum of mixed roles between these extreme scenarios is afforded by continuous online update of the control that is shared between both agents. Theoretical analysis shows that the resulting shared control is optimal with respect to a two-agent coordination game. Experimental results demonstrate better overall performance, in terms of both error and effort, compared to fixed-role interactions.
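The continuous role update can be sketched as a first-order filter on a sharing factor alpha in [0, 1] (the gains c and eta and the specific force-to-target mapping are illustrative assumptions, not the paper's exact adaptation law):

```python
def update_role(alpha, f_human, c=1.0, eta=0.1):
    """Move the robot's leadership share alpha toward a target that
    shrinks as the human's interaction force grows: alpha -> 1 when
    the human is passive (robot leads), alpha -> 0 under strong
    sustained forces (robot yields and follows)."""
    target = 1.0 / (1.0 + c * abs(f_human))
    return alpha + eta * (target - alpha)

alpha = 0.5
for _ in range(100):          # no human force: robot takes the lead
    alpha = update_role(alpha, f_human=0.0)
robot_leads = alpha           # close to 1
for _ in range(100):          # strong persistent force: robot yields
    alpha = update_role(alpha, f_human=20.0)
robot_follows = alpha         # small leadership share
```

Because alpha varies continuously, every mixed role between pure leading and pure following is reachable, and a transient force spike only nudges the role rather than flipping it.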
Role adaptation of human and robot in collaborative tasks
In this paper, a role adaptation method is developed for human-robot collaboration based on game theory. This role adaptation is engaged whenever the interaction force changes, causing the proportion of control sharing between human and robot to vary. In one boundary condition, the robot takes full control of the system when there is no human intervention. In the other boundary condition, it becomes a follower when the human exhibits a strong intention to lead the task. Experimental results show that the proposed method yields better overall performance than fixed-role interactions.
Adaptive optimal control for coordination in physical human-robot interaction