Goal Set Inverse Optimal Control and Iterative Re-planning for Predicting Human Reaching Motions in Shared Workspaces
To enable safe and efficient human-robot collaboration in shared workspaces,
it is important for the robot to predict how a human will move when performing
a task. While predicting human motion for tasks not known a priori is very
challenging, we argue that single-arm reaching motions for known tasks in
collaborative settings (which are especially relevant for manufacturing) are
indeed predictable. Two hypotheses underlie our approach for predicting such
motions: First, that the trajectory the human performs is optimal with respect
to an unknown cost function, and second, that human adaptation to their
partner's motion can be captured well through iterative re-planning with the
above cost function. The key to our approach is thus to learn a cost function
which "explains" the motion of the human. To do this, we gather example
trajectories from pairs of participants performing a collaborative assembly
task using motion capture. We then use Inverse Optimal Control to learn a cost
function from these trajectories. Finally, we predict reaching motions from the
human's current configuration to a task-space goal region by iteratively
re-planning a trajectory using the learned cost function. Our planning
algorithm is based on the trajectory optimizer STOMP; it plans for a 23-DoF
human kinematic model and accounts for the presence of a moving collaborator
and obstacles in the environment. Our results suggest that in most cases, our
method outperforms baseline methods when predicting motions. We also show that
our method outperforms baselines for predicting human motion when a human and a
robot share the workspace.
Comment: 12 pages. Accepted for publication in IEEE Transactions on Robotics, 201
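As a rough illustration of the cost-learning step described above, the sketch below learns weights over two hand-picked trajectory features (path length and squared acceleration, chosen here for illustration rather than taken from the paper's feature set) with a max-margin-style update, so that a demonstrated trajectory scores cheaper than perturbed alternatives:

```python
import numpy as np

def trajectory_features(traj):
    """Cost features for an (T, d) array of waypoints: total path length
    and sum of squared accelerations. Both are illustrative assumptions."""
    vel = np.diff(traj, axis=0)
    acc = np.diff(vel, axis=0)
    return np.array([np.sum(np.linalg.norm(vel, axis=1)),
                     np.sum(acc ** 2)])

def learn_weights(demo, perturbed, lr=0.01, iters=200):
    """Max-margin-style IOC sketch: adjust weights until the demonstration
    is cheaper than every perturbed alternative by a unit margin."""
    w = np.ones(2)
    f_demo = trajectory_features(demo)
    for _ in range(iters):
        for alt in perturbed:
            margin = w @ f_demo - w @ trajectory_features(alt)
            if margin > -1.0:  # demo not sufficiently cheaper yet
                w -= lr * (f_demo - trajectory_features(alt))
        w = np.maximum(w, 1e-6)  # keep the cost function positive
    return w
```

With a straight-line demonstration and noisy copies of it as alternatives, the learned weights make the demonstration the cheapest trajectory under the recovered cost, which is the property the prediction step then exploits.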
Shared control of human and robot by approximate dynamic programming
This paper proposes a general framework of human-robot shared control for a natural and effective interface. A typical human-robot collaboration scenario is investigated, and a framework of shared control is developed by solving an optimization problem. Human dynamics are taken into account in the analysis of the coupled human-robot system, and the objectives of both the human and the robot are considered. Approximate dynamic programming is employed to solve the optimization problem in the presence of unknown human and robot dynamics. The validity of the proposed method is verified through simulation studies.
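For context, the optimization at the heart of such a scheme has a classical dynamic-programming solution when the dynamics are known: the discrete-time Riccati recursion below computes the fixed point that approximate dynamic programming estimates when the system matrices are unavailable. The matrices here are illustrative assumptions, not the paper's coupled human-robot model:

```python
import numpy as np

def dp_lqr(A, B, Q, R, iters=500):
    """Iterate the discrete-time Riccati recursion to its fixed point.

    Returns the value-function matrix P and the optimal feedback gain K
    for the LQR problem x' = A x + B u with stage cost x'Qx + u'Ru.
    This known-dynamics baseline is what ADP approximates from data.
    """
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return P, K
```

For a double-integrator-like system, the resulting gain K stabilizes the closed loop A - BK, which is the behaviour an ADP scheme must recover without access to A and B.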
Reinforcement learning for human-robot shared control
This paper proposes a general framework of shared control for human-robot interaction. Human dynamics are considered in the analysis of the coupled human-robot system. The motion intentions of both the human and the robot are taken into account in the robot's control objective. Reinforcement learning is developed to achieve the control objective subject to the unknown dynamics of the human and the robot. The closed-loop system performance is established through a rigorous proof. Simulations demonstrate the learning capability of the proposed method and its feasibility in handling various situations. Compared to existing works, the proposed framework combines the motion intentions of both human and robot in a shared control system without requiring knowledge of the human and robot dynamics.
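A minimal tabular sketch of the learning machinery involved, on a toy chain environment with hypothetical states and rewards, far simpler than the coupled human-robot system with unknown dynamics that the paper treats:

```python
import random

def q_learning(n_states=5, n_actions=2, episodes=3000, lr=0.1,
               gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a toy chain: action 1 moves right, action 0
    moves left, and only reaching the rightmost state pays reward 1."""
    rng = random.Random(seed)
    # Optimistic initialisation encourages early exploration.
    Q = [[1.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = rng.randrange(n_states - 1)  # random non-terminal start
        for _ in range(20):
            if rng.random() < eps:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Terminal state has zero continuation value.
            target = r if s2 == n_states - 1 else r + gamma * max(Q[s2])
            Q[s][a] += lr * (target - Q[s][a])
            if r > 0:
                break
            s = s2
    return Q
```

After training, the greedy policy moves right from every non-terminal state, i.e. the learned action values encode the goal-directed behaviour without a model of the transition dynamics.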
Human–Robot Role Arbitration via Differential Game Theory
The industry needs controllers that allow smooth and natural physical Human-Robot Interaction (pHRI) to make production scenarios more flexible and user-friendly. Within this context, particularly interesting is Role Arbitration, the mechanism that assigns the role of the leader to either the human or the robot. This paper investigates Game Theory (GT) to model pHRI; specifically, Cooperative Game Theory (CGT) and Non-Cooperative Game Theory (NCGT) are considered. This work proposes a solution to the Role Arbitration problem and defines a Role Arbitration framework based on differential game theory to allow pHRI. The proposed method allows trajectory deformation according to human will while avoiding dangerous situations such as collisions with environmental features, robot joint and workspace limits, and safety-constraint violations. Three sets of experiments are proposed to evaluate different situations and are compared with two standard methods for pHRI: Impedance Control (IMP) and Manual Guidance (MG). Experiments show that with our Role Arbitration method, different situations can be handled safely and smoothly with low human effort. In particular, the performance of IMP and MG varies according to the task: in some cases MG performs well and IMP does not, while in others IMP performs excellently and MG does not. The proposed Role Arbitration controller performs well in all cases, showing its superiority and generality. The proposed method generally requires less force and ensures better accuracy in performing all tasks than the standard controllers. Note to Practitioners—This work presents a method that allows role arbitration for physical Human-Robot Interaction, motivated by the need to adjust the role of leader/follower in a shared task according to the specific phase of the task or the knowledge of one of the two agents.
This method suits applications such as object co-transportation, which requires precise final positioning but allows some trajectory deformation on the fly. It can also handle situations where the carried object occludes the human's sight, and the robot helps the human avoid environmental obstacles and position the object at the target pose precisely. Currently, this method does not consider external contact, which is likely to arise in many situations. Future studies will investigate the modeling and detection of external contacts to include them in the interaction models this work addresses.
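A deliberately simplified sketch of the role-arbitration idea: a force-dependent leadership weight that blends human and robot commands. The thresholds and linear blend here are assumptions made for illustration; the paper instead derives the arbitration from a differential game:

```python
import numpy as np

def arbitration_weight(human_force, f_low=1.0, f_high=10.0):
    """Map measured human force magnitude to a leadership weight in [0, 1]:
    0 means the robot leads (position tracking), 1 means the human leads
    (full compliance). The thresholds f_low/f_high are illustrative."""
    mag = np.linalg.norm(human_force)
    return float(np.clip((mag - f_low) / (f_high - f_low), 0.0, 1.0))

def blended_command(alpha, robot_cmd, human_cmd):
    """Convex blend of the robot's and the human's reference commands."""
    return (1.0 - alpha) * np.asarray(robot_cmd) + alpha * np.asarray(human_cmd)
```

With no measured force the robot's reference is followed exactly, and a strong human push hands over full leadership; the game-theoretic formulation in the paper replaces this hand-tuned map with an equilibrium-based one.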
Learning Task Constraints from Demonstration for Hybrid Force/Position Control
We present a novel method for learning hybrid force/position control from
demonstration. We learn a dynamic constraint frame aligned to the direction of
desired force using Cartesian Dynamic Movement Primitives. In contrast to
approaches that utilize a fixed constraint frame, our approach easily
accommodates tasks with rapidly changing task constraints over time. We
activate only one degree of freedom for force control at any given time,
ensuring motion is always possible orthogonal to the direction of desired
force. Since we utilize demonstrated forces to learn the constraint frame, we
are able to compensate for forces not detected by methods that learn only from
the demonstrated kinematic motion, such as frictional forces between the
end-effector and the contact surface. We additionally propose novel extensions
to the Dynamic Movement Primitive (DMP) framework that encourage robust
transition from free-space motion to in-contact motion in spite of environment
uncertainty. We incorporate force feedback and a dynamically shifting goal to
reduce forces applied to the environment and retain stable contact while
enabling force control. Our methods exhibit low impact forces on contact and
low steady-state tracking error.
Comment: Under review
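A minimal 1-D sketch of a DMP transformation system with a dynamically shifting goal, using common textbook gains and omitting the learned forcing term; these are assumptions for illustration rather than the paper's full Cartesian formulation:

```python
import numpy as np

def dmp_rollout(y0, g, T=1.0, dt=0.01, alpha=25.0, beta=6.25, goal_rate=0.0):
    """Roll out a 1-D DMP transformation system toward goal g.

    When goal_rate > 0, the effective goal is low-pass filtered from y0
    toward g instead of jumping, which mimics the shifting-goal idea of
    softening the transient when contact is made. Gains alpha/beta are
    standard critically-damped values, used here as assumptions.
    """
    y, dy = y0, 0.0
    g_cur = y0 if goal_rate > 0 else g
    traj = [y]
    for _ in range(int(T / dt)):
        if goal_rate > 0:
            g_cur += goal_rate * (g - g_cur) * dt  # shifted goal
        ddy = alpha * (beta * (g_cur - y) - dy)    # spring-damper dynamics
        dy += ddy * dt
        y += dy * dt
        traj.append(y)
    return np.array(traj)
```

Both variants converge to the goal, but the shifted-goal rollout approaches it more gradually early on, which is the property exploited to reduce impact forces at the free-space-to-contact transition.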