Learning Dynamic Robot-to-Human Object Handover from Human Feedback
Object handover is a basic but essential capability for robots interacting
with humans in many applications, e.g., caring for the elderly and assisting
workers in manufacturing workshops. It appears deceptively simple, as humans
perform object handover almost flawlessly. The success of humans, however,
belies the complexity of object handover as collaborative physical interaction
between two agents with limited communication. This paper presents a learning
algorithm for dynamic object handover, for example, when a robot hands over
water bottles to marathon runners passing by the water station. We formulate
the problem as contextual policy search, in which the robot learns object
handover by interacting with the human. A key challenge here is to learn the
latent reward of the handover task under noisy human feedback. Preliminary
experiments show that the robot learns to hand over a water bottle naturally
and that it adapts to the dynamics of human motion. One challenge for the
future is to combine the model-free learning algorithm with a model-based
planning approach and enable the robot to adapt over human preferences and
object characteristics, such as shape, weight, and surface texture.
Comment: Appears in the Proceedings of the International Symposium on Robotics Research (ISRR) 201
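The contextual policy search formulation described in this abstract can be sketched in a few lines: a linear-Gaussian policy maps a context (here, a hypothetical runner speed) to a handover parameter, and a reward-weighted update learns from noisy scalar feedback. The task, the latent reward, and the update rule below are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def latent_reward(context, action):
    # Hypothetical task: the best handover parameter tracks the context.
    return -abs(action - 2.0 * context)

def noisy_feedback(context, action, noise=0.3):
    # Human feedback: a noisy observation of the latent reward.
    return latent_reward(context, action) + rng.normal(0.0, noise)

# Linear-Gaussian contextual policy: action ~ N(w * context, sigma^2)
w, sigma = 0.0, 1.0
for _ in range(200):
    contexts = rng.uniform(0.5, 1.5, size=20)
    actions = w * contexts + rng.normal(0.0, sigma, size=20)
    scores = np.array([noisy_feedback(c, a) for c, a in zip(contexts, actions)])
    weights = np.exp(scores - scores.max())   # reward-weighted regression
    w = np.sum(weights * actions * contexts) / np.sum(weights * contexts**2)
    sigma = max(0.05, 0.95 * sigma)           # anneal exploration over trials
```

Despite the noise in the feedback, the reward weighting concentrates the updates on well-scored rollouts, so the policy parameter drifts toward the latent optimum.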
Object Handovers: a Review for Robotics
This article surveys the literature on human-robot object handovers. A
handover is a collaborative joint action where an agent, the giver, gives an
object to another agent, the receiver. The physical exchange starts when the
receiver first contacts the object held by the giver and ends when the giver
fully releases the object to the receiver. However, important cognitive and
physical processes begin before the physical exchange, including initiating
implicit agreement with respect to the location and timing of the exchange.
From this perspective, we structure our review into the two main phases
delimited by the aforementioned events: 1) a pre-handover phase, and 2) the
physical exchange. We focus our analysis on the two actors (giver and receiver)
and report the state of the art of robotic givers (robot-to-human handovers)
and the robotic receivers (human-to-robot handovers). We report a comprehensive
list of qualitative and quantitative metrics commonly used to assess the
interaction. While focusing our review on the cognitive level (e.g.,
prediction, perception, motion planning, learning) and the physical level
(e.g., motion, grasping, grip release) of the handover, we briefly discuss also
the concepts of safety, social context, and ergonomics. We compare the
behaviours displayed during human-to-human handovers to the state of the art of
robotic assistants, and identify the major areas of improvement for robotic
assistants to reach performance comparable to human interactions. Finally, we
propose a minimal set of metrics that should be used in order to enable a fair
comparison among the approaches.
Comment: Review paper, 19 pages
Object Transfer Point Estimation for Prompt Human to Robot Handovers
Handing over objects is the foundation of many human-robot interaction and collaboration tasks. In the scenario where a human hands an object to a robot, the human chooses where the object is to be transferred. The robot needs to predict this point of transfer accurately so that it can reach out proactively, instead of waiting for the final position to be presented. We first conduct a human-to-robot handover motion study to analyze the effect of user height, arm length, position, orientation, and robot gaze on the object transfer point. Our study presents new observations on the effect of the robot's gaze on the point of object transfer. Next, we present an efficient method for predicting the Object Transfer Point (OTP), which synthesizes (1) an offline OTP calculated from human preferences observed in the human-robot motion study with (2) a dynamic OTP predicted from the observed human motion. Our proposed OTP predictor is implemented on a humanoid nursing robot and experimentally validated in human-robot handover tasks. Compared to using only static or dynamic OTP estimators, it has better accuracy in the earlier phase of the handover (up to 45% of the handover motion) and can render fluent handovers with a reach-to-grasp response time (about 3.1 seconds) close to a natural human receiver's response. In addition, the OTP prediction accuracy is maintained across the robot's visible workspace by utilizing a user-adaptive reference frame.
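The static/dynamic synthesis described in this abstract can be illustrated with a minimal blend: an offline prior is gradually replaced by an extrapolation of the observed hand motion as the handover progresses. The blending schedule, the 45% crossover point (borrowed from the accuracy figure in the abstract), and all numeric values are illustrative assumptions, not the paper's method.

```python
import numpy as np

def blended_otp(static_otp, hand_pos, hand_vel, t, horizon=1.0):
    """Blend an offline OTP prior with a dynamic prediction.

    static_otp : offline estimate from user preferences (e.g. height, arm length)
    hand_pos, hand_vel : current observed hand position (m) and velocity (m/s)
    t : normalized handover progress in [0, 1]
    """
    # Dynamic estimate: extrapolate the hand along its current velocity.
    dynamic_otp = hand_pos + hand_vel * horizon * (1.0 - t)
    # Trust the dynamic estimate more as motion is observed.
    alpha = min(1.0, t / 0.45)   # fully dynamic after ~45% of the motion
    return (1.0 - alpha) * static_otp + alpha * dynamic_otp

static = np.array([0.6, 0.0, 1.1])   # hypothetical prior transfer point (m)
pos = np.array([0.2, 0.1, 1.0])
vel = np.array([0.5, -0.1, 0.1])
early = blended_otp(static, pos, vel, t=0.0)   # equals the static prior
```

At t = 0 the prediction falls back entirely on the prior; by the crossover point it relies only on the observed motion, which matches the intuition of being accurate early while converging to the human's actual choice.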
Adaptive timing in a dynamic field architecture for natural human–robot interactions
A close temporal coordination of actions and goals is crucial for natural and fluent human–robot interactions in collaborative tasks. How to endow an autonomous robot with a basic temporal cognition capacity is an open question. In this paper, we present a neurodynamics approach based on the theoretical framework of dynamic neural fields (DNF), which assumes that timing processes are closely integrated with other cognitive computations. The continuous evolution of neural population activity towards an attractor state provides an implicit sensation of the passage of time. Highly flexible sensorimotor timing can be achieved through manipulations of inputs or initial conditions that affect the speed with which the neural trajectory evolves. We test a DNF-based control architecture in an assembly paradigm in which an assistant hands over a series of pieces which the operator uses, among others, in the assembly process. By watching two experts, the robot first learns the serial order and relative timing of object transfers, to subsequently substitute the assistant in the collaborative task. A dynamic adaptation rule exploiting a perceived temporal mismatch between the expected and the realized transfer timing allows the robot to quickly adapt its proactive motor timing to the pace of the operator, even when an additional assembly step delays a handover. Moreover, the self-stabilizing properties of the population dynamics support the fast internal simulation of acquired task knowledge, allowing the robot to anticipate serial order errors.
This work is financed by national funds through FCT – Fundação para a Ciência e a Tecnologia, I.P., within the scope of the projects ‘‘NEUROFIELD’’ (Ref PTDC/MAT-APL/31393/2017) and ‘‘I-CATER – Intelligent Robotic Coworker Assistant for Industrial Tasks with an Ergonomics Rationale’’ (Ref PTDC/EEI-ROB/3488/2021), and the R&D Units Project Scope UIDB/00319/2020 – ALGORITMI Research Centre.
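The core timing mechanism, neural activity evolving toward a threshold at a speed set by its input, can be sketched with a single leaky integrator. This is a one-unit caricature of a dynamic neural field, not the authors' architecture; the gains, time constant, and threshold are illustrative assumptions.

```python
def time_to_threshold(gain, threshold=1.0, dt=0.01, tau=0.5):
    """Integrate tau * du/dt = -u + gain until u crosses the threshold.

    A stronger input (gain) makes the activity ramp faster, so the
    threshold crossing, the implicit "sensation" of elapsed time,
    arrives earlier.  Returns the crossing time in seconds.
    """
    u, t = 0.0, 0.0
    while u < threshold and t < 10.0:
        u += dt * (-u + gain) / tau   # forward-Euler step of the dynamics
        t += dt
    return t

fast = time_to_threshold(gain=2.0)   # strong input -> early crossing
slow = time_to_threshold(gain=1.2)   # weak input -> late crossing
```

Manipulating the input strength rescales when the attractor threshold is reached, which is the mechanism the abstract invokes for flexibly speeding up or slowing down the robot's proactive motor timing.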
Human-robot interaction using a behavioural control strategy
PhD Thesis
A topical and important aspect of robotics research is human-robot interaction (HRI), which addresses the issue of cooperation between a human and a robot so that tasks can be shared in a safe and reliable manner. This thesis focuses on the design and development of an appropriate set of behaviour strategies for human-robot interactive control, first by understanding how an equivalent human-human interaction (HHI) can be used to establish a framework for a robotic behaviour-based approach. To achieve this goal, two preliminary HHI experimental investigations were initiated in this study. The first was designed to evaluate the human dynamic response using a one degree-of-freedom (DOF) HHI rectilinear test in which the handler passes a compliant object to the receiver along a constrained horizontal path. The human dynamic response while executing the HHI rectilinear task was investigated using a Box-Behnken design of experiments [Box and Hunter, 1957] and was based on the McRuer crossover model [McRuer et al., 1995].
To mimic a real-world human-human object handover task, where the handler passes an object to the receiver in a 3D workspace, a second, more substantive one-DOF HHI baton handover task was developed. The HHI object handover tests were designed to understand the dynamic behavioural characteristics of the human participants, in which the handler was required to dexterously pass an object to the receiver in a timely and natural manner. The profiles of the interactive forces between handler and receiver were measured as a function of time, and how they are modulated while performing the tasks was evaluated. Three key parameters were used to identify the physical characteristics of the human participants: peak interactive force (fmax), transfer time (Ttrf), and work done (W). These variables were subsequently used to design and develop an appropriate set of force and velocity control strategies for a six-DOF Stäubli robot manipulator arm (TX60) working in a human-robot interactive environment. The optimal design of the software and hardware controller implementation for the robot system was successfully established in keeping with a behaviour-based approach. External force control based on proportional plus integral (PI) and fuzzy logic control (FLC) algorithms was adopted to control the robot end-effector velocity and interactive force in real time.
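The external PI force control loop described above can be sketched as follows: the force-tracking error drives an end-effector velocity command, here exercised against a toy one-dimensional stiffness contact model. The gains, the contact stiffness, and the 10 N setpoint are illustrative assumptions, not values from the thesis.

```python
class PIForceController:
    """Map a force-tracking error to an end-effector velocity command."""

    def __init__(self, kp=0.002, ki=0.01, dt=0.01):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, f_desired, f_measured):
        error = f_desired - f_measured
        self.integral += error * self.dt
        # Velocity command (m/s): proportional plus integral action.
        return self.kp * error + self.ki * self.integral

# Toy 1-DOF contact: the measured force grows with displacement (stiffness k).
k, x = 500.0, 0.0          # contact stiffness (N/m), displacement (m)
ctrl = PIForceController()
for _ in range(2000):      # 20 s of simulated time at dt = 0.01 s
    f = k * x
    x += ctrl.step(10.0, f) * ctrl.dt
```

The integral term removes the steady-state force error that a pure proportional law would leave against the compliant contact, which is why the thesis adopts PI (and later FLC) rather than plain P control.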
The results of interactive experiments with human-to-robot and robot-to-human handover tasks allowed a comparison of the PI and FLC control strategies. The measured performance of the robot velocity and force control was acceptable for human-robot interaction: both strategies provided effective performance during the robot-human object handover tasks, in which the robot was able to successfully pass the object to or from the human in a safe, reliable and timely manner. However, careful analysis of the human-robot handover test results showed the FLC scheme to be superior to PI control, actively compensating for the dynamics of the non-linear system and demonstrating better overall performance and stability. The FLC also showed improved sensitivity to small error changes compared to PI control, which is an advantage in establishing effective robot force control. Survey responses from the participants agreed with the parallel test outcomes, indicating significant satisfaction with the overall performance of the human-robot interactive system, with an average rating of 4.06 on a five-point scale.
In brief, this research has contributed the foundations for long-term research, particularly in the development of an interactive real-time robot force control system, which enables the robot manipulator arm to cooperate with a human to facilitate the dexterous transfer of objects in a safe and speedy manner.
Thai government and Prince of Songkla University (PSU
Autonomous Object Handover Using Wrist Tactile Information
Grasping in an uncertain environment is a topic of great
interest in robotics. In this paper we focus on the challenge of object
handover capable of coping with a wide range of different and unspecified
objects. Handover is the action of passing an object from one agent
to another. In this work, handover is performed from human to robot. We
present a robust method that relies only on force information from
the wrist and does not use any vision or tactile information from the
fingers. By analyzing readings from a wrist force sensor, models of the
tactile response for receiving and releasing an object were identified and
tested in validation experiments.
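A minimal version of wrist-force-based transfer detection might threshold the change in measured load, signalling that the object's weight has arrived at the gripper. The threshold, the force trace, and the detection rule below are illustrative assumptions, not the authors' identified models.

```python
def detect_transfer(forces, delta=1.5):
    """Return the index where the wrist force jumps by more than `delta`
    newtons over the initial baseline (the object's weight arriving at
    the gripper), or None if no transfer is detected."""
    baseline = forces[0]
    for i, f in enumerate(forces):
        if f - baseline > delta:
            return i
    return None

# Simulated trace: free motion, then a ~3 N object is placed in the gripper.
trace = [0.2, 0.1, 0.3, 0.2, 1.0, 2.4, 3.1, 3.0, 3.1]
idx = detect_transfer(trace)
```

A symmetric rule on a force drop would signal the release phase; in practice the baseline would be re-estimated online to reject drift and arm-motion inertial forces.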
Dynamic Grasping of Unknown Objects with a Multi-Fingered Hand
An important prerequisite for autonomous robots is their ability to reliably
grasp a wide variety of objects. Most state-of-the-art systems employ
specialized or simple end-effectors, such as two-jaw grippers, which severely
limit the range of objects that can be manipulated. Additionally, they conventionally
require a structured and fully predictable environment while the vast majority
of our world is complex, unstructured, and dynamic. This paper presents an
implementation to overcome both issues. Firstly, the integration of a
five-finger hand enhances the variety of possible grasps and manipulable
objects. This kinematically complex end-effector is controlled by a deep
learning based generative grasping network. The required virtual model of the
unknown target object is iteratively completed by processing visual sensor
data. Secondly, this visual feedback is employed to realize closed-loop servo
control which compensates for external disturbances. Our experiments on real
hardware confirm the system's capability to reliably grasp unknown dynamic
target objects without a priori knowledge of their trajectories. To the best of
our knowledge, this is the first method to achieve dynamic multi-fingered
grasping for unknown objects. A video of the experiments is available at
https://youtu.be/Ut28yM1gnvI.
Comment: ICRA202
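The closed-loop visual servoing idea, re-measuring the target each frame and correcting the end-effector motion proportionally, can be sketched as follows. The proportional law, the gain, and the drifting-target model are illustrative assumptions rather than the paper's deep-learning pipeline.

```python
import numpy as np

def servo_step(ee_pos, target_pos, gain=0.3):
    """One frame of proportional position-based visual servoing:
    move the end effector a fraction of the way toward the target."""
    return ee_pos + gain * (target_pos - ee_pos)

ee = np.zeros(3)                          # end-effector position (m)
target = np.array([0.5, 0.2, 0.3])        # perceived object position (m)
for frame in range(40):
    target = target + np.array([0.01, 0.0, 0.0])  # object drifts each frame
    ee = servo_step(ee, target)           # feedback compensates the drift
```

Because the correction is recomputed from fresh visual feedback every frame, the loop tracks the moving object with a small bounded lag instead of failing the way an open-loop, pre-planned trajectory would.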