Object Handovers: a Review for Robotics
This article surveys the literature on human-robot object handovers. A
handover is a collaborative joint action where an agent, the giver, gives an
object to another agent, the receiver. The physical exchange starts when the
receiver first contacts the object held by the giver and ends when the giver
fully releases the object to the receiver. However, important cognitive and
physical processes begin before the physical exchange, including initiating
implicit agreement with respect to the location and timing of the exchange.
From this perspective, we structure our review into the two main phases
delimited by the aforementioned events: 1) a pre-handover phase, and 2) the
physical exchange. We focus our analysis on the two actors (giver and receiver)
and report the state of the art of robotic givers (robot-to-human handovers)
and robotic receivers (human-to-robot handovers). We report a comprehensive
list of qualitative and quantitative metrics commonly used to assess the
interaction. While focusing our review on the cognitive level (e.g.,
prediction, perception, motion planning, learning) and the physical level
(e.g., motion, grasping, grip release) of the handover, we also briefly discuss
the concepts of safety, social context, and ergonomics. We compare the
behaviours displayed during human-to-human handovers to the state of the art of
robotic assistants, and identify the major areas of improvement for robotic
assistants to reach performance comparable to human interactions. Finally, we
propose a minimal set of metrics to enable a fair comparison among
approaches. Comment: Review paper, 19 pages.
Object Transfer Point Estimation for Prompt Human to Robot Handovers
Handing over objects is the foundation of many human-robot interaction and collaboration tasks. In the scenario where a human is handing over an object to a robot, the human chooses where the object needs to be transferred. The robot needs to accurately predict this point of transfer to reach out proactively, instead of waiting for the final position to be presented. We first conduct a human-to-robot handover motion study to analyze the effect of user height, arm length, position, orientation and robot gaze on the object transfer point. Our study presents new observations on the effect of the robot's gaze on the point of object transfer. Next, we present an efficient method for predicting the Object Transfer Point (OTP), which synthesizes (1) an offline OTP calculated based on human preferences observed in the human-robot motion study with (2) a dynamic OTP predicted based on the observed human motion. Our proposed OTP predictor is implemented on a humanoid nursing robot and experimentally validated in human-robot handover tasks. Compared to using only static or dynamic OTP estimators, it has better accuracy at the earlier phase of handover (up to 45% of the handover motion) and can render fluent handovers with a reach-to-grasp response time (about 3.1 secs) close to a natural human receiver's response. In addition, the OTP prediction accuracy is maintained across the robot's visible workspace by utilizing a user-adaptive reference frame.
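The synthesis of an offline OTP with a dynamically predicted one can be sketched as a time-varying blend of the two estimates. The linear weighting schedule and the names (`blend_otp`, `phase`) below are illustrative assumptions, not the paper's actual estimator:

```python
import numpy as np

def blend_otp(static_otp, dynamic_otp, phase):
    """Blend an offline (preference-based) OTP with an online estimate.

    phase is the handover progress in [0, 1]: early on the offline prior
    dominates; later the motion-based prediction takes over. The linear
    schedule is an illustrative assumption, not the paper's weighting.
    """
    w = float(np.clip(phase, 0.0, 1.0))
    return (1.0 - w) * np.asarray(static_otp) + w * np.asarray(dynamic_otp)

# A quarter of the way through the motion, the estimate stays close to
# the offline prior:
blend_otp([0.5, 0.0, 1.0], [0.7, 0.1, 1.2], phase=0.25)
```

Combining a prior with an observation-driven estimate in this way is what gives such a predictor its early-phase accuracy: before the observed arm motion is informative, the prior alone already points near the eventual transfer point.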
Nonverbal Communication During Human-Robot Object Handover. Improving Predictability of Humanoid Robots by Gaze and Gestures in Close Interaction
Meyer zu Borgsen S. Nonverbal Communication During Human-Robot Object Handover. Improving Predictability of Humanoid Robots by Gaze and Gestures in Close Interaction. Bielefeld: Universität Bielefeld; 2020. This doctoral thesis investigates the influence of nonverbal communication on human-robot object handover. Handing objects to one another is an everyday activity in which two individuals cooperatively interact. Such close interactions incorporate a great deal of nonverbal communication in order to create alignment in space and time. Understanding and transferring communication cues to robots becomes more and more important as, for example, service robots are expected to closely interact with humans in the near future. Their tasks often include delivering and taking objects. Thus, handover scenarios play an important role in human-robot interaction. Much work in this field of research focuses on speed, accuracy, and predictability of the robot's movement during object handover. Still, robots need to be enabled to closely interact with naive users, not only experts. In this work I present how nonverbal communication can be implemented in robots to facilitate smooth handovers. I conducted a study on people with different levels of experience exchanging objects with a humanoid robot. It became clear that users with little prior experience of interacting with robots rely heavily on the communication cues familiar to them from interactions with other humans. I added different gestures with the second arm, not directly involved in the transfer, to analyze the influence on synchronization, predictability, and human acceptance. Handing over an object follows a distinctive movement trajectory whose purpose is not only to bring the object or hand to the position of exchange but also to socially signal the intention to exchange an object. Another common type of nonverbal communication is gaze.
It reveals the focus of attention of an interaction partner and thus helps to predict the next action. In order to evaluate handover interaction performance between human and robot, I applied the developed concepts to the humanoid robot Meka M1. By adding the humanoid robot head named Floka Head to the system, I created the Floka humanoid, to implement gaze strategies that aim to increase predictability and user comfort. This thesis contributes to the field of human-robot object handover by presenting study outcomes and concepts, along with an implementation of improved software modules, resulting in a fully functional object-handing humanoid robot, from perception and prediction capabilities to behaviors enhanced by features of nonverbal communication.
Safety Assessment Strategy for Collaborative Robot Installations
Industrial resource efficiency can be improved if the safety barrier between humans and robots is removed, as this enables operators and robots to work side by side or in direct collaboration to solve a task, usually referred to as a collaborative robot installation. Even though technology development makes the barrier removal ever more feasible from a safety perspective, it still produces a potentially hazardous working environment, and safety assessment strategies are crucial. A wide area of knowledge is required to assess all fields that can help ensure safe human-machine interaction. Here the focus is primarily on describing the key fields identified, including how operators psychologically accept working with robots, and on providing a cursory description of the research front for each individual field. In addition to covering a large number of parameters, the assessment strategy also needs to be cost-effective. A significant share of the parameters that can be considered when attempting to produce optimized and cost-effective collaborative robot installations will also have a direct impact on operator safety. Hence, assessments for safety and assessments for cost-effectiveness cannot be separated, and are treated as two objectives that need to be considered in sync.
Multi-Modal Trip Hazard Affordance Detection On Construction Sites
Trip hazards are a significant contributor to accidents on construction and
manufacturing sites, where over a third of Australian workplace injuries occur
[1]. Current safety inspections are labour intensive and limited by human
fallibility, making automation of trip hazard detection appealing from both a
safety and economic perspective. Trip hazards present an interesting challenge
to modern learning techniques because they are defined as much by affordance as
by object type; for example wires on a table are not a trip hazard, but can be
if lying on the ground. To address these challenges, we conduct a comprehensive
investigation into the performance characteristics of 11 different colour and
depth fusion approaches, including 4 fusion and one non-fusion approach, using
colour and two types of depth images. Trained and tested on over 600 labelled
trip hazards over 4 floors and 2000 m in an active construction
site, this approach was able to differentiate between identical objects in
different physical configurations (see Figure 1). Outperforming a colour-only
detector, our multi-modal trip detector fuses colour and depth information to
achieve a 4% absolute improvement in F1-score. These investigative results and
the extensive publicly available dataset move us one step closer to assistive
or fully automated safety inspection systems on construction sites. Comment: 9
pages, 12 figures, 2 tables; accepted to Robotics and Automation Letters (RA-L).
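The reported 4% gain is in F1-score, the harmonic mean of detection precision and recall. A minimal helper, with illustrative operating points rather than the paper's actual values:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall; 0 when both are 0."""
    if precision + recall == 0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

# Hypothetical detector operating points (not the paper's numbers):
colour_only = f1_score(0.70, 0.66)
# A 4% absolute improvement means the fused detector scores
# colour_only + 0.04 on the same test set.
```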
Sense, Think, Grasp: A study on visual and tactile information processing for autonomous manipulation
Interacting with the environment using hands is one of the distinctive
abilities of humans with respect to other species. This aptitude is reflected
in the crucial role that object manipulation plays in the world we have
shaped for ourselves. With a view to bringing robots out of industry to
support people in everyday life, the ability to manipulate objects
autonomously in unstructured environments is therefore one of the basic
skills they need. Autonomous manipulation is characterized by great
complexity, especially regarding the processing of sensor information to
perceive the surrounding environment. Humans rely on vision for wide-ranging
three-dimensional information, proprioception for awareness of the relative
position of their own body in space, and the sense of touch for local
information when physical interaction with objects occurs. The study of
autonomous manipulation in robotics aims at transferring similar perceptive
skills to robots so that, combined with state-of-the-art control techniques,
they can achieve similar performance in manipulating objects. The great
complexity of this task makes autonomous manipulation one of the open
problems in robotics, and it has drawn increasing research attention in
recent years.
In this thesis, we propose possible solutions to some key components
of autonomous manipulation, focusing in particular on the perception
problem and testing the developed approaches on the humanoid robotic
platform iCub. When available, vision is the first source of information
to be processed for inferring how to interact with objects. The object
modeling and grasping pipeline we designed, based on superquadric functions,
meets this need: it reconstructs the object's 3D model from a partial
point cloud and computes a suitable hand pose for grasping the object.
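Superquadrics, as used in the pipeline above, are defined by a standard inside-outside function. A minimal sketch of that function (the fitting of its parameters to a partial point cloud, and the grasp pose computation, are omitted):

```python
import numpy as np

def superquadric_F(point, scale, exponents):
    """Inside-outside function of a superquadric in its canonical frame.

    scale = (a1, a2, a3) are the semi-axes; exponents = (e1, e2) control
    the shape (both equal to 1 gives an ellipsoid). F < 1 inside the
    surface, F = 1 on it, F > 1 outside.
    """
    x, y, z = np.abs(np.asarray(point, dtype=float))
    a1, a2, a3 = scale
    e1, e2 = exponents
    xy = (x / a1) ** (2.0 / e2) + (y / a2) ** (2.0 / e2)
    return xy ** (e2 / e1) + (z / a3) ** (2.0 / e1)

# A point on the surface of an ellipsoid evaluates to exactly 1:
superquadric_F((0.1, 0.0, 0.0), scale=(0.1, 0.2, 0.3), exponents=(1.0, 1.0))
```

Model fitting then amounts to minimising a cost built from F over the observed points; once the parameters are recovered, candidate hand poses can be evaluated against the reconstructed surface.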
Retrieving object information with touch sensors alone is a relevant skill
that becomes crucial when vision is occluded, as happens for instance during
physical interaction with the object. We addressed this problem by
designing a novel tactile localization algorithm, named Memory Unscented
Particle Filter, capable of localizing and recognizing objects relying solely
on 3D contact points collected on the object surface. Another key point of
autonomous manipulation we report on in this thesis is bimanual
coordination. More advanced manipulation tasks may in fact require the use
and coordination of two arms. Tool use, for instance, often requires a
proper in-hand object pose that can be obtained via dual-arm re-grasping.
In pick-and-place tasks, the initial and target positions of the object
sometimes do not lie within the same arm's workspace, requiring one hand to
lift the object and the other to place it at the new position. In this
regard, we implemented a pipeline for executing the handover task, i.e. the
sequence of actions for autonomously passing an object from one robot hand
to the other.
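The tactile localization idea, weighting object pose hypotheses by how close measured contact points lie to the hypothesized surface, can be illustrated with a toy bootstrap particle filter that localizes a sphere of known radius. This is a simplified stand-in for intuition only, not the Memory Unscented Particle Filter itself, which additionally reuses past measurements and an unscented proposal:

```python
import numpy as np

def tactile_pf_update(particles, contact, radius, sigma=0.005, rng=None):
    """One measurement update: weight each candidate sphere centre by
    how close the measured contact point lies to its surface, then
    resample. particles: (N, 3) candidate centres; contact: 3-D point."""
    rng = rng if rng is not None else np.random.default_rng(0)
    d = np.linalg.norm(particles - contact, axis=1) - radius  # surface error
    w = np.exp(-0.5 * (d / sigma) ** 2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Localize a sphere (r = 0.05 m, true centre at the origin) from three
# contact points:
rng = np.random.default_rng(1)
particles = rng.normal(0.0, 0.02, size=(500, 3))
for contact in ([0.05, 0.0, 0.0], [0.0, 0.05, 0.0], [0.0, 0.0, 0.05]):
    particles = tactile_pf_update(particles, np.asarray(contact), 0.05, rng=rng)
estimate = particles.mean(axis=0)  # lies near the true centre
```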
The contributions described thus far address specific subproblems of
the more complex task of autonomous manipulation. This actually differs
from what humans do, in that humans develop their manipulation
skills by learning through experience and trial and error. A proper
mathematical formulation for encoding this learning approach is given by
Deep Reinforcement Learning, which has recently proved successful in
many robotics applications. For this reason, this thesis also reports
on the six-month experience carried out at the Berkeley Artificial
Intelligence Research laboratory with the goal of studying Deep
Reinforcement Learning and its application to autonomous manipulation.
Human-robot interaction using a behavioural control strategy
PhD Thesis. A topical and important aspect of robotics research is in the area of human-robot interaction (HRI), which addresses the issue of cooperation between a human and a robot to allow tasks to be shared in a safe and reliable manner. This thesis focuses on the design and development of an appropriate set of behaviour strategies for human-robot interactive control by first understanding how an equivalent human-human interaction (HHI) can be used to establish a framework for a robotic behaviour-based approach. To achieve this goal, two preliminary HHI experimental investigations were initiated in this study. The first was designed to evaluate the human dynamic response using a one degree-of-freedom (DOF) HHI rectilinear test in which the handler passes a compliant object to the receiver along a constrained horizontal path. The human dynamic response while executing the HHI rectilinear task was investigated using a Box-Behnken design of experiments [Box and Hunter, 1957] and was based on the McRuer crossover model [McRuer et al. 1995].
To mimic a real-world human-human object handover task in which the handler passes an object to the receiver in a 3D workspace, a second, more substantive one-DOF HHI baton handover task was developed. The HHI object handover tests were designed to understand the dynamic behavioural characteristics of the human participants, in which the handler was required to dexterously pass an object to the receiver in a timely and natural manner. The profiles of the interactive forces between handler and receiver were measured as a function of time, and how these forces are modulated while performing the tasks was evaluated. Three key parameters were used to identify the physical characteristics of the human participants: peak interactive force (fmax), transfer time (Ttrf), and work done (W). These variables were subsequently used to design and develop an appropriate set of force and velocity control strategies for a six-DOF Stäubli robot manipulator arm (TX60) working in a human-robot interactive environment. The optimal design of the software and hardware controller implementation for the robot system was successfully established in keeping with a behaviour-based approach. External force control based on proportional plus integral (PI) and fuzzy logic control (FLC) algorithms was adopted to control the robot end-effector velocity and interactive force in real time.
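The external force control idea, mapping the error between desired and measured contact force to an end-effector velocity command, can be sketched as a discrete-time PI loop. The gains, time step, and class interface below are illustrative assumptions, not the thesis's tuned controller (the FLC variant would replace the PI law with fuzzy inference over the error and its rate):

```python
class PIForceController:
    """Discrete PI controller: force error [N] -> velocity command [m/s]."""

    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0  # accumulated force error [N*s]

    def step(self, force_desired, force_measured):
        error = force_desired - force_measured
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# One control cycle with a 2 N force error (illustrative gains):
ctrl = PIForceController(kp=0.02, ki=0.5, dt=0.01)
v_cmd = ctrl.step(force_desired=5.0, force_measured=3.0)
```

The integral term is what lets such a loop hold a steady contact force against friction and model error; its accumulation is also what the FLC's improved sensitivity to small error changes is compared against.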
The results of interactive experiments with human-to-robot and robot-to-human handover tasks allowed a comparison of the PI and FLC control strategies. The quantitative performance of the robot velocity and force control can be considered acceptable for human-robot interaction: both strategies provided effective performance during the robot-human object handover tasks, with the robot able to successfully pass the object from/to the human in a safe, reliable and timely manner. However, careful analysis of the human-robot handover test results showed the FLC scheme to be superior to PI control, actively compensating for the dynamics of the non-linear system and demonstrating better overall performance and stability. The FLC also showed improved sensitivity to small error changes compared to PI control, which is an advantage in establishing effective robot force control. The survey responses from the participants were in agreement with the parallel test outcomes, demonstrating significant satisfaction with the overall performance of the human-robot interactive system, as measured by an average rating of 4.06 on a five-point scale.
In brief, this research has laid the foundations for long-term research, particularly in the development of an interactive real-time robot force control system, which enables the robot manipulator arm to cooperate with a human to facilitate the dexterous transfer of objects in a safe and speedy manner. Funding: Thai government and Prince of Songkla University (PSU).
Foundations of Trusted Autonomy
Trusted Autonomy; Automation Technology; Autonomous Systems; Self-Governance; Trusted Autonomous Systems; Design of Algorithms and Methodologies
Upper-Limb Kinematic Parameter Estimation and Localization Using a Compliant Robotic Manipulator
Assistive and rehabilitation robotics have gained momentum over the past decade and are expected to progress significantly in the coming years. Although relevant and promising research advances have contributed to these fields, challenges regarding intentional physical contact with humans remain. Despite being a fundamental component of assistive and rehabilitation tasks, there is an evident lack of work related to robotic manipulators that intentionally manipulate human body parts. Moreover, existing solutions involving end-effector robots are not based on accurate knowledge of human limb dimensions and their current configuration. This knowledge, which is essential for safe human–limb manipulation, depends on the grasping location and the human kinematic parameters. This paper addresses the upper-limb manipulation challenge and proposes a pose estimation method using a compliant robotic manipulator; to the best of our knowledge, this is the first attempt to address this challenge. A kinesthetic-based approach enables estimation of the kinematic parameters of the human arm without integrating external sensors. The estimation method relies only on proprioceptive data obtained from a collaborative robot with a Cartesian impedance-based controller that follows a compliant trajectory depending on human arm kinodynamics. The human arm model is a 2-degree-of-freedom (DoF) kinematic chain; thus, prior knowledge of the arm's behavior, combined with an estimation method, yields the kinematic parameters. Two estimation methods are implemented and compared: i) Hough transform (HT); ii) least squares (LS). Furthermore, a resizable, sensorized dummy arm is designed for experimental validation of the proposed approach. Outcomes from six experiments with different arm lengths demonstrate the repeatability and effectiveness of the proposed methodology, which can be used in several rehabilitation robotic applications.
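The least-squares variant can be illustrated on a toy version of the problem: if the robot's end effector records wrist positions while guiding the forearm about a fixed elbow, the forearm link length is the radius of the traced arc, recoverable with an algebraic (Kåsa) least-squares circle fit. This sketch is a simplified stand-in for intuition, not the paper's exact formulation (which estimates a 2-DoF chain and also compares a Hough-transform variant); the elbow position and arm length used are hypothetical:

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit to 2-D points.

    Uses the linear form x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
    and solves for (cx, cy, c) in a single least-squares step.
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx ** 2 + cy ** 2)
    return (cx, cy), radius

# Noise-free samples on an arc of radius 0.25 m (a plausible forearm
# length) centred at a hypothetical elbow position (0.1, 0.2):
theta = np.linspace(0.0, np.pi, 20)
samples = np.column_stack([0.1 + 0.25 * np.cos(theta),
                           0.2 + 0.25 * np.sin(theta)])
centre, radius = fit_circle(samples)  # recovers the elbow and link length
```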