Learning Singularity Avoidance
As robotic systems grow in complexity and are increasingly operated by
non-expert users, task constraints can no longer be assumed to be explicitly
known. For tasks where avoiding singularities is critical to success, this
paper provides an approach, aimed especially at non-expert users, by which the
system learns the constraints contained in a set of demonstrations so that
they can be used to optimise an autonomous controller to avoid singularities,
without the task constraints ever being made explicit. The proposed approach
avoids singularities, and thereby unpredictable behaviour during task
execution, by maximising the learnt manipulability throughout the motion of
the constrained system, and it is not limited to kinematic systems. Its
benefits are demonstrated through comparisons with other control policies,
which show that the constrained manipulability of a system learnt through
demonstration can be used to avoid singularities in cases where these other
policies would fail. When the system's manipulability subject to the task's
constraints is not available, the proposed approach can instead be used to
infer it, with errors below 10^-5 on 3-DOF simulated systems and below 10^-2
on a 7-DOF real-world robotic system.
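The manipulability being maximised is conventionally Yoshikawa's measure, w(q) = sqrt(det(J(q) J(q)^T)), which is zero exactly at a singularity. A minimal sketch, assuming a simple planar 3-link arm (an illustrative system, not the paper's actual setup):

```python
# Hedged sketch: Yoshikawa's manipulability measure w(q) = sqrt(det(J J^T)),
# the standard scalar a controller can maximise to stay away from singularities.
# The planar 3-DOF arm below is an illustrative assumption, not the paper's system.
import numpy as np

def planar_3dof_jacobian(q, link_lengths=(1.0, 1.0, 1.0)):
    """Position Jacobian (2x3) of a planar 3-link arm."""
    l1, l2, l3 = link_lengths
    q1, q2, q3 = q
    s1, s12, s123 = np.sin(q1), np.sin(q1 + q2), np.sin(q1 + q2 + q3)
    c1, c12, c123 = np.cos(q1), np.cos(q1 + q2), np.cos(q1 + q2 + q3)
    return np.array([
        [-l1*s1 - l2*s12 - l3*s123, -l2*s12 - l3*s123, -l3*s123],
        [ l1*c1 + l2*c12 + l3*c123,  l2*c12 + l3*c123,  l3*c123],
    ])

def manipulability(J):
    """Yoshikawa measure: sqrt(det(J J^T)); zero exactly at a singularity."""
    return np.sqrt(max(np.linalg.det(J @ J.T), 0.0))

# A fully stretched arm (q2 = q3 = 0) is singular; a bent configuration is not.
w_singular = manipulability(planar_3dof_jacobian([0.3, 0.0, 0.0]))
w_regular  = manipulability(planar_3dof_jacobian([0.3, 0.8, -0.5]))
```

Maximising this quantity along the demonstrated, constrained motion is what keeps the controller away from configurations where the Jacobian loses rank.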
On-The-Go Robot-to-Human Handovers with a Mobile Manipulator
Existing approaches to direct robot-to-human handovers are typically
implemented on fixed-base robot arms, or on mobile manipulators that come to a
full stop before performing the handover. We propose "on-the-go" handovers
which permit a moving mobile manipulator to hand over an object to a human
without stopping. The on-the-go handover motion is generated with a reactive
controller that allows simultaneous control of the base and the arm. In a user
study, human receivers subjectively assessed on-the-go handovers to be more
efficient, predictable, natural, better timed and safer than handovers that
implemented a "stop-and-deliver" behavior.
Comment: 6 pages, 7 figures, 2 tables, submitted to RO-MAN 202
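Controlling the base and the arm simultaneously is commonly done by stacking their Jacobians and resolving a desired hand velocity across all degrees of freedom at once. A minimal resolved-rate sketch for a toy planar mobile manipulator (the paper's actual reactive controller is not specified here and will differ):

```python
# Hedged sketch of whole-body resolved-rate control: stack the base and arm
# Jacobians and distribute a desired end-effector velocity over both.
# The toy planar mobile manipulator is an assumption for illustration only.
import numpy as np

def whole_body_jacobian(q, l1=0.5, l2=0.4):
    """2x3 Jacobian: one column for base x-translation, two for arm joints."""
    q1, q2 = q
    s1, s12 = np.sin(q1), np.sin(q1 + q2)
    c1, c12 = np.cos(q1), np.cos(q1 + q2)
    J_base = np.array([[1.0], [0.0]])              # base slides along x
    J_arm = np.array([[-l1*s1 - l2*s12, -l2*s12],
                      [ l1*c1 + l2*c12,  l2*c12]])
    return np.hstack([J_base, J_arm])

def resolved_rate_step(q, v_desired):
    """Least-norm base/joint velocities achieving v_desired at the hand."""
    J = whole_body_jacobian(q)
    return np.linalg.pinv(J) @ v_desired           # [base_xdot, q1dot, q2dot]

# Command the hand toward the receiver while the base keeps moving.
dq = resolved_rate_step(np.array([0.4, 0.6]), np.array([0.1, 0.0]))
```

Because the pseudoinverse spreads the motion over base and arm together, the hand can track the receiver without the base ever coming to a stop.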
Towards safe human-to-robot handovers of unknown containers
Safe human-to-robot handovers of unknown objects require accurate estimation of hand poses and object properties, such as shape, trajectory, and weight. Accurately estimating these properties requires scanned 3D object models or expensive equipment, such as motion capture systems and markers, or both. However, testing handover algorithms with robots may be dangerous for the human and, when the object is an open container with liquids, for the robot. In this paper, we propose a real-to-simulation framework to develop safe human-to-robot handovers with estimations of the physical properties of unknown cups or drinking glasses and estimations of the human hands from videos of a human manipulating the container. We complete the handover in simulation, and we estimate a region that is not occluded by the hand of the human holding the container. We also quantify the safeness of the human and object in simulation. We validate the framework using public recordings of containers manipulated before a handover and show the safeness of the handover when using noisy estimates from a range of perceptual algorithms.
Object Handovers: a Review for Robotics
This article surveys the literature on human-robot object handovers. A
handover is a collaborative joint action where an agent, the giver, gives an
object to another agent, the receiver. The physical exchange starts when the
receiver first contacts the object held by the giver and ends when the giver
fully releases the object to the receiver. However, important cognitive and
physical processes begin before the physical exchange, including initiating
implicit agreement with respect to the location and timing of the exchange.
From this perspective, we structure our review into the two main phases
delimited by the aforementioned events: 1) a pre-handover phase, and 2) the
physical exchange. We focus our analysis on the two actors (giver and receiver)
and report the state of the art of robotic givers (robot-to-human handovers)
and the robotic receivers (human-to-robot handovers). We report a comprehensive
list of qualitative and quantitative metrics commonly used to assess the
interaction. While focusing our review on the cognitive level (e.g.,
prediction, perception, motion planning, learning) and the physical level
(e.g., motion, grasping, grip release) of the handover, we briefly discuss also
the concepts of safety, social context, and ergonomics. We compare the
behaviours displayed during human-to-human handovers to the state of the art of
robotic assistants, and identify the major areas of improvement for robotic
assistants to reach performance comparable to human interactions. Finally, we
propose a minimal set of metrics that should be used in order to enable a fair
comparison among the approaches.
Comment: Review paper, 19 pages
Human-Aware Motion Planning for Safe Human-Robot Collaboration
With the rapid adoption of robotic systems in our daily lives, robots must operate in the presence of humans in ways that improve safety and productivity. Currently, in industrial settings, human safety is ensured through physically separating the robotic system from the human. However, this greatly decreases the set of shared human-robot tasks that can be accomplished and also reduces human-robot team fluency. In recent years, robots with improved sensing capabilities have been introduced and the feasibility of humans and robots co-existing in shared spaces has become a topic of interest.
This thesis proposes a human-aware motion planning approach building on RRT-Connect, dubbed Human-Aware RRT-Connect, that plans in the presence of humans. The planner considers a composite cost function that includes human separation distance and visibility costs to ensure the robot maintains a safe distance during motion while being as visible as possible to the human. A danger criterion cost considering two mutually dependent factors, human-robot center of mass distance and robot inertia, is also introduced into the cost formulation to ensure human safety during planning. A simulation study is conducted to demonstrate the planner's performance. For the simulation study, the proposed Human-Aware RRT-Connect planner is evaluated against RRT-Connect through a set of problem scenarios that vary in environment and task complexity. Several human-robot configurations are tested in a shared workspace involving a simulated Franka Emika Panda arm and a human model.
Through the problem scenarios, it is shown that the Human-Aware RRT-Connect planner, paired with the developed HRI costs, performs better than the baseline RRT-Connect planner with respect to a set of quantitative metrics. The paths generated by the Human-Aware RRT-Connect planner maintain larger separation distances from the human, are more visible, and are safer due to the minimization of the danger criterion. It is also shown that the proposed HRI cost formulation outperforms formulations from previous work when tested with the Human-Aware RRT-Connect planner.
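A composite HRI cost of the kind described combines separation distance, visibility, and a danger criterion into one scalar per robot configuration. A minimal sketch, where the specific functional forms and weights are illustrative assumptions rather than the thesis's exact formulation:

```python
# Hedged sketch of a composite HRI planning cost: separation, visibility,
# and a danger criterion (inertia vs. center-of-mass distance). Forms and
# weights below are assumptions for illustration, not the thesis's values.
import numpy as np

def separation_cost(d_human, d_safe=1.0):
    """Grows as the robot comes closer than d_safe to the human."""
    return max(0.0, (d_safe - d_human) / d_safe)

def visibility_cost(angle_from_gaze, fov=np.pi / 2):
    """Penalises configurations outside the human's field of view."""
    return min(1.0, abs(angle_from_gaze) / fov)

def danger_cost(d_com, inertia, d_min=0.1):
    """Mutually dependent factors: high inertia close to the human is worst."""
    return inertia / max(d_com, d_min)

def hri_cost(d_human, angle, d_com, inertia, w=(1.0, 0.5, 0.8)):
    """Weighted sum used to score configurations during tree extension."""
    return (w[0] * separation_cost(d_human)
            + w[1] * visibility_cost(angle)
            + w[2] * danger_cost(d_com, inertia))

# A near, poorly visible, high-inertia configuration should cost more
# than a distant, visible, low-inertia one.
c_risky = hri_cost(d_human=0.4, angle=1.2, d_com=0.4, inertia=2.0)
c_safe  = hri_cost(d_human=1.5, angle=0.2, d_com=1.5, inertia=0.5)
```

In a cost-aware RRT-Connect variant, such a scalar would bias tree extension toward low-cost configurations rather than act as a hard constraint.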
Integrating Vision and Physical Interaction for Discovery, Segmentation and Grasping of Unknown Objects
In this work, image-processing methods and the ability of humanoid robots to
physically interact with their environment are used in close interplay to
identify unknown objects, separate them from the background and from other
objects, and ultimately grasp them. In the course of this interactive
exploration, properties of the object such as its appearance and its shape
are also determined.
Nonverbal Communication During Human-Robot Object Handover. Improving Predictability of Humanoid Robots by Gaze and Gestures in Close Interaction
Meyer zu Borgsen S. Nonverbal Communication During Human-Robot Object Handover. Improving Predictability of Humanoid Robots by Gaze and Gestures in Close Interaction. Bielefeld: Universität Bielefeld; 2020.
This doctoral thesis investigates the influence of nonverbal communication on human-robot object handover. Handing objects to one another is an everyday activity in which two individuals cooperatively interact. Such close interactions incorporate a great deal of nonverbal communication in order to create alignment in space and time. Understanding these communication cues and transferring them to robots becomes more and more important as, for example, service robots are expected to interact closely with humans in the near future. Their tasks often include delivering and taking objects, so handover scenarios play an important role in human-robot interaction. Much work in this field of research focuses on the speed, accuracy, and predictability of the robot's movement during object handover. Still, robots need to be enabled to interact closely with naive users and not only experts. In this work I present how nonverbal communication can be implemented in robots to facilitate smooth handovers. I conducted a study on people with different levels of experience exchanging objects with a humanoid robot. It became clear that especially users with little experience of interacting with robots rely heavily on the communication cues they know from former interactions with humans. I added different gestures with the second arm, which is not directly involved in the transfer, to analyze their influence on synchronization, predictability, and human acceptance. The handover motion itself follows a distinctive trajectory whose purpose is not only to bring the object or hand to the exchange position but also to socially signal the intention to exchange an object. Another common type of nonverbal communication is gaze.
It allows guessing the focus of attention of an interaction partner and thus helps to predict the next action. In order to evaluate handover interaction performance between human and robot, I applied the developed concepts to the humanoid robot Meka M1. By adding the humanoid robot head named Floka Head to the system, I created the Floka humanoid and implemented gaze strategies that aim to increase predictability and user comfort. This thesis contributes to the field of human-robot object handover by presenting study outcomes and concepts along with an implementation of improved software modules, resulting in a fully functional object-handing humanoid robot, from perception and prediction capabilities to behaviors enhanced and improved by features of nonverbal communication.