
    Object Transfer Point Estimation for Prompt Human to Robot Handovers

    Handing over objects is the foundation of many human-robot interaction and collaboration tasks. In the scenario where a human is handing over an object to a robot, the human chooses where the object needs to be transferred. The robot needs to accurately predict this point of transfer to reach out proactively, instead of waiting for the final position to be presented. We first conduct a human-to-robot handover motion study to analyze the effect of user height, arm length, position, orientation and robot gaze on the object transfer point. Our study presents new observations on the effect of the robot's gaze on the point of object transfer. Next, we present an efficient method for predicting the Object Transfer Point (OTP), which synthesizes (1) an offline OTP calculated based on human preferences observed in the human-robot motion study with (2) a dynamic OTP predicted based on the observed human motion. Our proposed OTP predictor is implemented on a humanoid nursing robot and experimentally validated in human-robot handover tasks. Compared to using only static or dynamic OTP estimators, it has better accuracy in the earlier phase of the handover (up to 45% of the handover motion) and can render fluent handovers with a reach-to-grasp response time (about 3.1 seconds) close to a natural human receiver's response. In addition, the OTP prediction accuracy is maintained across the robot's visible workspace by utilizing a user-adaptive reference frame.
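
    The synthesis of an offline, preference-based OTP with a dynamic, motion-based OTP suggests a simple blending scheme. The sketch below is a minimal illustration under assumed names (blend_otp, time_to_go) and a constant-velocity hand-motion model, not the authors' implementation:

```python
import numpy as np

def blend_otp(static_otp, hand_pos, hand_vel, phase, time_to_go):
    """Blend an offline (preference-based) OTP with a dynamic estimate.

    static_otp : (3,) OTP from the offline human-preference model
    hand_pos   : (3,) current tracked hand position
    hand_vel   : (3,) current tracked hand velocity
    phase      : handover progress in [0, 1]
    time_to_go : estimated time remaining until object transfer (s)
    """
    # Dynamic estimate: constant-velocity extrapolation of the hand motion.
    dynamic_otp = hand_pos + hand_vel * time_to_go
    # Early in the motion the offline estimate dominates; later the dynamic one does.
    w = np.clip(phase, 0.0, 1.0)
    return (1.0 - w) * static_otp + w * dynamic_otp
```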

    Object Handovers: a Review for Robotics

    This article surveys the literature on human-robot object handovers. A handover is a collaborative joint action in which one agent, the giver, gives an object to another agent, the receiver. The physical exchange starts when the receiver first contacts the object held by the giver and ends when the giver fully releases the object to the receiver. However, important cognitive and physical processes begin before the physical exchange, including initiating implicit agreement with respect to the location and timing of the exchange. From this perspective, we structure our review into the two main phases delimited by the aforementioned events: 1) a pre-handover phase, and 2) the physical exchange. We focus our analysis on the two actors (giver and receiver) and report the state of the art of robotic givers (robot-to-human handovers) and robotic receivers (human-to-robot handovers). We report a comprehensive list of qualitative and quantitative metrics commonly used to assess the interaction. While focusing our review on the cognitive level (e.g., prediction, perception, motion planning, learning) and the physical level (e.g., motion, grasping, grip release) of the handover, we also briefly discuss the concepts of safety, social context, and ergonomics. We compare the behaviours displayed during human-to-human handovers to the state of the art of robotic assistants, and identify the major areas of improvement for robotic assistants to reach performance comparable to human interactions. Finally, we propose a minimal set of metrics that should be used in order to enable a fair comparison among the approaches.
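
    The review's definition of the physical exchange (from first receiver contact to full giver release) maps directly onto a simple timing metric. The helper below is a hypothetical sketch of that mapping; the timestamps and the function name are assumptions, and a real system would derive the events from force or tactile sensing:

```python
def exchange_interval(receiver_contact_times, giver_release_time):
    """Physical-exchange window as defined in the review: it starts when the
    receiver first contacts the object held by the giver and ends when the
    giver fully releases it. Returns (start, end, transfer duration)."""
    start = min(receiver_contact_times)   # first receiver contact
    end = giver_release_time              # giver's full release of the object
    if end < start:
        raise ValueError("release recorded before first receiver contact")
    return start, end, end - start

# Illustrative timestamps in seconds from the start of a trial.
print(exchange_interval([2.31, 2.35, 2.40], 2.78))  # start 2.31, end 2.78, duration ~0.47 s
```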

    Probabilistic movement primitives for coordination of multiple human–robot collaborative tasks

    This paper proposes an interaction learning method for collaborative and assistive robots based on movement primitives. The method allows for both action recognition and human–robot movement coordination. It uses imitation learning to construct a mixture model of human–robot interaction primitives. This probabilistic model allows the assistive trajectory of the robot to be inferred from human observations. The method is scalable in relation to the number of tasks and can learn nonlinear correlations between the trajectories that describe the human–robot interaction. We evaluated the method experimentally with a lightweight robot arm in a variety of assistive scenarios, including the coordinated handover of a bottle to a human, and the collaborative assembly of a toolbox. Potential applications of the method are personal caregiver robots, control of intelligent prosthetic devices, and robot coworkers in factories.
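
    Inferring the robot's assistive trajectory from a partial human observation amounts to conditioning a joint Gaussian over the primitive's weights. The sketch below shows only that conditioning step, under assumed names and shapes; the paper's mixture model would additionally pick the most likely interaction primitive (i.e., recognise the action) before conditioning:

```python
import numpy as np

def infer_robot_weights(mu, Sigma, Phi_h, y_h, obs_noise=1e-4):
    """Condition a joint human-robot primitive on a partial human observation.

    mu, Sigma : mean and covariance of the stacked (human + robot) weight vector,
                learned by imitation from demonstrations
    Phi_h     : basis-function matrix mapping weights to the observed human DoFs
    y_h       : observed human trajectory samples (flattened)

    Returns the conditioned mean and covariance of the weights, from which the
    robot's assistive trajectory can be generated.
    """
    # Standard Gaussian conditioning on the partial observation y_h.
    S = Phi_h @ Sigma @ Phi_h.T + obs_noise * np.eye(len(y_h))
    K = Sigma @ Phi_h.T @ np.linalg.inv(S)
    mu_post = mu + K @ (y_h - Phi_h @ mu)
    Sigma_post = Sigma - K @ Phi_h @ Sigma
    return mu_post, Sigma_post
```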

    Synergy-Based Human Grasp Representations and Semi-Autonomous Control of Prosthetic Hands

    Secure and stable grasping with humanoid robot hands poses a major challenge. This dissertation therefore deals with deriving grasping strategies for robot hands from observations of human grasping, focusing on the entire grasping process. This comprises, on the one hand, the hand and finger trajectories during the grasping motion and, on the other hand, the contact points and the force profile between hand and object from first contact up to a statically stable grasp. Nonlinear postural synergies and force synergies of human grasps are presented that allow the generation of human-like grasp postures and grasp forces. Furthermore, synergy primitives are developed as an adaptable representation of human grasping movements. The grasping strategies learned from humans are then applied to the control of robotic prosthetic hands: within a semi-autonomous control scheme, human-like grasping movements are proposed according to the situation and supervised by the prosthesis user.
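
    As a simplified illustration of postural synergies, the sketch below extracts a classic linear (PCA-based) synergy basis from recorded grasp postures. The dissertation works with nonlinear synergies, so this is only a linear stand-in with assumed names and shapes:

```python
import numpy as np

def postural_synergies(joint_angle_data, n_synergies=3):
    """Extract linear postural synergies from recorded human grasp postures.

    joint_angle_data : (n_grasps, n_joints) matrix of hand joint angles
    Returns the mean posture and the first n_synergies principal directions;
    a grasp posture is then approximated as mean + B @ coefficients.
    """
    mean = joint_angle_data.mean(axis=0)
    centered = joint_angle_data - mean
    # Singular vectors of the centered data give the principal directions.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    B = Vt[:n_synergies].T          # (n_joints, n_synergies) synergy basis
    return mean, B

def reconstruct(mean, B, coeffs):
    """Generate a hand posture from low-dimensional synergy coefficients."""
    return mean + B @ coeffs
```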

    Goal Set Inverse Optimal Control and Iterative Re-planning for Predicting Human Reaching Motions in Shared Workspaces

    To enable safe and efficient human-robot collaboration in shared workspaces, it is important for the robot to predict how a human will move when performing a task. While predicting human motion for tasks not known a priori is very challenging, we argue that single-arm reaching motions for known tasks in collaborative settings (which are especially relevant for manufacturing) are indeed predictable. Two hypotheses underlie our approach for predicting such motions: first, that the trajectory the human performs is optimal with respect to an unknown cost function, and second, that human adaptation to their partner's motion can be captured well through iterative re-planning with the above cost function. The key to our approach is thus to learn a cost function which "explains" the motion of the human. To do this, we gather example trajectories from pairs of participants performing a collaborative assembly task using motion capture. We then use Inverse Optimal Control to learn a cost function from these trajectories. Finally, we predict reaching motions from the human's current configuration to a task-space goal region by iteratively re-planning a trajectory using the learned cost function. Our planning algorithm is based on the trajectory optimizer STOMP; it plans for a 23-DoF human kinematic model and accounts for the presence of a moving collaborator and obstacles in the environment. Our results suggest that in most cases, our method outperforms baseline methods when predicting motions. We also show that our method outperforms baselines for predicting human motion when a human and a robot share the workspace.
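
    The prediction loop, a learned weighted-feature cost re-optimised from the human's current configuration toward the goal region, can be sketched as below. This is a toy stand-in with an assumed feature interface and a plain finite-difference descent in place of STOMP's stochastic optimisation, not the paper's 23-DoF planner:

```python
import numpy as np

def cost(traj, weights, features):
    """Weighted-feature cost, as learned by inverse optimal control:
    total cost = sum_k w_k * phi_k(trajectory). `features` is a list of
    callables returning scalar feature values (e.g., smoothness, distance
    to the moving partner); the feature set here is illustrative."""
    return sum(w * phi(traj) for w, phi in zip(weights, features))

def predict_reaching(start, goal_region_center, weights, features,
                     horizon=20, iters=50, step=0.05):
    """Iterative re-planning sketch: re-optimise a straight-line initial guess
    from the human's *current* configuration to the goal region under the
    learned cost; it would be re-run whenever the partner or obstacles move."""
    traj = np.linspace(start, goal_region_center, horizon)
    eps = 1e-3
    for _ in range(iters):
        grad = np.zeros_like(traj)
        base = cost(traj, weights, features)
        for i in range(1, horizon - 1):          # keep the endpoints fixed
            for d in range(traj.shape[1]):
                bumped = traj.copy()
                bumped[i, d] += eps
                grad[i, d] = (cost(bumped, weights, features) - base) / eps
        traj -= step * grad                      # finite-difference descent step
    return traj
```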

    Towards Seamless Mobility: An IEEE 802.21 Practical Approach

    In recent years, mobile devices such as cell phones, notebooks, ultra-mobile computers and videogame consoles have experienced an impressive evolution in terms of hardware and software capabilities. Features such as wideband Internet connectivity open a broad range of possibilities for developers, many of which involve applications requiring continuity of service when the user moves from one coverage area to another. Nowadays, mobile devices are equipped with one or more radio interfaces such as GSM, UMTS, WiMAX or Wi-Fi. Many of these technologies support transparent roaming within their own coverage areas, but they are not prepared to hand a service over between different technologies. To address this issue, the IEEE has developed a standard known as Media Independent Handover (MIH) Services, with the aim of easing seamless mobility between these technologies. The present work develops a system that enables a mobility service under the terms specified in this standard. It is a cross-layer solution based on elements from the link and network layers, providing a roaming mechanism that is transparent from the user's point of view. Two applications have been implemented in C/C++ under Linux: one designed to run on the mobile device and the other on the network access point. The mobile device is a notebook equipped with two Wi-Fi interfaces, which is not a common feature in commercial devices, allowing the application to transfer communications seamlessly. The network access points are computers equipped with a Wi-Fi interface and configured to provide wireless Internet access and mobility services. To test the operation, a test-bed has been implemented, consisting of a pair of access points connected through a network and placed in partially overlapping coverage areas, and a mobile device, all properly configured. The mobile detects the compatible networks and attaches to the one that provides the best conditions for the demanded service. When the service degrades beyond a certain level, the mobile transfers the communication to the other access point, which offers better conditions. Finally, to verify that the handover is performed correctly, the duration of the required actions has been measured, as well as the data lost or buffered in the meantime. The result is an MIH-like system that works correctly: the discovery and selection of a destination network is performed before the old connection degrades too much, providing seamless mobility. The measured latencies and packet losses are acceptable in terms of the MIH protocol, but improvements at the level of network protocols, which were outside the scope of this work, are left for future work.
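
    The handover decision described above (stay on the serving link until it degrades, then switch to a better candidate) can be sketched as a simple threshold-and-hysteresis rule. The snippet below is illustrative only; the thresholds, interface names and the function itself are assumptions and do not come from the standard or from this work (which is implemented in C/C++):

```python
def choose_interface(links, current, serve_threshold=-75, hysteresis=5):
    """Minimal handover decision in the spirit of IEEE 802.21: trigger a
    handover only when the serving link degrades below a threshold and a
    candidate is better by a hysteresis margin, so the transfer happens
    before the old connection becomes unusable. `links` maps interface
    names to RSSI readings in dBm."""
    serving_rssi = links[current]
    if serving_rssi >= serve_threshold:
        return current                      # serving link is still good enough
    best = max(links, key=links.get)
    if best != current and links[best] >= serving_rssi + hysteresis:
        return best                         # hand over to the better link
    return current

# Example: two Wi-Fi interfaces on the mobile node (illustrative readings).
print(choose_interface({"wlan0": -80, "wlan1": -62}, current="wlan0"))  # -> wlan1
```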

    Composite dynamic movement primitives based on neural networks for human–robot skill transfer

    In this paper, composite dynamic movement primitives (DMPs) based on radial basis function neural networks (RBFNNs) are investigated for robot skill learning from human demonstrations. The composite DMPs encode position and orientation manipulation skills simultaneously for human-to-robot skill transfer. Because the robot manipulator is expected to perform tasks in unstructured and uncertain environments, it needs the ability to adapt its behaviour to new situations. Since DMPs can adapt to uncertainties and perturbations and support spatial and temporal scaling, they have been successfully employed for various tasks, such as trajectory planning and obstacle avoidance. However, existing skill models mainly focus on position or orientation separately, whereas in practice position and orientation are commonly constrained simultaneously. Moreover, the generalisation of DMP-based skill models to dynamic tasks, e.g., reaching a moving target and obstacle avoidance, remains difficult. In this paper, we propose a composite DMP-based framework that represents position and orientation simultaneously for robot skill acquisition, and a neural network technique is used to train the skill model. The effectiveness of the proposed approach is validated in simulation and experiments.
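
    For reference, a minimal one-dimensional discrete DMP with a radial-basis-function forcing term can be rolled out as below. The gains and names are illustrative defaults; the paper's composite formulation additionally encodes orientation and trains the forcing term with an RBF neural network rather than fixed weights:

```python
import numpy as np

def dmp_rollout(y0, goal, weights, centers, widths, tau=1.0,
                alpha=25.0, beta=6.25, alpha_x=3.0, dt=0.01, steps=200):
    """Minimal 1-D discrete DMP rollout with an RBF forcing term:
        tau*dv = alpha*(beta*(g - y) - v) + f(x),   tau*dy = v,
        tau*dx = -alpha_x*x,
        f(x) = x*(g - y0) * sum_i w_i*psi_i(x) / sum_i psi_i(x).
    `weights`, `centers` and `widths` parameterise the RBF forcing term and
    would normally be fit from a demonstration."""
    y, v, x = y0, 0.0, 1.0
    traj = []
    for _ in range(steps):
        psi = np.exp(-widths * (x - centers) ** 2)          # RBF activations
        f = x * (goal - y0) * (psi @ weights) / (psi.sum() + 1e-10)
        dv = (alpha * (beta * (goal - y) - v) + f) / tau     # transformation system
        dy = v / tau
        dx = -alpha_x * x / tau                              # canonical system
        v, y, x = v + dv * dt, y + dy * dt, x + dx * dt
        traj.append(y)
    return np.array(traj)
```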