
    User Study Exploring the Role of Explanation of Failures by Robots in Human Robot Collaboration Tasks

    Despite great advances in what robots can do, they still experience failures in human-robot collaborative tasks due to the high randomness of unstructured human environments. Moreover, a human's unfamiliarity with a robot and its abilities can cause such failures to repeat. This makes the ability to explain failures very important for a robot. In this work, we describe a user study that incorporated different robotic failures in a human-robot collaboration (HRC) task aimed at filling a shelf. We included different types of failures and repeated occurrences of such failures in a prolonged interaction between humans and robots. The failure resolution involved human intervention in the form of human-robot bidirectional handovers. Through such studies, we aim to test different explanation types and explanation progression in the interaction and record human responses.
    Comment: Contributed to "The Imperfectly Relatable Robot: An interdisciplinary workshop on the role of failure in HRI", ACM/IEEE International Conference on Human-Robot Interaction (HRI 2023). Video can be found at: https://sites.google.com/view/hri-failure-ws/teaser-video

    Robot to Human Object Handover using Vision and Joint Torque Sensor Modalities

    We present a robot-to-human object handover algorithm and implement it on a 7-DOF arm equipped with a 3-finger mechanical hand. The system performs a fully autonomous and robust object handover to a human receiver in real time. Our algorithm relies on two complementary sensor modalities: joint torque sensors on the arm and an eye-in-hand RGB-D camera for sensor feedback. Our approach is entirely implicit, i.e., there is no explicit communication between the robot and the human receiver. Information obtained via these sensor modalities is used as input to two dedicated deep neural networks. While the torque-sensor network detects the human receiver's "intention", such as pull, hold, or bump, the vision network detects whether the receiver's fingers have wrapped around the object. The networks' outputs are then fused to decide whether or not to release the object. Despite substantive challenges in sensor feedback synchronization and in object and human hand detection, our system achieves robust robot-to-human handover with 98% accuracy in our preliminary real experiments with human receivers.
    Comment: Submitted to the RITA 2022 conference; awaiting the result.
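    As a rough illustration of the decision-fusion step this abstract describes, the sketch below combines a torque-network intent estimate with a vision-network grasp confidence to decide whether to release the object. The label set, thresholds, and AND-style fusion rule are assumptions for illustration, not the paper's actual policy.

```python
import numpy as np

# Hypothetical class labels for the torque-sensor network mentioned in the abstract.
INTENT_LABELS = ["pull", "hold", "bump"]

def fuse_and_decide(intent_probs, grasp_confidence,
                    intent_threshold=0.8, grasp_threshold=0.9):
    """Fuse the two network outputs and decide whether to release the object.

    intent_probs: softmax scores over INTENT_LABELS from the torque network.
    grasp_confidence: vision-network probability that the receiver's fingers
    are wrapped around the object. Thresholds and the AND-style rule are
    illustrative assumptions.
    """
    intent = INTENT_LABELS[int(np.argmax(intent_probs))]
    intent_is_pull = intent == "pull" and float(np.max(intent_probs)) >= intent_threshold
    hand_on_object = grasp_confidence >= grasp_threshold
    return intent_is_pull and hand_on_object

# Example: a confident "pull" plus a confident visual grasp triggers release.
print(fuse_and_decide(np.array([0.92, 0.05, 0.03]), grasp_confidence=0.95))  # True
print(fuse_and_decide(np.array([0.30, 0.60, 0.10]), grasp_confidence=0.95))  # False
```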

    Vision-based deep execution monitoring

    Execution monitoring of high-level robot actions can be effectively improved by visually monitoring the state of the world in terms of the preconditions and postconditions that hold before and after the execution of an action. Furthermore, a policy for deciding where to look, either to verify the relations that specify the pre- and postconditions or to refocus in case of a failure, can tremendously improve robot execution in an uncharted environment. Thanks to the impressive results of deep learning, it is now possible to rely strongly on visual perception and assume that the environment is observable. In this work we present visual execution monitoring for a robot executing tasks in an uncharted lab environment. The execution monitor interacts with the environment via a visual stream that uses two DCNNs to recognize the objects the robot has to deal with and manipulate, and a non-parametric Bayes estimator to discover the relations among them from the DCNN features. To recover from lack of focus and from failures due to missed objects, we resort to visual search policies learned via deep reinforcement learning.
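    The sketch below illustrates the general precondition/postcondition monitoring pattern this abstract describes. The relation vocabulary, the toy world model, and the `detect_relations` stand-in for the DCNN-plus-Bayes pipeline are all assumptions made for the example.

```python
# Minimal sketch of a visual execution monitor: before executing an action we
# verify its preconditions against visually detected relations, and afterwards
# we verify its postconditions.

ACTIONS = {
    "pick(cup)": {
        "pre": {("cup", "on", "table"), ("gripper", "empty")},
        "post": {("cup", "in", "gripper")},
    },
}

WORLD = {("cup", "on", "table"), ("gripper", "empty")}  # toy ground-truth state

def detect_relations():
    """Stand-in for the two DCNNs plus non-parametric Bayes relation estimator."""
    return set(WORLD)

def execute(action):
    # Toy effect model so the example runs end to end.
    WORLD.difference_update(ACTIONS[action]["pre"])
    WORLD.update(ACTIONS[action]["post"])

def monitored_execution(action):
    spec = ACTIONS[action]
    if not spec["pre"] <= detect_relations():
        return "refocus"      # precondition not observed: trigger visual search
    execute(action)
    if not spec["post"] <= detect_relations():
        return "recover"      # postcondition failed: re-plan or retry
    return "ok"

print(monitored_execution("pick(cup)"))   # -> ok
```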

    Handover Control for Human-Robot and Robot-Robot Collaboration

    Modern scenarios in robotics involve human-robot collaboration or robot-robot cooperation in unstructured environments. In human-robot collaboration, the objective is to relieve humans of repetitive and wearing tasks. This is the case in a retail store, where the robot could help a clerk refill a shelf or help an elderly customer pick an item from an uncomfortable location. In robot-robot cooperation, automated logistics scenarios, such as warehouses, distribution centers and supermarkets, often require repetitive and sequential pick-and-place tasks that can be executed more efficiently by exchanging objects between robots, provided that they are endowed with object handover ability. The use of a robot for passing objects is justified only if the handover operation is sufficiently intuitive for the involved humans, fluid and natural, with a speed comparable to that typical of a human-human object exchange. The approach proposed in this paper relies strongly on visual and haptic perception combined with suitable algorithms for controlling both robot motion, to allow the robot to adapt to human behavior, and grip force, to ensure a safe handover. The control strategy combines model-based reactive control methods with an event-driven state machine encoding human-inspired behavior during a handover task, which involves both linear and torsional loads, without requiring explicit learning from human demonstration. Experiments in a supermarket-like environment, with humans and robots communicating only through haptic cues, demonstrate the relevance of force/tactile feedback in accomplishing handover operations in a collaborative task.
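    A minimal sketch of an event-driven handover state machine of the kind described above follows. The state names, events, and the toy grip-force policy driven by the measured load are illustrative assumptions; the actual controller combines such a state machine with model-based reactive motion and grip-force control.

```python
from enum import Enum, auto

class State(Enum):
    HOLD = auto()        # robot holds the object firmly
    TRANSFER = auto()    # load is shared with the human, grip is modulated
    RELEASED = auto()    # human has taken the object

# Event-driven transitions; unknown (state, event) pairs leave the state unchanged.
TRANSITIONS = {
    (State.HOLD, "human_pull_detected"): State.TRANSFER,
    (State.TRANSFER, "load_fully_transferred"): State.RELEASED,
    (State.TRANSFER, "human_released"): State.HOLD,   # abort: human let go
}

def grip_force(state, measured_load, nominal_force=5.0):
    """Toy grip-force policy: relax the grip in proportion to the load the
    human is already supporting (haptic cue), never below zero."""
    if state is State.TRANSFER:
        return max(0.0, nominal_force - measured_load)
    return nominal_force if state is State.HOLD else 0.0

state = State.HOLD
for event, load in [("human_pull_detected", 1.0), ("load_fully_transferred", 5.0)]:
    state = TRANSITIONS.get((state, event), state)
    print(state.name, grip_force(state, load))
```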

    Visual search and recognition for robot task execution and monitoring

    Visual search for relevant targets in the environment is a crucial robot skill. We propose a preliminary framework for the execution monitor of a robot task, handling the robot's disposition to visually search the environment for targets involved in the task. Visual search is also relevant for recovering from failures. The framework exploits deep reinforcement learning to acquire a "common sense" scene structure and takes advantage of a deep convolutional network to detect objects and the relevant relations holding between them. The framework builds on these methods to introduce vision-based execution monitoring, which uses classical planning as a backbone for task execution. Experiments show that with the proposed vision-based execution monitor the robot can complete simple tasks and autonomously recover from failures.
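    The recovery step can be pictured as below: when an object required by the plan is not detected, a search policy proposes where to look next until the object is found or a budget runs out. The candidate viewpoints, the scene prior, and the stochastic detector stand in for the deep reinforcement learning policy and the object detector and are purely illustrative.

```python
import random

random.seed(1)  # deterministic demo output

VIEWPOINTS = ["table_left", "table_right", "shelf_low", "shelf_high"]

def policy_score(viewpoint, target):
    """Stand-in for the learned 'common sense' scene prior."""
    prior = {"cup": {"table_left": 0.6, "table_right": 0.3,
                     "shelf_low": 0.08, "shelf_high": 0.02}}
    return prior.get(target, {}).get(viewpoint, 0.0)

def detect(target, viewpoint):
    """Stand-in for the object detector; detection here is stochastic."""
    return random.random() < policy_score(viewpoint, target)

def visual_search(target, budget=4):
    # Look at the most promising viewpoints first, within the budget.
    for viewpoint in sorted(VIEWPOINTS, key=lambda v: -policy_score(v, target))[:budget]:
        if detect(target, viewpoint):
            return viewpoint
    return None

print(visual_search("cup"))   # -> table_left
```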

    Object-Independent Human-to-Robot Handovers using Real Time Robotic Vision

    We present an approach for safe and object-independent human-to-robot handovers using real-time robotic vision and manipulation. We aim for general applicability with a generic object detector, a fast grasp selection algorithm, and a single gripper-mounted RGB-D camera, hence not relying on external sensors. The robot is controlled via visual servoing towards the object of interest. Putting a high emphasis on safety, we use two perception modules: human body part segmentation and hand/finger segmentation. Pixels that are deemed to belong to the human are filtered out from candidate grasp poses, ensuring that the robot safely picks the object without colliding with the human partner. The grasp selection and perception modules run concurrently in real time, which allows monitoring of the progress. In experiments with 13 objects, the robot was able to successfully take the object from the human in 81.9% of the trials.
    Comment: IEEE Robotics and Automation Letters (RA-L), preprint version, accepted September 2020. The code and videos can be found at https://patrosat.github.io/h2r_handovers
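    The safety filter this abstract describes can be sketched as discarding grasp candidates that overlap the human body-part and hand/finger segmentation masks. The grasp representation (pixel coordinates plus a quality score), the mask layout, and the safety margin below are assumptions made for the example.

```python
import numpy as np

def filter_grasps(grasps, human_mask, margin=5):
    """Keep only grasp candidates far enough from human pixels.

    grasps: list of (row, col, score); human_mask: HxW boolean array where
    True marks pixels segmented as belonging to the human.
    Returns the best safe grasp, or None if all candidates are unsafe.
    """
    h, w = human_mask.shape
    safe = []
    for r, c, score in grasps:
        r0, r1 = max(0, r - margin), min(h, r + margin + 1)
        c0, c1 = max(0, c - margin), min(w, c + margin + 1)
        if not human_mask[r0:r1, c0:c1].any():   # no human pixels nearby
            safe.append((r, c, score))
    return max(safe, key=lambda g: g[2], default=None)

mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 40:60] = True                        # pretend the hand is here
candidates = [(50, 50, 0.9), (20, 80, 0.7), (10, 10, 0.6)]
print(filter_grasps(candidates, mask))           # -> (20, 80, 0.7)
```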

    Dynamic Grasping of Unknown Objects with a Multi-Fingered Hand

    An important prerequisite for autonomous robots is their ability to reliably grasp a wide variety of objects. Most state-of-the-art systems employ specialized or simple end-effectors, such as two-jaw grippers, which severely limit the range of objects they can manipulate. Additionally, they conventionally require a structured and fully predictable environment, while the vast majority of our world is complex, unstructured, and dynamic. This paper presents an implementation that overcomes both issues. Firstly, the integration of a five-finger hand enhances the variety of possible grasps and manipulable objects. This kinematically complex end-effector is controlled by a deep-learning-based generative grasping network. The required virtual model of the unknown target object is iteratively completed by processing visual sensor data. Secondly, this visual feedback is employed to realize closed-loop servo control which compensates for external disturbances. Our experiments on real hardware confirm the system's capability to reliably grasp unknown dynamic target objects without a priori knowledge of their trajectories. To the best of our knowledge, this is the first method to achieve dynamic multi-fingered grasping of unknown objects. A video of the experiments is available at https://youtu.be/Ut28yM1gnvI.
    Comment: ICRA202
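    A minimal sketch of the closed-loop visual servoing idea is shown below: at each control step the end-effector is commanded toward the latest visual estimate of the moving object's pose, so object motion and disturbances are compensated continuously. The proportional gain, time step, and convergence threshold are illustrative values, not the paper's controller.

```python
import numpy as np

def servo_to_target(ee_pos, get_target_pos, gain=4.0, dt=0.05, tol=0.01, steps=200):
    """Drive the end-effector toward a (possibly moving) target pose estimate."""
    for _ in range(steps):
        target = get_target_pos()              # latest visual estimate
        error = target - ee_pos
        if np.linalg.norm(error) < tol:
            return ee_pos, True                # close enough: trigger the grasp
        ee_pos = ee_pos + gain * error * dt    # simple proportional command
    return ee_pos, False

state = {"x": 0.0}
def moving_target():
    state["x"] += 0.001                        # object drifts slowly along x
    return np.array([0.5 + state["x"], 0.2, 0.1])

pos, reached = servo_to_target(np.zeros(3), moving_target)
print(reached, np.round(pos, 3))
```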

    Affordance-Aware Handovers With Human Arm Mobility Constraints

    Reasoning about object handover configurations allows an assistive agent to estimate the appropriateness of a handover for receivers with different arm mobility capacities. While there are existing approaches for estimating the effectiveness of handovers, their findings are limited to users without arm mobility impairments and to specific objects. Therefore, current state-of-the-art approaches are unable to hand over novel objects to receivers with different arm mobility capacities. We propose a method that generalises handover behaviours to previously unseen objects, subject to the constraints of a user's arm mobility level and the task context. We propose a heuristic-guided hierarchically optimised cost whose optimisation adapts object configurations for receivers with low arm mobility. This also ensures that the robot's grasps consider the context of the user's upcoming task, i.e., the usage of the object. To understand preferences over handover configurations, we report on the findings of an online study, wherein we presented different handover methods, including ours, to 259 users with different levels of arm mobility. We find that people's preferences over handover methods are correlated with their arm mobility capacities. We encapsulate these preferences in a statistical relational learning (SRL) model that is able to reason about the most suitable handover configuration given a receiver's arm mobility and upcoming task. Using our SRL model, we obtained an average handover accuracy of 90.8% when generalising handovers to novel objects.
    Comment: Accepted for RA-L 202
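    As a simplified picture of reasoning over handover configurations under arm-mobility constraints, the sketch below scores candidate configurations with a weighted cost and picks the cheapest one. The feature names, weights, and candidate set are assumptions; the paper instead uses a heuristic-guided hierarchically optimised cost together with a statistical relational model.

```python
# Illustrative candidate handover configurations with hand-picked features.
CANDIDATES = [
    {"name": "high_side", "reach_height": 1.4, "reach_distance": 0.6, "task_aligned": 0.95},
    {"name": "low_front", "reach_height": 0.9, "reach_distance": 0.3, "task_aligned": 0.50},
]

def handover_cost(cfg, arm_mobility):
    """Lower is better. arm_mobility in [0, 1]; low values penalise
    configurations that demand large reach height or distance."""
    effort = cfg["reach_height"] + cfg["reach_distance"]
    return (1.0 - arm_mobility) * effort + (1.0 - cfg["task_aligned"])

def best_configuration(arm_mobility):
    return min(CANDIDATES, key=lambda c: handover_cost(c, arm_mobility))

print(best_configuration(arm_mobility=0.2)["name"])   # low-mobility receiver -> low_front
print(best_configuration(arm_mobility=1.0)["name"])   # unimpaired receiver   -> high_side
```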

    Trust-Based Control of Robotic Manipulators in Collaborative Assembly in Manufacturing

    Human-robot interaction (HRI) is widely addressed in the field of automation and manufacturing. Most of the HRI literature in manufacturing has explored physical human-robot interaction (pHRI) and invested in finding means for ensuring safety and optimized effort sharing within a team of humans and robots. The recent emergence of safe, lightweight, and human-friendly robots has opened a new realm for human-robot collaboration (HRC) in collaborative manufacturing. For such robots with the new HRI functionalities to interact closely and effectively with a human coworker, new human-centered controllers that integrate both physical and social interaction are needed. Social human-robot interaction (sHRI) has been demonstrated in robots with affective abilities in education, social services, health care, and entertainment; nonetheless, sHRI should not be limited to those areas. In particular, we focus on human trust in the robot as a basis of social interaction. Human trust in the robot and the robot's anthropomorphic features have a high impact on sHRI. Trust is one of the key factors in sHRI and a prerequisite for effective HRC. Trust characterizes a human's reliance on and tendency to use robots. Factors within a robotic system (e.g., performance, reliability, or attributes), the task, and the surrounding environment can all impact trust dynamically. Over-reliance or under-reliance might occur due to improper trust, which results in poor team collaboration and hence higher task load and lower overall task performance. The goal of this dissertation is to develop intelligent control algorithms for manipulator robots that integrate both physical and social HRI factors in collaborative manufacturing. First, a model of the evolution of human trust in a collaborative robot is identified and verified through a series of human-in-the-loop experiments. This model serves as a computational trust model estimating an objective criterion for the evolution of human trust in the robot rather than estimating an individual's actual level of trust. Second, an HRI-based framework is developed for controlling the speed of a robot performing pick-and-place tasks. The impact of considering different levels of interaction in the robot controller on overall efficiency and on HRI criteria such as perceived human workload, trust, and robot usability is studied using a series of human-in-the-loop experiments. Third, an HRI-based framework is developed for planning and controlling the robot motion when performing handover tasks to the human. Again, a series of human-in-the-loop experimental studies are conducted to evaluate the impact of implementing these frameworks on overall efficiency and HRI criteria such as human workload, trust, and robot usability. Finally, another framework is proposed for the cooperative manipulation of a common object by a human-robot team. This framework proposes a trust-based role allocation strategy for adjusting the proactive behavior of the robot performing a cooperative manipulation task in HRC scenarios. For the mentioned frameworks, the results of the experiments show that integrating HRI into the robot controller leads to a lower human workload while maintaining a threshold level of human trust in the robot and without degrading robot usability or efficiency.
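    A toy version of the trust-based adaptation this dissertation abstract describes is sketched below: a scalar trust estimate rises with good robot performance, drops after faults, and scales the commanded robot speed. The linear update rule and coefficients are illustrative assumptions, not the dissertation's identified trust model.

```python
def update_trust(trust, performance, fault, a=0.15, b=0.4):
    """One-step trust update: performance in [0, 1], fault is True/False.
    Trust moves toward recent performance and is penalised after a fault."""
    trust = trust + a * (performance - trust) - (b if fault else 0.0)
    return min(1.0, max(0.0, trust))

def commanded_speed(trust, v_min=0.1, v_max=0.8):
    """Scale robot speed between a safe minimum and a nominal maximum."""
    return v_min + trust * (v_max - v_min)

trust = 0.5
history = [(0.9, False), (0.9, False), (0.2, True), (0.9, False)]
for step, (perf, fault) in enumerate(history):
    trust = update_trust(trust, perf, fault)
    print(f"step {step}: trust={trust:.2f}, speed={commanded_speed(trust):.2f} m/s")
```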