
    Flexible human-robot cooperation models for assisted shop-floor tasks

    The Industry 4.0 paradigm emphasizes the crucial benefits that collaborative robots, i.e., robots able to work alongside and together with humans, could bring to the whole production process. In this context, a yet unreached enabling technology is the design of flexible robots able to deal at all levels with humans' intrinsic variability, which is not only a necessary element for a comfortable working experience for the person but also a precious capability for efficiently dealing with unexpected events. In this paper, a sensing, representation, planning and control architecture for flexible human-robot cooperation, referred to as FlexHRC, is proposed. FlexHRC relies on wearable sensors for human action recognition, AND/OR graphs for the representation of and reasoning upon cooperation models, and a Task Priority framework to decouple action planning from robot motion planning and control. Comment: Submitted to Mechatronics (Elsevier).
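    The "Task Priority framework" named in the abstract is, in its classical robotics formulation, a prioritized inverse-kinematics scheme in which lower-priority motion tasks are resolved only in the null space of higher-priority ones. The sketch below is a minimal, generic illustration of that idea under this assumption; it is not FlexHRC's actual controller, and the example Jacobians and task names are hypothetical.

        import numpy as np

        def task_priority_step(J1, dx1, J2, dx2):
            """One resolution step of a two-level task hierarchy: the secondary
            task is projected into the null space of the primary task, so it can
            never disturb it (e.g. 'track the handover pose' > 'keep a
            comfortable posture')."""
            J1_pinv = np.linalg.pinv(J1)
            N1 = np.eye(J1.shape[1]) - J1_pinv @ J1          # null-space projector of task 1
            dq = J1_pinv @ dx1 + np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ J1_pinv @ dx1)
            return dq                                        # commanded joint velocities

        # Hypothetical 3-DOF example: a 2-D end-effector task plus a 1-D posture task.
        J1 = np.array([[1.0, 0.5, 0.2],
                       [0.0, 1.0, 0.5]])
        J2 = np.array([[0.0, 0.0, 1.0]])
        print(task_priority_step(J1, np.array([0.1, 0.0]), J2, np.array([0.05])))

    Because the symbolic action planner only needs to hand such a controller a desired Cartesian target, action planning stays decoupled from how the joint motion is actually resolved, which is the decoupling the abstract refers to.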

    Coordination with Humans via Strategy Matching

    Human and robot partners increasingly need to work together to perform tasks as a team. Robots designed for such collaboration must reason about how their task-completion strategies interplay with the behavior and skills of their human team members as they coordinate on achieving joint goals. Our goal in this work is to develop a computational framework for robot adaptation to human partners in human-robot team collaborations. We first present an algorithm for autonomously recognizing available task-completion strategies by observing human-human teams performing a collaborative task. By transforming team actions into low-dimensional representations using hidden Markov models, we can identify strategies without prior knowledge. Robot policies are learned on each of the identified strategies to construct a Mixture-of-Experts model that adapts to the task strategies of unseen human partners. We evaluate our model on a collaborative cooking task using an Overcooked simulator. Results of an online user study with 125 participants demonstrate that our framework improves the task performance and collaborative fluency of human-agent teams, as compared to state-of-the-art reinforcement learning methods. Comment: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022).
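    As one concrete reading of the pipeline described above, the sketch below embeds each observed team trajectory via an HMM's average posterior state occupancy, clusters the embeddings into strategies, and gates to a per-strategy expert at test time. It assumes team actions are already encoded as per-timestep feature vectors; the function names, hyperparameters, and the use of hmmlearn/scikit-learn are illustrative assumptions, not details taken from the paper.

        import numpy as np
        from hmmlearn.hmm import GaussianHMM      # pip install hmmlearn
        from sklearn.cluster import KMeans

        def embed(hmm, seq):
            """Low-dimensional embedding of one trajectory: mean posterior
            occupancy of the HMM's hidden states over the sequence."""
            return hmm.predict_proba(seq).mean(axis=0)

        def discover_strategies(demos, n_states=4, n_strategies=2):
            """demos: list of (T_i, n_features) arrays of human-human team actions."""
            X, lengths = np.concatenate(demos), [len(d) for d in demos]
            hmm = GaussianHMM(n_components=n_states, covariance_type="diag",
                              n_iter=50, random_state=0).fit(X, lengths)
            embeddings = np.stack([embed(hmm, d) for d in demos])
            clusters = KMeans(n_clusters=n_strategies, n_init=10,
                              random_state=0).fit(embeddings)
            return hmm, clusters   # one expert robot policy is then trained per cluster

        def route_to_expert(hmm, clusters, partner_so_far):
            """Mixture-of-Experts gating: pick the expert whose strategy cluster
            best matches the unseen partner's behaviour observed so far."""
            return int(clusters.predict(embed(hmm, partner_so_far)[None, :])[0])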

    When Humans Aren't Optimal: Robots that Collaborate with Risk-Aware Humans

    In order to collaborate safely and efficiently, robots need to anticipate how their human partners will behave. Some of today's robots model humans as if they were also robots, and assume users are always optimal. Other robots account for human limitations, and relax this assumption so that the human is noisily rational. Both of these models make sense when the human receives deterministic rewards: i.e., gaining either $100 or $130 with certainty. But in real world scenarios, rewards are rarely deterministic. Instead, we must make choices subject to risk and uncertainty--and in these settings, humans exhibit a cognitive bias towards suboptimal behavior. For example, when deciding between gaining $100 with certainty or $130 only 80% of the time, people tend to make the risk-averse choice--even though it leads to a lower expected gain! In this paper, we adopt a well-known Risk-Aware human model from behavioral economics called Cumulative Prospect Theory and enable robots to leverage this model during human-robot interaction (HRI). In our user studies, we offer supporting evidence that the Risk-Aware model more accurately predicts suboptimal human behavior. We find that this increased modeling accuracy results in safer and more efficient human-robot collaboration. Overall, we extend existing rational human models so that collaborative robots can anticipate and plan around suboptimal human behavior during HRI. Comment: ACM/IEEE International Conference on Human-Robot Interaction.
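    To make the example concrete, here is a minimal worked sketch of the gain-side Cumulative Prospect Theory value and probability-weighting functions, using the commonly cited Tversky-Kahneman (1992) parameters (the paper may fit different values to its user data), applied to the $100-versus-$130 choice above.

        # Gain-side CPT with standard Tversky-Kahneman (1992) parameters.
        ALPHA, GAMMA = 0.88, 0.61          # value curvature, probability weighting

        def v(x):                          # subjective value of a gain
            return x ** ALPHA

        def w(p):                          # weighted (distorted) probability
            return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

        certain = v(100)                   # $100 for sure             -> ~57.5
        gamble  = w(0.8) * v(130)          # $130 with probability 0.8 -> ~44.0
        print(certain, gamble, 0.8 * 130)  # risk-neutral expected value is 104 > 100

    Even though the gamble has the higher expected value, the CPT score of the sure $100 is larger, reproducing the risk-averse choice the abstract describes and giving the robot a quantitative handle on that bias.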

    A Hierarchical Architecture for Flexible Human-Robot Collaboration

    This thesis is devoted to the design of a software architecture for Human-Robot Collaboration (HRC), to enhance robots' abilities to work alongside humans. We propose FlexHRC, a hierarchical and flexible human-robot cooperation architecture specifically designed to provide collaborative robots with an extended degree of autonomy when supporting human operators in tasks with high variability. Along with FlexHRC, we have introduced novel techniques for three interleaved levels, namely perception, representation, and action, each one aimed at addressing specific traits of human-robot cooperation tasks.
    The Industry 4.0 paradigm emphasizes the crucial benefits that collaborative robots could bring to the whole production process. In this context, a yet unreached enabling technology is the design of robots able to deal at all levels with humans' intrinsic variability, which is not only a necessary element of a comfortable working experience for humans but also a precious capability for efficiently dealing with unexpected events. Moreover, flexible assembly of semi-finished products is one of the expected features of next-generation shop-floor lines. Currently, such flexibility is placed on the shoulders of human operators, who are responsible for handling product variability and are therefore subject to potentially high stress levels and cognitive load when dealing with complex operations. At the same time, operations on the shop floor are still very structured and well defined. Collaborative robots have been designed to allow a transition of this burden from human operators to robots that are flexible enough to support them in high-variability tasks as they unfold.
    As mentioned before, the FlexHRC architecture encompasses three levels: perception, action, and representation. The perception level relies on wearable sensors for human action recognition and on point cloud data for perceiving objects in the scene. The action level comprises four components: a robot execution manager that decouples action planning from robot motion planning and maps symbolic actions to the robot controller command interface, a Task Priority framework to control the robot, a differential equation solver to simulate and evaluate the robot behaviour on the fly, and a randomized method for robot path planning. The representation level relies on AND/OR graphs for representing and reasoning upon human-robot cooperation models online, a task manager to plan, adapt, and make decisions about robot behaviours, and a knowledge base to store cooperation and workspace information.
    We evaluated the FlexHRC functionalities against the desired application objectives. This evaluation is accompanied by several experiments, namely a collaborative screwing task, coordinated transportation of objects in a cluttered environment, a collaborative table assembly task, and object positioning tasks.
    The main contributions of this work are: (i) the design and implementation of FlexHRC, which meets the functional requirements of the shop-floor assembly application, such as task- and team-level flexibility, scalability, adaptability, and safety, to name just a few; (ii) the development of the task representation, which integrates a hierarchical AND/OR graph whose online behaviour is formally specified using First Order Logic; (iii) an in-the-loop, simulation-based decision-making process for the operations of collaborative robots coping with the variability of human operator actions; (iv) robot adaptation to the human's on-the-fly decisions and actions via human action recognition; and (v) robot behaviour that is predictable to the human user, thanks to the task-priority-based control framework, the introduced path planner, and natural and intuitive communication between the robot and the human.
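    To make the AND/OR graph representation concrete, the sketch below encodes a tiny cooperation model as AND hyper-arcs between state nodes and repeatedly executes the cheapest feasible arc; alternative arcs into the same node act as OR branches (e.g. the human or the robot fastening the same part). All node names, actions, and costs are hypothetical and the selection rule is deliberately simplistic; the thesis's hierarchical graphs and First Order Logic specification are richer than this.

        from dataclasses import dataclass

        @dataclass
        class Node:
            """A state of the cooperation, e.g. 'part fastened'."""
            name: str
            solved: bool = False

        @dataclass
        class HyperArc:
            """AND arc: the parent becomes reachable once *all* children are solved."""
            parent: Node
            children: list
            action: str            # symbolic action assigned to the human or the robot
            cost: float = 1.0

        def feasible(arcs):
            """Arcs whose preconditions hold but whose effect is still pending."""
            return [a for a in arcs
                    if all(c.solved for c in a.children) and not a.parent.solved]

        start    = Node("start", solved=True)
        in_place = Node("part_in_place")
        fastened = Node("part_fastened")
        arcs = [
            HyperArc(in_place, [start],    action="human_places_part"),
            HyperArc(fastened, [in_place], action="robot_screws_part", cost=2.0),
            HyperArc(fastened, [in_place], action="human_snaps_part",  cost=3.0),  # OR branch
        ]

        while (pending := feasible(arcs)):
            best = min(pending, key=lambda a: a.cost)   # follow the cheapest branch
            print("execute:", best.action)
            best.parent.solved = True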

    Towards the Safety of Human-in-the-Loop Robotics: Challenges and Opportunities for Safety Assurance of Robotic Co-Workers

    The success of the human-robot co-worker team in a flexible manufacturing environment where robots learn from demonstration heavily relies on the correct and safe operation of the robot. How this can be achieved is a challenge that requires addressing both technical and human-centric research questions. In this paper we discuss the state of the art in safety assurance, existing as well as emerging standards in this area, and the need for new approaches to safety assurance in the context of learning machines. We then focus on robotic learning from demonstration and the challenges these techniques pose to safety assurance, and indicate opportunities to integrate safety considerations into algorithms "by design". Finally, from a human-centric perspective, we stipulate that, to achieve high levels of safety and ultimately trust, the robotic co-worker must meet the innate expectations of the humans it works with. It is our aim to stimulate a discussion focused on the safety aspects of human-in-the-loop robotics, and to foster multidisciplinary collaboration to address the research challenges identified.

    Challenges in Collaborative HRI for Remote Robot Teams

    Collaboration between human supervisors and remote teams of robots is highly challenging, particularly in high-stakes, distant, hazardous locations, such as off-shore energy platforms. In order for these teams of robots to truly be beneficial, they need to be trusted to operate autonomously, performing tasks such as inspection and emergency response, thus reducing the number of personnel placed in harm's way. As remote robots are generally trusted less than robots in close proximity, we present a solution to instil trust in the operator through a `mediator robot' that can exhibit social skills, alongside sophisticated visualisation techniques. In this position paper, we present general challenges and then take a closer look at one challenge in particular, discussing an initial study which investigates the relationship between the level of control the supervisor hands over to the mediator robot and how this affects their trust. We show that the supervisor is more likely to have higher trust overall if their initial experience involves handing over control of the emergency situation to the robotic assistant. We discuss this result here, as well as other challenges and interaction techniques for human-robot collaboration. Comment: 9 pages. Peer-reviewed position paper accepted at the CHI 2019 Workshop: The Challenges of Working on Social Robots that Collaborate with People (SIRCHI2019), ACM CHI Conference on Human Factors in Computing Systems, May 2019, Glasgow, UK.