
    Teams organization and performance analysis in autonomous human-robot teams

    This paper proposes a theory of human control of robot teams based on considering how people coordinate across different task allocations. Our current work focuses on domains such as foraging, in which robots perform largely independent tasks. The present study addresses the interaction between automation and the organization of human teams in controlling large robot teams performing an Urban Search and Rescue (USAR) task. We identify three subtasks: perceptual search (visual search for victims), assistance (teleoperation to assist robots), and navigation (path planning and coordination). For the studies reported here, navigation was selected for automation because it involves weak dependencies among robots, making it more complex, and because an earlier experiment showed it to be the most difficult subtask. This paper reports an extended analysis of two conditions from a larger four-condition study. In these two "shared pool" conditions, twenty-four simulated robots were controlled by teams of two participants. Sixty paid participants (30 teams) were recruited to perform the shared-pool tasks, in which participants shared control of the 24 UGVs and viewed the same screens. Groups in the manual control condition issued waypoints to navigate their robots; in the autonomy condition, robots generated their own waypoints using distributed path planning. We identify three self-organizing team strategies in the shared pool conditions: joint control, in which operators share full authority over the robots; mixed control, in which one operator takes primary control while the other acts as an assistant; and split control, in which operators divide the robots, each controlling a sub-team. Automating path planning improved system performance. Effects of team organization favored operator teams who shared authority over the pool of robots. © 2010 ACM
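
    The "autonomy condition" above relies on robots generating their own waypoints through distributed path planning. The abstract does not specify the planner used, so the following is only a minimal sketch of the idea under an assumed greedy rule: each UGV claims the nearest unvisited search location as its next waypoint (the target list and tie-breaking below are illustrative, not the study's planner).

```python
# Minimal sketch of per-robot waypoint generation (assumed greedy rule; the
# actual distributed planner used in the study is not given in the abstract).
import math

def plan_waypoints(robot_positions, unvisited_targets):
    """Assign each robot the nearest unvisited target as its next waypoint."""
    remaining = list(unvisited_targets)
    waypoints = {}
    for robot_id, (rx, ry) in robot_positions.items():
        if not remaining:
            break
        # Greedy choice: closest remaining target to this robot.
        tx, ty = min(remaining, key=lambda t: math.dist((rx, ry), t))
        waypoints[robot_id] = (tx, ty)
        remaining.remove((tx, ty))   # avoid sending two robots to the same spot
    return waypoints

if __name__ == "__main__":
    robots = {"UGV-1": (0.0, 0.0), "UGV-2": (10.0, 0.0)}
    targets = [(1.0, 2.0), (9.0, 1.0), (5.0, 5.0)]
    print(plan_waypoints(robots, targets))
```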

    Mixed Initiative Systems for Human-Swarm Interaction: Opportunities and Challenges

    Human-swarm interaction (HSI) involves a number of human factors that impact human behaviour throughout the interaction. As the technologies used within HSI advance, it becomes more tempting to increase the level of swarm autonomy within the interaction to reduce the workload on humans. Yet the prospective negative effects of high levels of autonomy on human situational awareness can hinder this process. Flexible autonomy aims to trade off these effects by changing the level of autonomy within the interaction when required, with mixed initiative combining human preferences and the automation's recommendations to select an appropriate level of autonomy at a given point in time. However, the effective implementation of mixed-initiative systems raises fundamental questions about how to combine human preferences and automation recommendations, how to realise the selected level of autonomy, and what the future impacts on the human's cognitive state will be. We explore open challenges that hamper the development of effective flexible autonomy. We then highlight the potential benefits of using system modelling techniques in HSI by illustrating how they provide HSI designers with an opportunity to evaluate different strategies for assessing the state of the mission and for adapting the level of autonomy within the interaction to maximise mission success metrics. Comment: Author version, accepted at the 2018 IEEE Annual Systems Modelling Conference, Canberra, Australia
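
    The core mixed-initiative question raised above is how to combine a human preference with an automation recommendation into a single autonomy level. The paper does not prescribe a specific rule; the sketch below assumes a simple workload-weighted blend, where the workload input and weighting are illustrative assumptions rather than the authors' method.

```python
# Hedged sketch of mixed-initiative autonomy-level selection: blend a
# human-preferred level with an automation-recommended one, weighted by an
# (assumed) estimate of operator workload.
def select_autonomy_level(human_pref: float,
                          automation_rec: float,
                          operator_workload: float) -> float:
    """Return an autonomy level in [0, 1].

    human_pref        -- operator's preferred level (0 = manual, 1 = full autonomy)
    automation_rec    -- level recommended by the automation
    operator_workload -- estimated workload in [0, 1]; higher workload shifts
                         weight toward the automation's recommendation.
    """
    w = min(max(operator_workload, 0.0), 1.0)          # clamp to [0, 1]
    level = (1.0 - w) * human_pref + w * automation_rec
    return min(max(level, 0.0), 1.0)

print(select_autonomy_level(human_pref=0.3, automation_rec=0.8, operator_workload=0.7))
```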

    Modeling the Impact of Operator Trust on Performance in Multiple Robot Control

    We developed a system dynamics model to simulate the impact of operator trust on performance in multiple robot control. Analysis of a simulated urban search and rescue experiment showed that operators decided to manually control the robots when they lost trust in the autonomous planner that was directing the robots. Operators who rarely used manual control performed the worst. However, the operators who most frequently used manual control reported higher workload and did not perform any better than operators with moderate manual control usage. Based on these findings, we implemented a model where trust and performance form a feedback loop, in which operators perceive the performance of the system, calibrate their trust, and adjust their control of the robots. A second feedback loop incorporates the impact of trust on cognitive workload and system performance. The model was able to replicate the quantitative performance of three groups of operators within 2.3%. This model could help us gain a greater understanding of how operators build and lose trust in automation and the impact of those changes in trust on performance and workload, which is crucial to the development of future systems involving human-automation collaboration
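
    The abstract describes two coupled feedback loops: trust calibrated from perceived performance, and workload driven by manual control. The fitted system dynamics equations are not given here, so the following is only a rough discrete-time sketch of such a loop; the update rules and constants are made up for illustration, not taken from the paper's model.

```python
# Rough discrete-time sketch of a trust-performance feedback loop: the operator
# perceives performance, calibrates trust, and adjusts manual control, which in
# turn feeds back into workload and performance. All constants are assumptions.
def simulate(steps=50, trust=0.5, autonomy_quality=0.7):
    history = []
    for _ in range(steps):
        manual_share = 1.0 - trust              # low trust -> more manual control
        workload = 0.3 + 0.6 * manual_share     # manual control raises workload
        # Moderate manual use can help; heavy manual control under high workload hurts.
        performance = autonomy_quality * trust + 0.5 * manual_share * (1.0 - workload)
        # Trust calibrates toward perceived performance (first-order lag).
        trust += 0.2 * (performance - trust)
        trust = min(max(trust, 0.0), 1.0)
        history.append((trust, workload, performance))
    return history

for t, w, p in simulate()[-3:]:
    print(f"trust={t:.2f} workload={w:.2f} performance={p:.2f}")
```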

    Common Metrics for Human-Robot Interaction

    This paper describes an effort to identify common metrics for task-oriented human-robot interaction (HRI). We begin by discussing the need for a toolkit of HRI metrics. We then describe the framework of our work and identify important biasing factors that must be taken into consideration. Finally, we present suggested common metrics for standardization and a case study. Preparation of a larger, more detailed toolkit is in progress
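
    As one concrete example of the kind of task-oriented HRI metric such a toolkit might standardize, fan-out (Olsen & Goodrich) estimates how many robots a single operator can manage from activity time and interaction time. Whether fan-out is among the metrics suggested in this paper is not stated in the abstract; the sketch below only illustrates how a metric of this kind is computed from timing logs.

```python
# Fan-out estimate: activity time divided by interaction time (AT / IT).
# Illustrative only; not necessarily one of the paper's proposed metrics.
def fan_out(activity_time: float, interaction_time: float) -> float:
    """Estimate how many robots one operator can manage."""
    if interaction_time <= 0:
        raise ValueError("interaction_time must be positive")
    return activity_time / interaction_time

# Example: a robot stays productive for 60 s after a 12 s servicing interaction.
print(fan_out(activity_time=60.0, interaction_time=12.0))   # -> 5.0
```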

    Human-robot Interaction For Multi-robot Systems

    Designing an effective human-robot interaction paradigm is particularly important for complex tasks such as multi-robot manipulation that require the human and robot to work together in a tightly coupled fashion. Although increasing the number of robots can expand the area that the robots can cover within a bounded period of time, a poor human-robot interface will ultimately compromise the performance of the team of robots. However, introducing a human operator to the team of robots does not automatically improve performance, due to the difficulty of teleoperating mobile robots with manipulators. The human operator's concentration is divided not only among multiple robots but also between controlling each robot's base and arm. This complexity substantially increases the potential neglect time, since the operator's inability to effectively attend to each robot during a critical phase of the task leads to a significant degradation in task performance. There are several proven paradigms for increasing the efficacy of human-robot interaction: 1) multimodal interfaces in which the user controls the robots using voice and gesture; 2) configurable interfaces which allow the user to create new commands by demonstrating them; 3) adaptive interfaces which reduce the operator's workload as necessary through increasing robot autonomy. This dissertation presents an evaluation of the relative benefits of different types of user interfaces for multi-robot systems composed of robots with wheeled bases and three-degree-of-freedom arms. It describes a design for constructing low-cost multi-robot manipulation systems from off-the-shelf parts. User expertise was measured along three axes (navigation, manipulation, and coordination), and participants who performed above threshold on two out of three dimensions on a calibration task were rated as expert. Our experiments reveal that the relative expertise of the user was the key determinant of the best-performing interface paradigm for that user, indicating that good user modeling is essential for designing a human-robot interaction system that will be used for an extended period of time. The contributions of the dissertation include: 1) a model for detecting operator distraction from robot motion trajectories; 2) adjustable autonomy paradigms for reducing operator workload; 3) a method for creating coordinated multi-robot behaviors from demonstrations with a single robot; 4) a user modeling approach for identifying expert-novice differences from short teleoperation traces
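
    The first listed contribution, detecting operator distraction from robot motion trajectories, is not detailed in this abstract. The sketch below shows one simple proxy for the idea: flag stretches where a teleoperated robot's trajectory shows essentially no motion for longer than a threshold. The idle-time rule, threshold, and trajectory format are assumptions, not the dissertation's model.

```python
# Hedged sketch: flag likely operator distraction when a teleoperated robot's
# trajectory barely moves for longer than a threshold (assumed proxy model).
import math

def distraction_intervals(trajectory, idle_threshold_s=5.0, motion_eps=0.05):
    """trajectory: list of (timestamp_s, x, y) samples, sorted by time.
    Returns (start, end) intervals where the robot barely moved for at least
    idle_threshold_s seconds."""
    intervals = []
    idle_start = None
    idle_end = None
    for (t0, x0, y0), (t1, x1, y1) in zip(trajectory, trajectory[1:]):
        moved = math.dist((x0, y0), (x1, y1)) > motion_eps
        if not moved:
            if idle_start is None:
                idle_start = t0
            idle_end = t1
        else:
            if idle_start is not None and idle_end - idle_start >= idle_threshold_s:
                intervals.append((idle_start, idle_end))
            idle_start = None
    if idle_start is not None and idle_end - idle_start >= idle_threshold_s:
        intervals.append((idle_start, idle_end))
    return intervals

samples = [(t, 1.0, 2.0) for t in range(12)]   # robot stationary for 11 s
print(distraction_intervals(samples))          # -> [(0, 11)]
```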

    A Realistic Simulation for Swarm UAVs and Performance Metrics for Operator User Interfaces

    Robots have been utilized to support disaster mitigation missions through exploration of areas that are either unreachable or hazardous for human rescuers [1]. The great potential of robotics for disaster mitigation has been recognized by the research community, and during the last decade much research has focused on developing robotic systems for this purpose. In this thesis, we describe the usage and classification of UAVs and the performance metrics that affect the control of UAVs. We also present new contributions to the UAV simulator developed by ECSL and RRL: the integration of the flight dynamics of the Hummingbird quadcopter, and distance optimization using a genetic algorithm
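
    The distance optimization mentioned above is not detailed in the abstract. The sketch below assumes a common formulation: a genetic algorithm over permutations of waypoints that minimizes total route length, with a simplified crossover and swap mutation. The encoding, operators, and parameters are illustrative, not the thesis's.

```python
# Sketch of a genetic algorithm minimizing total route length over waypoints
# (assumed permutation encoding; not the thesis's exact formulation).
import math
import random

def tour_length(order, points):
    return sum(math.dist(points[a], points[b]) for a, b in zip(order, order[1:]))

def genetic_route(points, pop_size=60, generations=200, mutation_rate=0.2, seed=0):
    rng = random.Random(seed)
    n = len(points)
    population = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda o: tour_length(o, points))
        survivors = population[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)                    # simplified ordered crossover
            child = p1[:cut] + [g for g in p2 if g not in p1[:cut]]
            if rng.random() < mutation_rate:             # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        population = survivors + children
    best = min(population, key=lambda o: tour_length(o, points))
    return best, tour_length(best, points)

waypoints = [(0, 0), (4, 1), (1, 5), (6, 4), (3, 3)]
order, dist = genetic_route(waypoints)
print(order, round(dist, 2))
```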

    Human-Robot Team Performance Compared to Full Robot Autonomy in 16 Real-World Search and Rescue Missions: Adaptation of the DARPA Subterranean Challenge

    Human operators in human-robot teams are commonly perceived to be critical for mission success. To explore the direct and perceived impact of operator input on task success and team performance, 16 real-world missions (10 hrs) were conducted based on the DARPA Subterranean Challenge. In these missions, a heterogeneous team of robots was deployed on a search task to locate and identify artifacts such as climbing rope, drills, and mannequins representing human survivors. Two conditions were evaluated: human operators who could control the robot team with state-of-the-art autonomy (Human-Robot Team) compared to autonomous missions without human operator input (Robot-Autonomy). Human-Robot Teams were often in directed autonomy mode (70% of mission time), found more items, traversed more distance, covered more unique ground, and had a longer time between safety-related events. Human-Robot Teams were faster at finding the first artifact, but slower to respond to information from the robot team. In routine conditions, scores were comparable for artifacts, distance, and coverage. Reasons for intervention included creating waypoints to prioritise high-yield areas and to navigate through error-prone spaces. After observing robot autonomy, operators reported increased robot competency and trust, but noted that robot behaviour was not always transparent and understandable, even after high mission performance. Comment: Submitted to Transactions on Human-Robot Interaction

    Distributed drone base station positioning for emergency cellular networks using reinforcement learning

    Due to the unpredictability of natural disasters, whenever a catastrophe happens it is vital not only that emergency rescue teams are prepared, but also that a functional communication network infrastructure is in place. Hence, in order to prevent additional losses of human lives, it is crucial that network operators are able to deploy an emergency infrastructure as fast as possible. In this sense, the deployment of an intelligent, mobile, and adaptable network through the usage of drones (unmanned aerial vehicles) is being considered as one possible alternative for emergency situations. In this paper, an intelligent solution based on reinforcement learning is proposed in order to find the best positions for multiple drone small cells (DSCs) in an emergency scenario. The proposed solution's main goal is to maximize the number of users covered by the system, while the drones are limited by both backhaul and radio access network constraints. Results show that the proposed Q-learning solution largely outperforms all other approaches with respect to all metrics considered. Hence, intelligent DSCs are considered a good alternative for enabling the rapid and efficient deployment of an emergency communication network
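
    The abstract does not give the Q-learning formulation, so the sketch below only illustrates the general idea on a toy problem: a single drone small cell on a coarse grid learns positioning actions that maximize the number of covered users. The state/action/reward design and user layout are assumptions, and the backhaul and radio access constraints from the paper are omitted.

```python
# Toy Q-learning sketch for drone small cell positioning: the drone moves on a
# grid and is rewarded by the number of users it covers (assumed formulation).
import random

GRID = 5                                   # 5x5 candidate positions
USERS = [(0, 1), (1, 1), (1, 2), (4, 4)]   # illustrative user locations
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]   # move or hover

def covered(pos, radius=1):
    """Number of users within Chebyshev distance `radius` of the drone."""
    return sum(max(abs(pos[0] - ux), abs(pos[1] - uy)) <= radius for ux, uy in USERS)

def step(pos, action):
    x = min(max(pos[0] + action[0], 0), GRID - 1)
    y = min(max(pos[1] + action[1], 0), GRID - 1)
    new_pos = (x, y)
    return new_pos, covered(new_pos)       # reward = users covered after moving

def q_learning(episodes=500, steps=20, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {}                                 # (state, action) -> value
    for _ in range(episodes):
        pos = (rng.randrange(GRID), rng.randrange(GRID))
        for _ in range(steps):
            if rng.random() < eps:
                a = rng.choice(ACTIONS)                     # explore
            else:
                a = max(ACTIONS, key=lambda act: q.get((pos, act), 0.0))
            nxt, reward = step(pos, a)
            best_next = max(q.get((nxt, act), 0.0) for act in ACTIONS)
            old = q.get((pos, a), 0.0)
            q[(pos, a)] = old + alpha * (reward + gamma * best_next - old)
            pos = nxt
    return q

q = q_learning()
start = (4, 0)
greedy = max(ACTIONS, key=lambda act: q.get((start, act), 0.0))
print("greedy action from", start, "is", greedy)
```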