
    Mixed Initiative Systems for Human-Swarm Interaction: Opportunities and Challenges

    Human-swarm interaction (HSI) involves a number of human factors that affect human behaviour throughout the interaction. As the technologies used within HSI advance, it becomes tempting to increase the level of swarm autonomy to reduce the workload on humans. Yet the prospective negative effects of high levels of autonomy on human situational awareness can hinder this process. Flexible autonomy aims to trade off these effects by changing the level of autonomy within the interaction when required, with mixed initiatives combining human preferences and automation recommendations to select an appropriate level of autonomy at a given point in time. However, the effective implementation of mixed-initiative systems raises fundamental questions about how to combine human preferences and automation recommendations, how to realise the selected level of autonomy, and what the future impacts on a human's cognitive states are. We explore open challenges that hamper the development of effective flexible autonomy. We then highlight the potential benefits of using system modelling techniques in HSI by illustrating how they give HSI designers an opportunity to evaluate different strategies for assessing the state of the mission and for adapting the level of autonomy within the interaction to maximise mission success metrics.
    Comment: Author version, accepted at the 2018 IEEE Annual Systems Modelling Conference, Canberra, Australia
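    The core mixed-initiative question the abstract raises, how to combine a human preference with an automation recommendation into one autonomy level, can be sketched minimally. This is an illustrative assumption, not the paper's method: the weighted blend, the `trust_weight` parameter, and the discrete level set are all hypothetical.

    ```python
    # Hypothetical sketch: blend a human-preferred autonomy level with an
    # automation recommendation into a single mixed-initiative choice.
    # The weighting scheme and level set are illustrative, not from the paper.

    def select_autonomy_level(human_pref, automation_rec, trust_weight=0.5):
        """Blend two proposed autonomy levels (0 = manual .. 1 = fully autonomous).

        trust_weight is the weight given to the automation's recommendation;
        a mission-state assessor could adapt it over the course of a mission.
        """
        blended = (1 - trust_weight) * human_pref + trust_weight * automation_rec
        # Snap to the nearest discrete level the swarm controller supports.
        levels = [0.0, 0.25, 0.5, 0.75, 1.0]
        return min(levels, key=lambda lvl: abs(lvl - blended))

    print(select_autonomy_level(0.25, 0.75))  # -> 0.5
    ```

    A system-modelling evaluation of the kind the abstract proposes could then compare different `trust_weight` adaptation strategies against mission success metrics.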

    Effects of alarms on control of robot teams

    Annunciator-driven supervisory control (ADSC) is a widely used technique for directing human attention in the control of systems otherwise beyond human capabilities. ADSC requires associating abnormal parameter values with alarms in such a way that operator attention can be directed toward the involved subsystems or conditions. This is hard to achieve in multirobot control because it is difficult to distinguish abnormal conditions across the states of a robot team. For largely independent tasks such as foraging, however, self-reflection can serve as a basis for alerting the operator to abnormalities of individual robots. While the search for targets remains unalarmed, the resulting system approximates ADSC. The described experiment compares a control condition, in which operators perform a multirobot urban search and rescue (USAR) task without alarms, with ADSC (freely annunciated) and with a decision aid that limits operator workload by showing only the top alarm. No differences were found in area searched or victims found; however, operators in the freely annunciated condition were faster in detecting both the annunciated failures and victims entering their cameras' fields of view. Copyright 2011 by Human Factors and Ergonomics Society, Inc. All rights reserved.
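    The "top alarm" decision aid described above can be sketched as a simple prioritisation over the set of active alarms. The `Alarm` fields and the tie-breaking rule are assumptions for illustration, not details from the study.

    ```python
    # Illustrative sketch of a top-alarm decision aid: from all active robot
    # alarms, surface only the highest-priority one to limit operator workload.
    # Field names and the priority rule are assumed, not from the paper.
    from dataclasses import dataclass

    @dataclass
    class Alarm:
        robot_id: str
        condition: str
        severity: int   # higher = more urgent
        age_s: float    # seconds since the alarm was raised

    def top_alarm(alarms):
        """Return the single alarm to display, or None if none are active."""
        if not alarms:
            return None
        # Prioritise by severity, breaking ties in favour of older alarms.
        return max(alarms, key=lambda a: (a.severity, a.age_s))

    active = [Alarm("r1", "stuck", 2, 12.0), Alarm("r2", "low_battery", 3, 4.0)]
    print(top_alarm(active).robot_id)  # -> r2
    ```

    A freely annunciated condition, by contrast, would present the full `active` list to the operator rather than a single element.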

    Effects of spatial ability on multi-robot control tasks

    Working with large teams of robots is a complex and demanding task for any operator, and individual differences in spatial ability could significantly affect performance. In the present study, we examine data from two earlier experiments to investigate the effects of perspective-taking ability on performance in an urban search and rescue (USAR) task using a realistic simulation and alternate displays. We evaluated the participants' spatial ability using a standard measure of spatial orientation and examined differences in accuracy and speed in locating victims and in perceived workload. Our findings show that operators with higher spatial ability experienced less workload and marked victims more precisely. An interaction was found for the experimental image-queue display, with which participants with low spatial ability improved significantly in their accuracy in marking victims over the traditional streaming-video display. Copyright 2011 by Human Factors and Ergonomics Society, Inc. All rights reserved.

    Autonomous Capabilities for Small Unmanned Aerial Systems Conducting Radiological Response: Findings from a High-fidelity Discovery Experiment

    This article presents a preliminary work domain theory and identifies autonomous vehicle, navigational, and mission capabilities and challenges for small unmanned aerial systems (SUASs) responding to a radiological disaster. Radiological events are representative of applications that involve flying at low altitudes and in close proximity to structures. To more formally understand the guidance and control demands, the environment in which the SUAS has to function, and the expected missions, tasks, and strategies for responding to an incident, a discovery experiment was performed in 2013. The experiment placed a radiological source emitting at 10 times background radiation in the simulated collapse of a multistory hospital. Two SUASs, an AirRobot 100B and a Leptron Avenger, were inserted with subject matter experts into the response, providing high operational fidelity. The responders expected the SUASs to fly at altitudes between 0.3 and 30 m and to hover at 1.5 m from urban structures. The proximity to a building introduced a decrease in GPS satellite coverage, challenging existing vehicle autonomy. Five new navigational capabilities were identified: scan, obstacle avoidance, contour following, environment-aware return to home, and return to highest reading. Furthermore, the data-to-decision process could be improved with autonomous data digestion and visualization capabilities. This article is expected to contribute to a better understanding of autonomy in a SUAS, serve as a requirements document for advanced autonomy, and illustrate how discovery experimentation serves as a design tool for autonomous vehicles.
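    Of the five navigational capabilities listed, "return to highest reading" has a particularly compact core: log geotagged radiation samples during a sortie, then navigate back to the location of the maximum. The sketch below is a hedged illustration under an assumed log format of `(x, y, z, reading)` tuples; it is not the authors' implementation.

    ```python
    # Hypothetical sketch of a "return to highest reading" capability:
    # pick the waypoint where the maximum radiation reading was logged.
    # The (x, y, z, reading) sample format is an assumption for illustration.

    def highest_reading_waypoint(log):
        """log: list of (x, y, z, reading) samples; returns the (x, y, z)
        of the maximum reading, or None if no samples were logged."""
        if not log:
            return None
        x, y, z, _ = max(log, key=lambda sample: sample[3])
        return (x, y, z)

    sortie = [(0, 0, 2, 1.1), (5, 3, 2, 9.7), (8, 1, 2, 4.2)]
    print(highest_reading_waypoint(sortie))  # -> (5, 3, 2)
    ```

    A fielded version would additionally need the obstacle-avoidance and GPS-degradation handling the abstract identifies, since the highest reading is likely to sit close to structures.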

    Human-Robot Team Performance Compared to Full Robot Autonomy in 16 Real-World Search and Rescue Missions: Adaptation of the DARPA Subterranean Challenge

    Human operators in human-robot teams are commonly perceived to be critical for mission success. To explore the direct and perceived impact of operator input on task success and team performance, 16 real-world missions (10 hrs) were conducted based on the DARPA Subterranean Challenge. These missions deployed a heterogeneous team of robots on a search task to locate and identify artifacts such as climbing rope, drills, and mannequins representing human survivors. Two conditions were evaluated: human operators who could control the robot team alongside state-of-the-art autonomy (Human-Robot Team), compared to autonomous missions without human operator input (Robot-Autonomy). Human-Robot Teams were often in directed autonomy mode (70% of mission time), found more items, traversed more distance, covered more unique ground, and had a longer time between safety-related events. Human-Robot Teams were faster at finding the first artifact but slower to respond to information from the robot team. In routine conditions, scores were comparable for artifacts, distance, and coverage. Reasons for intervention included creating waypoints to prioritise high-yield areas and navigating through error-prone spaces. After observing robot autonomy, operators reported increases in perceived robot competency and trust, but also that robot behaviour was not always transparent and understandable, even after high mission performance.
    Comment: Submitted to Transactions on Human-Robot Interaction
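    A metric like the reported "70% of mission time in directed autonomy mode" reduces to integrating a timestamped mode log over the mission. The sketch below assumes a log of `(start_time, mode_name)` entries sorted by time; the format and names are illustrative, not from the study.

    ```python
    # Illustrative sketch: fraction of mission time spent in a given autonomy
    # mode, computed from a timestamped mode log. Log format is assumed.

    def mode_fraction(log, mode, mission_end):
        """log: time-sorted list of (t_start, mode_name) entries.
        Returns the fraction of [log[0] start, mission_end] spent in `mode`."""
        total = mission_end - log[0][0]
        in_mode = 0.0
        # Pair each entry with the start of the next one to get its interval.
        for (t0, m), (t1, _) in zip(log, log[1:] + [(mission_end, None)]):
            if m == mode:
                in_mode += t1 - t0
        return in_mode / total

    log = [(0, "directed"), (30, "autonomous"), (40, "directed")]
    print(mode_fraction(log, "directed", 100))  # -> 0.9
    ```

    The same log supports the other timing measures the abstract reports, such as time to first artifact or time between safety-related events.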