
    Effects of alarms on control of robot teams

    Annunciator driven supervisory control (ADSC) is a widely used technique for directing human attention in the supervision of systems that would otherwise be beyond an operator's capabilities. ADSC requires associating abnormal parameter values with alarms so that operator attention can be directed toward the affected subsystems or conditions. This is hard to achieve in multirobot control because it is difficult to define abnormal conditions over the states of a robot team. For largely independent tasks such as foraging, however, self-reflection can serve as a basis for alerting the operator to abnormalities of individual robots. As long as the search for targets remains unalarmed, the resulting system approximates ADSC. The experiment described here compares a control condition in which operators perform a multirobot urban search and rescue (USAR) task without alarms, an ADSC (freely annunciated) condition, and a decision aid that limits operator workload by showing only the top alarm. No differences were found in area searched or victims found; however, operators in the freely annunciated condition were faster at detecting both the annunciated failures and victims entering their cameras' fields of view.
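    The contrast between free annunciation and the top-alarm decision aid can be made concrete with a minimal sketch. This is ours, not the paper's: the `Annunciator` and `Alarm` names and the numeric priority field are assumptions for illustration.

```python
# Sketch (assumed design): an alarm board that either shows every active
# alarm (freely annunciated) or only the single most urgent one (decision aid).
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Alarm:
    priority: int                      # lower value = more urgent (assumed scale)
    robot_id: str = field(compare=False)
    condition: str = field(compare=False)

class Annunciator:
    def __init__(self, top_only: bool = False):
        self.top_only = top_only       # True models the "top alarm" decision aid
        self._alarms: list[Alarm] = []

    def raise_alarm(self, alarm: Alarm) -> None:
        heapq.heappush(self._alarms, alarm)

    def clear(self, robot_id: str) -> None:
        # Drop alarms for a robot the operator has serviced.
        self._alarms = [a for a in self._alarms if a.robot_id != robot_id]
        heapq.heapify(self._alarms)

    def display(self) -> list[Alarm]:
        if not self._alarms:
            return []
        if self.top_only:
            return [self._alarms[0]]   # decision-aid condition: single top alarm
        return sorted(self._alarms)    # freely annunciated condition: all alarms

board = Annunciator(top_only=True)
board.raise_alarm(Alarm(2, "r7", "low battery"))
board.raise_alarm(Alarm(1, "r3", "stuck"))
print(board.display())  # only the 'stuck' alarm is shown
```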

    Human control strategies for multi-robot teams

    The difficulty of expanding the human span of control over teams of robots is an obstacle to the wider deployment of robots for practical tasks in a variety of areas. One source of this difficulty is that many different types of human interaction may be necessary to maintain and control a robot team. We have developed a taxonomy of human-robot tasks based on complexity of control that helps explicate the forms of control likely to be needed and the demands they place on human operators. In this paper we use research from two of these areas to illustrate the taxonomy and its utility in characterizing and improving human-robot interaction.

    Task switching and cognitively compatible guidance for control of multiple robots

    Decision aiding sometimes fails not because following guidance would not improve performance but because humans have difficulty following guidance as it is presented to them. This paper presents a new analysis of data from multi-robot control experiments in which guidance toward a demonstrably superior robot selection strategy failed to improve performance. We had earlier suggested that this failure to benefit might be related to loss of volition in switching between the robots being controlled. Here we present new data indicating that the spatial, and hence cognitive, proximity of robots may play a role in making volitional switches more effective. Foraging tasks, such as search and rescue or reconnaissance, in which UVs are either relatively sparse and unlikely to interfere with one another or employ automated path planning, form a broad class of applications in which multiple robots can be controlled sequentially in a round-robin fashion. Such human-robot systems can be described as a queuing system in which the human acts as a server while robots presenting requests for service are the jobs. The possibility of improving system performance through well-known scheduling techniques is an immediate consequence. Two experiments investigating scheduling interventions are described. The first compared a system in which all anomalous robots were alarmed (Alarm), one in which alarms were presented singly in the order in which they arrived (FIFO), and a Control condition without alarms. The second experiment employed failures of varying difficulty, supporting an optimal shortest-job-first (SJF) policy. SJF, FIFO, and Alarm conditions were compared. In both experiments performance in the directed-attention conditions was poorer than predicted. This paper presents new data comparing the spatial proximity of switches between robots selected by the operator (Alarm conditions) and those dictated by the system (FIFO and SJF conditions).
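    The queuing-system view above maps directly onto a scheduling policy. A minimal sketch of the two dictated policies follows; the names (`ServiceRequest`, `next_job`) and the assumption that service times can be estimated in advance are ours, not the paper's.

```python
# Sketch: the operator as a single server choosing the next robot to service.
from dataclasses import dataclass

@dataclass
class ServiceRequest:
    robot_id: str
    arrival_time: float       # when the robot's fault was annunciated
    est_service_time: float   # estimated effort to resolve the fault (assumed known)

def next_job(queue: list[ServiceRequest], policy: str) -> ServiceRequest:
    """Pick which waiting robot the operator should service next."""
    if policy == "FIFO":      # serve faults in order of arrival
        return min(queue, key=lambda r: r.arrival_time)
    if policy == "SJF":       # shortest job first: optimal for mean waiting time
        return min(queue, key=lambda r: r.est_service_time)
    raise ValueError(f"unknown policy: {policy}")

queue = [ServiceRequest("r1", 0.0, 40.0), ServiceRequest("r2", 5.0, 10.0)]
print(next_job(queue, "FIFO").robot_id)  # r1 (arrived first)
print(next_job(queue, "SJF").robot_id)   # r2 (quicker to fix)
```

    In the Alarm conditions the operator picks freely from the full queue; FIFO and SJF instead dictate the switch, which is where the volition effects discussed above enter.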

    Task Switching and Single vs. Multiple Alarms for Supervisory Control of Multiple Robots

    Foraging tasks, such as search and rescue or reconnaissance, in which UVs are either relatively sparse and unlikely to interfere with one another or employ automated path planning, form a broad class of applications in which multiple robots can be controlled sequentially in a round-robin fashion. Such human-robot systems can be described as a queuing system in which the human acts as a server while robots presenting requests for service are the jobs. The possibility of improving system performance through well-known scheduling techniques is an immediate consequence. Unfortunately, real human-multirobot systems are more complex, often requiring operator monitoring and other ancillary tasks. Improving performance through scheduling jobs under these conditions requires minimizing the effort expended on monitoring and directing the operator's attention to the robot offering the most gain. Two experiments investigating scheduling interventions are described. The first compared a system in which all anomalous robots were alarmed (Open-queue), one in which alarms were presented singly in the order in which they arrived (FIFO), and a Control condition without alarms. The second experiment employed failures of varying difficulty, supporting an optimal shortest-job-first (SJF) policy. SJF, FIFO, and Open-queue conditions were compared. In both experiments performance in the directed-attention conditions was poorer than predicted. A possible explanation based on effects of volition in task switching is proposed.
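    Why SJF is the optimal policy here follows from the standard single-server result (a textbook exchange argument, not derived in the paper): each job waits for every job served before it, so earlier queue positions carry larger weights and total waiting time is minimized by serving the shortest jobs first.

```latex
% Job k waits for all jobs served before it, so the mean waiting time over
% n jobs with service times s_1..s_n served in positions 1..n is
W = \frac{1}{n}\sum_{k=1}^{n}\sum_{i=1}^{k-1} s_i
  = \frac{1}{n}\sum_{i=1}^{n}(n-i)\,s_i
% The weight (n - i) shrinks with position i, so W is minimized by placing
% the smallest service times in the earliest positions: shortest job first.
```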

    Attention Allocation for Human Multi-Robot Control: Cognitive Analysis based on Behavior Data and Hidden States

    Human multi-robot interaction exploits both the human operator's high-level decision-making skills and the robotic agents' superior computing and motion abilities. While controlling multi-robot teams, an operator's attention must constantly shift between individual robots to maintain sufficient situation awareness. To conserve the operator's attentional resources, a robot able to reflect on its own abnormal status can help the operator focus attention on emergent tasks rather than on unneeded routine checks. With the proposed self-reflection aids, human-robot interaction becomes a queuing framework in which the robots act as clients requesting interaction and the operator acts as the server responding to these job requests. This paper examines two queuing schemes: a self-paced Open-queue that identifies all robots' normal/abnormal conditions, and a forced-paced shortest-job-first (SJF) queue that shows a single robot's request at a time, following the SJF approach. Because a robot may misreport its failures in various situations, the effects of imperfect automation were also investigated. The results suggest that the SJF attentional scheduling approach can provide stable performance in both the primary (locate potential targets) and secondary (resolve robots' failures) tasks, regardless of the system's reliability level. However, conventional results (e.g., number of targets marked) present little information about users' underlying cognitive strategies and may fail to reflect the user's true intent. As understanding users' intentions is critical to providing appropriate cognitive aids that enhance task performance, a Hidden Markov Model (HMM) is used to examine operators' underlying cognitive intent and identify the unobservable cognitive states. The HMM results demonstrate fundamental differences among the queuing mechanisms and reliability conditions. The findings suggest that HMMs can be helpful in investigating the use of human cognitive resources in multitasking environments.
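    As a sketch of the HMM idea, a minimal Viterbi decoder recovers a most-likely sequence of hidden cognitive states from a sequence of observed operator actions. Everything here is illustrative: the two states, three actions, and all probabilities are assumed, not taken from the paper.

```python
# Minimal Viterbi decoder: most likely hidden-state path given observations.
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """obs: observation indices; returns the most likely hidden-state path."""
    n_states = trans_p.shape[0]
    T = len(obs)
    logv = np.full((T, n_states), -np.inf)      # best log-probability so far
    back = np.zeros((T, n_states), dtype=int)   # backpointers for path recovery
    logv[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for s in range(n_states):
            scores = logv[t - 1] + np.log(trans_p[:, s])
            back[t, s] = np.argmax(scores)
            logv[t, s] = scores[back[t, s]] + np.log(emit_p[s, obs[t]])
    path = [int(np.argmax(logv[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy model: 2 hidden states (monitoring, repairing) and 3 observable actions
# (select robot, mark target, issue waypoint); all values are made up.
start = np.array([0.6, 0.4])
trans = np.array([[0.8, 0.2], [0.3, 0.7]])
emit  = np.array([[0.5, 0.4, 0.1], [0.2, 0.1, 0.7]])
print(viterbi([0, 1, 2, 2, 0], start, trans, emit))
```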

    Neglect Benevolence in Human-Swarm Interaction with Communication Latency

    In practical applications of robot swarms with bio-inspired behaviors, a human operator will need to exert control over the swarm to fulfill the mission objectives. In many operational settings, human operators are remotely located and the communication environment is harsh, so there is latency in the transfer of information (or control commands) between the human and the swarm. In this paper, we conduct human-swarm interaction experiments to investigate the effects of communication latency on the performance of a human-swarm system in a swarm foraging task. We develop and investigate the concept of neglect benevolence, in which a human operator allows the swarm to evolve on its own and stabilize before giving new commands. Our experimental results indicate that operators exploited neglect benevolence in different ways to develop successful strategies in the foraging task. Furthermore, we show experimentally that the use of a predictive display can help mitigate the adverse effects of communication latency.
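    The abstract does not spell out the predictive-display algorithm; a common baseline for such displays is simple dead reckoning, sketched below under that assumption (the function and variable names are ours).

```python
# Sketch (assumed technique): a predictive display that extrapolates the last
# received robot positions forward by the known one-way communication latency.
import numpy as np

def predict_state(last_pos: np.ndarray,
                  last_vel: np.ndarray,
                  latency_s: float) -> np.ndarray:
    """Dead-reckon received positions forward by the communication delay."""
    return last_pos + last_vel * latency_s

# Example: 3 robots with 2D positions and velocities, 1.5 s latency.
pos = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 1.0]])
vel = np.array([[0.1, 0.0], [0.0, -0.2], [0.05, 0.05]])
print(predict_state(pos, vel, latency_s=1.5))
```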

    On Provably Safe and Live Multirobot Coordination With Online Goal Posting

    A standing challenge in multirobot systems is to realize safe and efficient motion planning and coordination methods that are capable of accounting for uncertainties and contingencies. The challenge is rendered harder by the fact that robots may be heterogeneous and that their plans may be posted asynchronously. Most existing approaches require constraints on the infrastructure or unrealistic assumptions about robot models. In this article, we propose a centralized, loosely coupled supervisory controller that overcomes these limitations. The approach responds to newly posed constraints and uncertainties during trajectory execution, ensuring at all times that planned robot trajectories remain kinodynamically feasible, that the fleet is in a safe state, and that there are no deadlocks or livelocks. This is achieved without the need for hand-coded rules, fixed robot priorities, or environment modification. We formally state all relevant properties of robot behavior in the most general terms possible, without assuming particular robot models or environments, and provide both formal and empirical proof that the proposed fleet control algorithms guarantee safety and liveness.
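    The no-deadlock guarantee is proved in the article itself; as a generic illustration of the kind of condition a supervisory coordinator must detect, the sketch below checks a wait-for graph among robots for cycles. This is a standard technique, not the article's algorithm.

```python
# Sketch: robot A "waits for" robot B when B holds a shared region A needs.
# A cycle in this wait-for graph means the fleet is deadlocked.
def has_deadlock(waits_for: dict[str, set[str]]) -> bool:
    """Cycle detection by depth-first search over the wait-for graph."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {r: WHITE for r in waits_for}
    def dfs(r: str) -> bool:
        color[r] = GRAY
        for nxt in waits_for.get(r, set()):
            if color.get(nxt, WHITE) == GRAY:
                return True               # back edge: cycle found
            if color.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        color[r] = BLACK
        return False
    return any(color[r] == WHITE and dfs(r) for r in waits_for)

# r1 waits for r2, r2 for r3, r3 for r1: deadlocked.
print(has_deadlock({"r1": {"r2"}, "r2": {"r3"}, "r3": {"r1"}}))  # True
```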

    Teams organization and performance analysis in autonomous human-robot teams

    This paper proposes a theory of human control of robot teams based on considering how people coordinate across different task allocations. Our current work focuses on domains such as foraging in which robots perform largely independent tasks. The present study addresses the interaction between automation and the organization of human teams in controlling large robot teams performing an Urban Search and Rescue (USAR) task. We identify three subtasks: perceptual search (visual search for victims), assistance (teleoperation to assist a robot), and navigation (path planning and coordination). For the studies reported here, navigation was selected for automation because it involves weak dependencies among robots, making it more complex, and because an earlier experiment showed it to be the most difficult. This paper reports an extended analysis of two conditions from a larger four-condition study. In these two "shared pool" conditions, twenty-four simulated robots were controlled by teams of two participants. Sixty paid participants (30 teams) were recruited to perform the shared-pool tasks, in which participants shared control of the 24 UGVs and viewed the same screens. Teams in the manual control condition issued waypoints to navigate their robots; in the autonomy condition, robots generated their own waypoints using distributed path planning. We identify three self-organizing team strategies in the shared-pool conditions: joint control, in which operators share full authority over the robots; mixed control, in which one operator takes primary control while the other acts as an assistant; and split control, in which operators divide the robots, each controlling a sub-team. Automating path planning improved system performance. Effects of team organization favored operator teams who shared authority over the pool of robots.

    Mixed Initiative Systems for Human-Swarm Interaction: Opportunities and Challenges

    Human-swarm interaction (HSI) involves a number of human factors that shape human behaviour throughout the interaction. As the technologies used within HSI advance, it becomes more tempting to increase the level of swarm autonomy within the interaction to reduce the workload on humans. Yet the prospective negative effects of high levels of autonomy on human situational awareness can hinder this process. Flexible autonomy aims at trading off these effects by changing the level of autonomy within the interaction when required, with mixed initiative combining human preferences and the automation's recommendations to select an appropriate level of autonomy at a given point in time. However, the effective implementation of mixed-initiative systems raises fundamental questions about how to combine human preferences and automation recommendations, how to realise the selected level of autonomy, and what the future impacts on the human's cognitive states are. We explore open challenges that hamper the development of effective flexible autonomy. We then highlight the potential benefits of using system modelling techniques in HSI by illustrating how they provide HSI designers with an opportunity to evaluate different strategies for assessing the state of the mission and for adapting the level of autonomy within the interaction to maximise mission success metrics.
    Comment: Author version, accepted at the 2018 IEEE Annual Systems Modelling Conference, Canberra, Australia
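    How to combine human preferences with automation recommendations is exactly the open question this paper raises; one naive possibility, sketched purely as an assumption (the confidence weighting, the 0-5 level scale, and all names are ours), is a confidence-weighted blend of the two proposed levels.

```python
# Sketch (assumed scheme): blend the automation's recommended autonomy level
# with the human's preferred level, weighted by the automation's confidence.
def select_autonomy_level(human_pref: int,
                          automation_rec: int,
                          automation_confidence: float,
                          levels: range = range(0, 6)) -> int:
    """Pick the discrete level nearest the confidence-weighted blend."""
    blended = (automation_confidence * automation_rec
               + (1.0 - automation_confidence) * human_pref)
    return min(levels, key=lambda lv: abs(lv - blended))

# Human prefers low autonomy (1); automation recommends high (5) with fairly
# high confidence, so the blended choice lands in between.
print(select_autonomy_level(human_pref=1, automation_rec=5,
                            automation_confidence=0.7))  # -> 4
```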