
    Effects of alarms on control of robot teams

    Annunciator driven supervisory control (ADSC) is a widely used technique for directing human attention to control systems otherwise beyond their capabilities. ADSC requires associating abnormal parameter values with alarms in such a way that operator attention can be directed toward the involved subsystems or conditions. This is hard to achieve in multirobot control because it is difficult to distinguish abnormal conditions for states of a robot team. For largely independent tasks such as foraging, however, self-reflection can serve as a basis for alerting the operator to abnormalities of individual robots. While the search for targets remains unalarmed, the resulting system approximates ADSC. The described experiment compares a control condition, in which operators perform a multirobot urban search and rescue (USAR) task without alarms, with an ADSC (freely annunciated) condition and with a decision aid that limits operator workload by showing only the top alarm. No differences were found in area searched or victims found; however, operators in the freely annunciated condition were faster in detecting both the annunciated failures and victims entering their cameras' fields of view. Copyright 2011 by Human Factors and Ergonomics Society, Inc. All rights reserved.
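
    As a rough illustration of the two alarmed conditions, the sketch below contrasts free annunciation (every active alarm shown) with the workload-limiting decision aid (only the single highest-priority alarm shown). This is a hypothetical sketch, not the study's software; the Alarm fields and severity values are illustrative assumptions.

    ```python
    # Hypothetical sketch of the two alarm-display policies compared above.
    from dataclasses import dataclass

    @dataclass
    class Alarm:
        robot_id: int
        condition: str   # e.g. "stuck", "sensor_fault"
        severity: float  # higher means more urgent

    def freely_annunciated(alarms: list[Alarm]) -> list[Alarm]:
        """Show every active alarm, most severe first."""
        return sorted(alarms, key=lambda a: a.severity, reverse=True)

    def top_alarm_aid(alarms: list[Alarm]) -> list[Alarm]:
        """Limit operator workload by showing only the top alarm."""
        return [max(alarms, key=lambda a: a.severity)] if alarms else []

    active = [Alarm(3, "stuck", 0.9), Alarm(7, "sensor_fault", 0.4)]
    print(freely_annunciated(active))  # both alarms, robot 3 first
    print(top_alarm_aid(active))       # only robot 3's alarm
    ```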

    Teams organization and performance analysis in autonomous human-robot teams

    This paper proposes a theory of human control of robot teams based on considering how people coordinate across different task allocations. Our current work focuses on domains such as foraging in which robots perform largely independent tasks. The present study addresses the interaction between automation and organization of human teams in controlling large robot teams performing an Urban Search and Rescue (USAR) task. We identify three subtasks: perceptual search (visual search for victims), assistance (teleoperation to assist a robot), and navigation (path planning and coordination). For the studies reported here, navigation was selected for automation because it involves weak dependencies among robots, making it more complex, and because it was shown in an earlier experiment to be the most difficult. This paper reports an extended analysis of two conditions from a larger four-condition study. In these two "shared pool" conditions, twenty-four simulated robots were controlled by teams of two participants. Sixty paid participants (30 teams) were recruited to perform the shared pool tasks, in which participants shared control of the 24 UGVs and viewed the same screens. Groups in the manual control condition issued waypoints to navigate their robots. In the autonomy condition, robots generated their own waypoints using distributed path planning. We identify three self-organizing team strategies in the shared pool condition: joint control, in which operators share full authority over robots; mixed control, in which one operator takes primary control while the other acts as an assistant; and split control, in which operators divide the robots, with each controlling a sub-team. Automating path planning improved system performance. Effects of team organization favored operator teams who shared authority for the pool of robots. © 2010 ACM
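
    The three self-organizing strategies can be pictured as different robot-to-operator authority mappings over the shared pool. The sketch below is our own illustrative assumption (names, the "assistant" role, and the even split are not from the paper), meant only to make the distinctions concrete.

    ```python
    # Hypothetical sketch of joint, mixed, and split control over a shared pool.
    def joint_control(robots, operators):
        """Both operators share full authority over every robot."""
        return {r: set(operators) for r in robots}

    def mixed_control(robots, operators):
        """One primary operator controls all robots; the other assists ad hoc."""
        primary, assistant = operators
        return {r: {primary} for r in robots}

    def split_control(robots, operators):
        """Operators divide the pool, each controlling a sub-team."""
        half = len(robots) // 2
        return {**{r: {operators[0]} for r in robots[:half]},
                **{r: {operators[1]} for r in robots[half:]}}

    robots = [f"UGV-{i}" for i in range(24)]
    print(split_control(robots, ["op1", "op2"])["UGV-0"])  # {'op1'}
    ```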

    A Predictive Model for Human-Unmanned Vehicle Systems : Final Report

    Advances in automation are making it possible for a single operator to control multiple unmanned vehicles (UVs). This capability is desirable in order to reduce the operational costs of human-UV systems (HUVS), extend human capabilities, and improve system effectiveness. However, the high complexity of these systems introduces many significant challenges to system designers. To help understand and overcome these challenges, high-fidelity computational models of the HUVS must be developed. These models should have two capabilities. First, they must be able to describe the behavior of the various entities in the team, including both the human operator and the UVs. Second, they must be able to predict how changes in the HUVS and its mission will alter the performance characteristics of the system. In this report, we describe our work toward developing such a model. Via user studies, we show that our model can describe the behavior of a HUVS consisting of a single human operator and multiple independent UVs with homogeneous capabilities. We also evaluate the model's ability to predict how changes in the team size, the human-UV interface, the UVs' autonomy levels, and operator strategies affect the system's performance. Prepared for MIT Lincoln Laboratory.
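
    One common way such a predictive model can be framed, offered here purely as an assumption on our part rather than the report's actual model, treats the operator as a single server whose attention each UV needs for a share of every work cycle; the function names and parameters below are illustrative.

    ```python
    # Crude analytic sketch: predicted per-UV productive-time fraction as the
    # team grows, given mean interaction time (IT) and neglect time (NT).
    def predicted_performance(n_uvs: int, interaction_time: float,
                              neglect_time: float) -> float:
        cycle = interaction_time + neglect_time
        demand = n_uvs * interaction_time / cycle   # operator utilization
        if demand <= 1.0:
            wait = 0.0                              # operator keeps up
        else:
            # excess demand becomes waiting time, spread across the team
            wait = (demand - 1.0) * cycle / n_uvs
        return neglect_time / (cycle + wait)        # productive fraction per UV

    for n in (2, 4, 8):
        print(n, round(predicted_performance(n, interaction_time=15, neglect_time=60), 2))
    ```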

    Identifying Predictive Metrics for Supervisory Control of Multiple Robots

    In recent years, much research has focused on making single-operator control of multiple robots possible. In these high-workload situations, many questions arise, including how many robots should be in the team, which autonomy levels they should employ, and when these autonomy levels should change. To answer these questions, sets of metric classes should be identified that capture these aspects of the human-robot team. Such a set of metric classes should have three properties. First, it should contain the key performance parameters of the system. Second, it should identify the limitations of the agents in the system. Third, it should have predictive power. In this paper, we decompose a human-robot team consisting of a single human and multiple robots in an effort to identify such a set of metric classes. We assess the ability of this set of metric classes to (a) predict the number of robots that should be in the team and (b) predict system effectiveness. We do so by comparing predictions with actual data from a user study, which is also described. This research was funded by MIT Lincoln Laboratory.
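
    For question (a), one widely cited predictive relation is the classical fan-out equation, FO = (NT + IT) / IT, where NT is neglect time and IT is interaction time. It is shown here only as background illustration; it is not necessarily one of the metric classes the paper identifies.

    ```python
    # Classical fan-out relation: predicted number of robots one operator can
    # supervise, from neglect time (NT) and interaction time (IT).
    def fan_out(neglect_time: float, interaction_time: float) -> float:
        return (neglect_time + interaction_time) / interaction_time

    print(fan_out(neglect_time=60.0, interaction_time=15.0))  # 5.0
    ```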

    Selecting Metrics to Evaluate Human Supervisory Control Applications

    The goal of this research is to develop a methodology to select supervisory control metrics. This methodology is based on cost-benefit analyses and generic metric classes. In the context of this research, a metric class is defined as the set of metrics that quantify a certain aspect or component of a system. Generic metric classes are developed because metrics are mission-specific, but metric classes are generalizable across different missions. Cost-benefit analyses are utilized because each metric set has advantages, limitations, and costs; thus the added value of different sets for a given context can be calculated to select the set that maximizes value and minimizes costs. This report summarizes the findings of the first part of this research effort, which has focused on developing a supervisory control metric taxonomy that defines generic metric classes and categorizes existing metrics. Future research will focus on applying cost-benefit analysis methodologies to metric selection. Five main metric classes have been identified that apply to supervisory control teams composed of humans and autonomous platforms: mission effectiveness, autonomous platform behavior efficiency, human behavior efficiency, human behavior precursors, and collaborative metrics. Mission effectiveness measures how well the mission goals are achieved. Autonomous platform and human behavior efficiency measure the actions and decisions made by the humans and the automation that compose the team. Human behavior precursors measure human initial state, including certain attitudes and cognitive constructs that can be the cause of and drive a given behavior. Collaborative metrics address three different aspects of collaboration: collaboration between the human and the autonomous platform he is controlling, collaboration among humans that compose the team, and autonomous collaboration among platforms. These five metric classes have been populated with metrics and measuring techniques from the existing literature. Which specific metrics should be used to evaluate a system will depend on many factors, but as a rule of thumb, we propose that at a minimum, one metric from each class should be used to provide a multi-dimensional assessment of the human-automation team. To determine the impact on our research of not following such a principled approach, we evaluated recent large-scale supervisory control experiments conducted in the MIT Humans and Automation Laboratory. The results show that prior to adopting this metric classification approach, we were fairly consistent in measuring mission effectiveness and human behavior through such metrics as reaction times and decision accuracies. However, despite our supervisory control focus, we were remiss in gathering attention allocation metrics and collaboration metrics, and we often gathered too many correlated metrics that were redundant and wasteful. This meta-analysis of our experimental shortcomings reflects those in the general research population, in that we tended to gravitate toward popular metrics that are relatively easy to gather, without a clear understanding of exactly what aspect of the systems we were measuring and how the various metrics informed an overall research question.
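
    A minimal sketch of the five metric classes and the proposed rule of thumb (at least one metric per class) is given below. The candidate metric names and the toy cost-benefit scoring are our own illustrative assumptions, not the report's methodology.

    ```python
    # Hypothetical sketch: check class coverage and score a candidate metric set.
    METRIC_CLASSES = {
        "mission_effectiveness",
        "autonomous_platform_behavior_efficiency",
        "human_behavior_efficiency",
        "human_behavior_precursors",
        "collaborative_metrics",
    }

    def covers_all_classes(metric_set: dict[str, str]) -> bool:
        """metric_set maps a metric name to the class it quantifies."""
        return METRIC_CLASSES <= set(metric_set.values())

    def net_value(metric_set: dict[str, str], benefit: dict[str, float],
                  cost: dict[str, float]) -> float:
        """Toy cost-benefit score: total benefit minus collection cost."""
        return sum(benefit[m] - cost[m] for m in metric_set)

    candidate = {
        "victims_found": "mission_effectiveness",
        "path_efficiency": "autonomous_platform_behavior_efficiency",
        "reaction_time": "human_behavior_efficiency",
        "pre_task_workload": "human_behavior_precursors",
        "shared_situation_awareness": "collaborative_metrics",
    }
    benefit = dict.fromkeys(candidate, 1.0)
    cost = dict.fromkeys(candidate, 0.3)
    print(covers_all_classes(candidate), round(net_value(candidate, benefit, cost), 2))  # True 3.5
    ```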

    A Data-driven Approach Towards Human-robot Collaborative Problem Solving in a Shared Space

    We are developing a system for human-robot communication that enables people to communicate with robots in a natural way and is focused on solving problems in a shared space. Our strategy for developing this system is fundamentally data-driven: we use data from multiple input sources and train key components with various machine learning techniques. We developed a web application that collects data on how two humans communicate to accomplish a task, as well as a mobile laboratory instrumented to collect data on how two humans communicate to accomplish a task in a physically shared space. The data from these systems will be used to train and fine-tune the second stage of our system, in which the robot will be simulated in software. A physical robot will be used in the final stage of our project. We describe these instruments, a test suite and performance metrics designed to evaluate and automate the data-gathering process, and an evaluation of an initial data set. Comment: 2017 AAAI Fall Symposium on Natural Communication for Human-Robot Collaboration

    Mixed Initiative Systems for Human-Swarm Interaction: Opportunities and Challenges

    Human-swarm interaction (HSI) involves a number of human factors impacting human behaviour throughout the interaction. As the technologies used within HSI advance, it becomes more tempting to increase the level of swarm autonomy within the interaction to reduce the workload on humans. Yet, the prospective negative effects of high levels of autonomy on human situational awareness can hinder this process. Flexible autonomy aims at trading off these effects by changing the level of autonomy within the interaction when required, with mixed initiative combining human preferences and the automation's recommendations to select an appropriate level of autonomy at a certain point in time. However, the effective implementation of mixed-initiative systems raises fundamental questions about how to combine human preferences and automation recommendations, how to realise the selected level of autonomy, and what the future impacts on the human's cognitive states are. We explore open challenges that hamper the process of developing effective flexible autonomy. We then highlight the potential benefits of using system modelling techniques in HSI by illustrating how they provide HSI designers with an opportunity to evaluate different strategies for assessing the state of the mission and for adapting the level of autonomy within the interaction to maximise mission success metrics. Comment: Author version, accepted at the 2018 IEEE Annual Systems Modelling Conference, Canberra, Australia
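
    As one concrete (and entirely hypothetical) reading of "combining human preferences and automation recommendations", the sketch below blends the two into a single autonomy level, weighting the automation more heavily as estimated operator workload rises; the function, its parameters, and the linear blend are illustrative assumptions, not the paper's system.

    ```python
    # Illustrative mixed-initiative rule for selecting a swarm autonomy level.
    def select_autonomy_level(human_pref: float, auto_recommendation: float,
                              workload: float) -> float:
        """All inputs in [0, 1]; returns the blended autonomy level."""
        w = min(max(workload, 0.0), 1.0)   # lean on automation when busy
        return (1.0 - w) * human_pref + w * auto_recommendation

    print(select_autonomy_level(human_pref=0.3, auto_recommendation=0.8, workload=0.7))
    ```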

    Supervisory Autonomous Control of Homogeneous Teams of Unmanned Ground Vehicles, with Application to the Multi-Autonomous Ground-Robotic International Challenge

    There are many different proposed methods for supervisory control of semi-autonomous robots. There have also been numerous software simulations to determine how many robots can be successfully supervised by a single operator, a problem known as fan-out, but only a few studies have been conducted using actual robots. As evidenced by the MAGIC 2010 competition, there is increasing interest in amplifying human capacity by allowing one or a few operators to supervise a team of robotic agents. This interest motivates a more in-depth evaluation of how many autonomous or semi-autonomous robots an operator can successfully supervise. The MAGIC competition allowed two human operators to supervise a team of robots in a complex search-and-mapping operation, and it provided the best opportunity to date to study in practice the actual fan-out achievable with multiple semi-autonomous robots. The current research provides a step forward in determining fan-out by offering an initial framework for testing multi-robot teams under supervisory control. One conclusion of this research is that the proposed framework is not complex or complete enough to provide conclusive data for determining fan-out. Initial testing using operators with limited training suggests that there is no obvious pattern to the operator interaction time with robots based on the number of robots and the complexity of the tasks. The initial hypothesis, that for a given task and robot there exists an optimal robot-to-operator efficiency ratio, could not be confirmed. Rather, the data suggest that the ability of the operator is a dominant factor in studies involving operators with limited training supervising small teams of robots. It is possible that, with more extensive training, operator times would become more closely related to the number of agents and the complexity of the tasks. The work described in this thesis provides an experimental framework and a preliminary data set for other researchers to critique and build upon. As the demand increases for agent-to-operator ratios greater than one, the need to expand upon research in this area will continue to grow.
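
    Fan-out can also be estimated empirically from operator interaction logs rather than from simulation. The sketch below assumes a hypothetical log format of (robot_id, interaction_start, interaction_end) records; the thesis framework may record different fields, so this is illustration only.

    ```python
    # Sketch of an empirical fan-out estimate from interaction logs.
    def estimate_fan_out(log: list[tuple[str, float, float]],
                         session_length: float) -> float:
        per_robot: dict[str, float] = {}
        for robot, start, end in log:
            per_robot[robot] = per_robot.get(robot, 0.0) + (end - start)
        busy_fraction = sum(per_robot.values()) / session_length
        # each robot demanded busy_fraction / n of the operator's time on average;
        # fan-out is how many such robots would fully occupy the operator
        return len(per_robot) / busy_fraction if busy_fraction > 0 else float("inf")

    log = [("r1", 0.0, 20.0), ("r2", 30.0, 45.0), ("r1", 90.0, 105.0)]
    print(estimate_fan_out(log, session_length=300.0))  # 12.0
    ```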

    Attention Allocation for Human Multi-Robot Control: Cognitive Analysis based on Behavior Data and Hidden States

    Human multi-robot interaction exploits both the human operator's high-level decision-making skills and the robotic agents' vigorous computing and motion abilities. While controlling multi-robot teams, an operator's attention must constantly shift between individual robots to maintain sufficient situation awareness. To conserve an operator's attentional resources, a robot able to self-reflect on its abnormal status can help the operator focus her attention on emergent tasks rather than unneeded routine checks. With the proposed self-reflection aids, human-robot interaction becomes a queuing framework in which the robots act as clients requesting interaction and the operator acts as the server responding to these job requests. This paper examined two types of queuing schemes: a self-paced open queue identifying all robots' normal/abnormal conditions, and a forced-paced shortest-job-first (SJF) queue showing a single robot's request at a time, following the SJF approach. As a robot may misreport its failures in various situations, the effects of imperfect automation were also investigated in this paper. The results suggest that the SJF attentional scheduling approach can provide stable performance in both the primary task (locating potential targets) and the secondary task (resolving robots' failures), regardless of the system's reliability levels. However, the conventional results (e.g., number of targets marked) present only limited information about users' underlying cognitive strategies and may fail to reflect the user's true intent. As understanding users' intentions is critical to providing appropriate cognitive aids to enhance task performance, a Hidden Markov Model (HMM) is used to examine operators' underlying cognitive intent and identify the unobservable cognitive states. The HMM results demonstrate fundamental differences among the queuing mechanisms and reliability conditions. The findings suggest that HMMs can be helpful in investigating the use of human cognitive resources in multitasking environments.
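
    The forced-paced SJF scheme can be pictured as a priority queue keyed by expected resolution time, from which only the shortest pending request is displayed to the operator. The sketch below is a minimal illustration under that assumption; the request fields and durations are hypothetical, not taken from the paper's implementation.

    ```python
    # Minimal sketch of a forced-paced shortest-job-first (SJF) request queue.
    import heapq

    class SJFQueue:
        def __init__(self):
            self._heap: list[tuple[float, int]] = []  # (expected_service_time, robot_id)

        def request(self, robot_id: int, expected_service_time: float) -> None:
            heapq.heappush(self._heap, (expected_service_time, robot_id))

        def current_display(self):
            """Forced-paced: show only the single shortest pending request."""
            return self._heap[0] if self._heap else None

        def serve(self):
            """Operator resolves the displayed request."""
            return heapq.heappop(self._heap) if self._heap else None

    q = SJFQueue()
    q.request(robot_id=4, expected_service_time=12.0)
    q.request(robot_id=9, expected_service_time=5.0)
    print(q.current_display())  # (5.0, 9) -- robot 9's request is shown first
    ```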