
    Mixed Initiative Systems for Human-Swarm Interaction: Opportunities and Challenges

    Human-swarm interaction (HSI) involves a number of human factors impacting human behaviour throughout the interaction. As the technologies used within HSI advance, it becomes more tempting to increase the level of swarm autonomy within the interaction to reduce the workload on humans. Yet, the prospective negative effects of high levels of autonomy on human situational awareness can hinder this process. Flexible autonomy aims at trading off these effects by changing the level of autonomy within the interaction when required, with mixed initiatives combining human preferences and the automation's recommendations to select an appropriate level of autonomy at a given point in time. However, the effective implementation of mixed-initiative systems raises fundamental questions about how to combine human preferences and automation recommendations, how to realise the selected level of autonomy, and what the future impacts on the human's cognitive states will be. We explore open challenges that hamper the process of developing effective flexible autonomy. We then highlight the potential benefits of using system modelling techniques in HSI by illustrating how they provide HSI designers with an opportunity to evaluate different strategies for assessing the state of the mission and for adapting the level of autonomy within the interaction to maximise mission success metrics. Comment: Author version, accepted at the 2018 IEEE Annual Systems Modelling Conference, Canberra, Australia
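    The abstract leaves open how human preferences and automation recommendations should be combined. As a purely illustrative sketch (not the authors' method), the snippet below assumes a confidence-weighted blend of the human's preferred level of autonomy (LOA) and the automation's recommended LOA on a discrete 1-to-n scale; the function name, weights, and scale are all assumptions.

```python
# Illustrative only: a confidence-weighted blend of human-preferred and
# automation-recommended levels of autonomy (LOA). Not the paper's method.

def select_loa(human_pref, auto_rec, human_conf, auto_conf, n_levels=5):
    """Return a discrete LOA in 1..n_levels from two weighted inputs.

    human_pref, auto_rec : preferred/recommended LOA (1..n_levels).
    human_conf, auto_conf: non-negative weights, e.g. derived from operator
                           workload and automation self-assessment (assumed).
    """
    total = human_conf + auto_conf
    if total == 0:
        return round(human_pref)  # fall back to the human's preference
    blended = (human_conf * human_pref + auto_conf * auto_rec) / total
    return max(1, min(n_levels, round(blended)))

# Example: a heavily loaded operator prefers LOA 2, the automation
# recommends LOA 4 with higher confidence; the blend settles on LOA 3.
print(select_loa(human_pref=2, auto_rec=4, human_conf=0.3, auto_conf=0.7))
```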

    The Role of Human-Automation Consensus in Multiple Unmanned Vehicle Scheduling

    Objective: This study examined the impact of increasing automation replanning rates on operator performance and workload when supervising a decentralized network of heterogeneous unmanned vehicles. Background: Futuristic unmanned vehicle systems will invert the operator-to-vehicle ratio so that one operator can control multiple dissimilar vehicles connected through a decentralized network. Significant human-automation collaboration will be needed because of automation brittleness, but such collaboration could cause high workload. Method: Three increasing levels of replanning were tested on an existing multiple unmanned vehicle simulation environment that leverages decentralized algorithms for vehicle routing and task allocation in conjunction with human supervision. Results: Rapid replanning can cause high operator workload, ultimately resulting in poorer overall system performance. Poor performance was associated with a lack of operator consensus about when to accept the automation's suggested prompts for new plan consideration, as well as with negative attitudes toward unmanned aerial vehicles in general. Participants with video game experience tended to collaborate more with the automation, which resulted in better performance. Conclusion: In decentralized unmanned vehicle networks, operators who ignore the automation's requests for new plan consideration and impose rapid replans both increase their own workload and reduce the ability of the vehicle network to operate at its maximum capacity. Application: These findings have implications for personnel selection and training for futuristic systems involving human collaboration with decentralized algorithms embedded in networks of autonomous systems. Aurora Flight Sciences Corp.; United States. Office of Naval Research

    The Underpinnings of Workload in Unmanned Vehicle Systems

    This paper identifies and characterizes factors that contribute to operator workload in unmanned vehicle systems. Our objective is to provide a basis for developing models of workload for use in the design and operation of complex human-machine systems. In 1986, Hart developed a foundational conceptual model of workload, which formed the basis for arguably the most widely used workload measurement technique, the NASA Task Load Index. Since that time, however, there have been many advances in models and factor identification as well as workload control measures. Additionally, there is a need to further inventory and describe factors that contribute to human workload in light of technological advances, including automation and autonomy. Thus, we propose a conceptual framework for the workload construct and present a taxonomy of factors that can contribute to operator workload. These factors, referred to as workload drivers, are associated with a variety of system elements, including the environment, task, equipment, and operator. In addition, we discuss how workload moderators, such as automation and interface design, can be manipulated in order to influence operator workload. We contend that workload drivers, workload moderators, and the interactions among drivers and moderators all need to be accounted for when building complex human-machine systems.
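    The drivers-and-moderators framing lends itself to a simple data structure. The sketch below is a hypothetical encoding of such a taxonomy; the class names and example entries are assumptions, not the paper's actual inventory of factors.

```python
# Hypothetical encoding of the driver/moderator taxonomy; example entries
# are illustrative, not taken from the paper.
from dataclasses import dataclass, field
from enum import Enum

class DriverSource(Enum):          # the four system elements named above
    ENVIRONMENT = "environment"
    TASK = "task"
    EQUIPMENT = "equipment"
    OPERATOR = "operator"

@dataclass
class WorkloadDriver:
    name: str
    source: DriverSource

@dataclass
class WorkloadModerator:           # e.g. automation, interface design
    name: str
    moderated_drivers: list = field(default_factory=list)

drivers = [
    WorkloadDriver("degraded communications", DriverSource.ENVIRONMENT),
    WorkloadDriver("number of concurrent tasks", DriverSource.TASK),
    WorkloadDriver("display clutter", DriverSource.EQUIPMENT),
    WorkloadDriver("fatigue", DriverSource.OPERATOR),
]
moderators = [
    WorkloadModerator("task automation", ["number of concurrent tasks"]),
    WorkloadModerator("interface redesign", ["display clutter"]),
]
```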

    Boredom and Distraction in Multiple Unmanned Vehicle Supervisory Control

    Operators currently controlling Unmanned Aerial Vehicles report significant boredom, and such systems will likely become more automated in the future. Similar problems are found in process control, commercial aviation, and medical settings. To examine the effect of boredom in such settings, a long-duration, low task load experiment was conducted. Three low task load levels requiring operator input every 10, 20, or 30 minutes were tested in a four-hour study using a multiple unmanned vehicle simulation environment that leverages decentralized algorithms for sometimes imperfect vehicle scheduling. Reaction times to system-generated events generally decreased across the four hours, as did participants' ability to maintain directed attention. Overall, participants spent almost half of the time in a distracted state. The top performer spent the majority of time in directed and divided attention states. Unexpectedly, the second-best participant, only 1% worse than the top performer, was distracted for almost one third of the experiment, but exhibited a periodic switching strategy, allowing him to pay just enough attention to assist the automation when needed. Indeed, four of the five top performers were distracted more than one third of the time. These findings suggest that distraction due to boring, low task load environments can be effectively managed through efficient attention switching. Future work is needed to determine the optimal frequency and duration of attention state switches given various exogenous attributes, as well as individual variability. These findings have implications for the design of, and personnel selection for, supervisory control systems where operators monitor highly automated systems for long durations with only occasional or rare input. This work was supported by Aurora Flight Sciences under the ONR Science of Autonomy program as well as the Office of Naval Research (ONR) under Code 34 and MURI [grant number N00014-08-C-070].

    Assessing Operator Strategies for Real-time Replanning of Multiple Unmanned Vehicles

    Future unmanned vehicle systems will invert the operator-to-vehicle ratio so that one operator controls a decentralized network of heterogeneous unmanned vehicles. This study examines the impact of allowing an operator to adjust the rate of prompts to view automation-generated plans on system performance and operator workload. Results showed that the majority of operators chose to adjust the replan prompting rate. The initial replan prompting rate had a significant framing effect on the replan prompting rates chosen throughout a scenario. Higher initial replan prompting rates led to significantly lower system performance. Operators successfully self-regulated their task-switching behavior to moderate their workload. This research is funded by the Office of Naval Research (ONR) and Aurora Flight Sciences.

    The Impact of Heterogeneity on Operator Performance in Future Unmanned Vehicle Systems

    Recent studies have shown that, with appropriate operator decision support and with sufficient automation, inverting the multiple-operators-to-single-unmanned-vehicle control paradigm is possible. These studies, however, have generally focused on homogeneous teams of vehicles, and have not completely addressed either the manifestation of heterogeneity in vehicle teams or the effects of heterogeneity on operator capacity. An important implication of heterogeneity in unmanned vehicle teams is an increase in the diversity of possible team configurations available to each operator, as well as an increase in the diversity of possible attention allocation schemes that can be utilized by operators. To this end, this paper introduces a discrete event simulation (DES) model as a means of modeling a single operator supervising multiple heterogeneous unmanned vehicles. The DES model can be used to understand the impact of varying both vehicle team design variables (such as team composition) and operator design variables (including attention allocation strategies). The model also highlights the sub-components of operator attention allocation schemes that can impact overall performance when supervising heterogeneous unmanned vehicle teams. Results from an experimental case study are then used to validate the model and to make predictions about operator performance for various heterogeneous team configurations. The research was supported by Charles River Analytics, the Office of Naval Research (ONR), and MIT Lincoln Laboratory.
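    To make the DES idea concrete, the sketch below treats a single operator as a server handling tasks generated by a heterogeneous vehicle team. The arrival and service parameters, the first-come-first-served attention policy, and the reported statistics are illustrative assumptions rather than the paper's calibrated model.

```python
# Illustrative single-operator discrete event simulation: each vehicle type
# generates tasks at its own rate; the operator serves them one at a time.
# All parameters and the FIFO policy are assumptions, not the paper's model.
import heapq
import random

def simulate(team, horizon=3600.0, seed=0):
    """team: list of (vehicle_type, mean_interarrival_s, mean_service_s)."""
    rng = random.Random(seed)
    tasks = []  # entries are (arrival_time, vehicle_type, service_time)
    for vtype, mean_iat, mean_svc in team:
        t = rng.expovariate(1.0 / mean_iat)
        while t < horizon:
            heapq.heappush(tasks, (t, vtype, rng.expovariate(1.0 / mean_svc)))
            t += rng.expovariate(1.0 / mean_iat)

    free_at, demand, waits = 0.0, 0.0, []
    while tasks:
        arrival, vtype, service = heapq.heappop(tasks)  # FIFO attention policy
        start = max(arrival, free_at)                   # wait if operator busy
        waits.append(start - arrival)
        free_at = start + service
        demand += service

    return {"offered_load": demand / horizon,
            "mean_wait_s": sum(waits) / len(waits) if waits else 0.0}

# Example: a mixed team whose vehicle types differ in task rate and demand.
print(simulate([("UAV", 120.0, 20.0), ("UGV", 90.0, 35.0)]))
```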

    Modeling multiple human operators in the supervisory control of heterogeneous unmanned vehicles

    In the near future, large, complex, time-critical missions, such as disaster relief, will likely require multiple unmanned vehicle (UV) operators, each controlling multiple vehicles, to combine their efforts as a team. However, is the effort of the team equal to the sum of the operators' individual efforts? To help answer this question, a discrete event simulation model of a team of human operators, each performing supervisory control of multiple unmanned vehicles, was developed. The model consists of exogenous and internal inputs, operator servers, and a task allocation mechanism that disseminates events to the operators according to the team structure and the state of the system. To generate the data necessary for model building and validation, an experimental test-bed was developed in which teams of three operators controlled multiple UVs using a simulated ground control station software interface. The team structure and the interarrival time of exogenous events were both varied in a 2×2 full factorial design to gather data on the impact on system performance of changing both exogenous and internal inputs. From the data gathered, the model was able to replicate the empirical results within a 95% confidence interval for all four treatments; however, more empirical data are needed to build confidence in the model's predictive ability. United States. Office of Naval Research; United States. Air Force Office of Scientific Research
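    As a rough illustration of the model's structure (exogenous events, operator servers, and an allocation mechanism), the sketch below dispatches Poisson-arriving events to whichever of three operators is least busy. The dispatch rule and all rates are assumptions for demonstration, not the paper's fitted parameters.

```python
# Illustrative team-level queueing sketch: exogenous events are dispatched to
# operator "servers" by a simple least-busy allocation rule. Parameters and
# the allocation rule are assumptions, not the paper's fitted model.
import random

def simulate_team(n_operators=3, mean_iat=60.0, mean_service=45.0,
                  horizon=7200.0, seed=0):
    rng = random.Random(seed)
    free_at = [0.0] * n_operators        # time each operator is next idle
    waits = []
    t = rng.expovariate(1.0 / mean_iat)  # first exogenous event
    while t < horizon:
        op = min(range(n_operators), key=free_at.__getitem__)  # allocation
        start = max(t, free_at[op])
        waits.append(start - t)
        free_at[op] = start + rng.expovariate(1.0 / mean_service)
        t += rng.expovariate(1.0 / mean_iat)
    return sum(waits) / len(waits) if waits else 0.0

# Shorter interarrival times (a busier mission) should raise mean event wait.
for iat in (90.0, 60.0, 30.0):
    print(f"mean interarrival {iat:>4.0f} s -> mean wait {simulate_team(mean_iat=iat):5.1f} s")
```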

    Modeling the Impact of Operator Trust on Performance in Multiple Robot Control

    We developed a system dynamics model to simulate the impact of operator trust on performance in multiple robot control. Analysis of a simulated urban search and rescue experiment showed that operators decided to manually control the robots when they lost trust in the autonomous planner that was directing the robots. Operators who rarely used manual control performed the worst. However, the operators who most frequently used manual control reported higher workload and did not perform any better than operators with moderate manual control usage. Based on these findings, we implemented a model in which trust and performance form a feedback loop: operators perceive the performance of the system, calibrate their trust, and adjust their control of the robots. A second feedback loop incorporates the impact of trust on cognitive workload and system performance. The model was able to replicate the quantitative performance of three groups of operators within 2.3%. This model could help us gain a greater understanding of how operators build and lose trust in automation and of the impact of those changes in trust on performance and workload, which is crucial to the development of future systems involving human-automation collaboration. This research is sponsored by the Office of Naval Research and the Air Force Office of Scientific Research.
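    The two feedback loops can be illustrated with a toy difference-equation model. The equations and coefficients below are illustrative assumptions in the spirit of a system dynamics model; they are not the calibrated model from the paper.

```python
# Toy system-dynamics-style sketch of the two feedback loops described above:
# trust calibrates toward perceived automation reliability, low trust drives
# manual control, and manual control raises workload. Coefficients are
# illustrative assumptions, not the paper's calibrated values.

def step(trust, automation_reliability, dt=1.0):
    trust += dt * 0.2 * (automation_reliability - trust)  # trust calibration
    manual_control = 1.0 - trust                          # low trust -> manual
    workload = 0.3 + 0.7 * manual_control                 # manual work is costly
    performance = (automation_reliability * trust
                   + 0.5 * manual_control * (1.0 - workload))
    return trust, manual_control, workload, performance

trust = 0.5
for _ in range(10):                    # ten perception/calibration cycles
    trust, manual, workload, perf = step(trust, automation_reliability=0.8)
print(f"trust={trust:.2f} workload={workload:.2f} performance={perf:.2f}")
```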

    The Effect of Task Load, Automation Reliability, and Environment Complexity on UAV Supervisory Control Performance

    Over the last decade, military unmanned aerial vehicles (UAVs) have experienced exponential growth and now comprise over 40% of military aircraft. However, since most military UAVs require multiple operators (usually an air vehicle operator, payload operator, and mission commander), the proliferation of UAVs has created a manpower burden within the U.S. military. Fortunately, simultaneous advances in UAV automation have enabled a switch from direct control to supervisory control; future UAV operators will no longer directly control a single UAV subsystem but, rather, will control multiple advanced, highly autonomous UAVs. However, research is needed to better understand operator performance in a complex UAV supervisory control environment. The Naval Research Lab (NRL) developed SCOUT™ (Supervisory Control Operations User Testbed) to realistically simulate the supervisory control tasks that a future UAV operator will likely perform in a dynamic, uncertain setting under highly variable time constraints. The study reported herein used SCOUT to assess the effects of task load, environment complexity, and automation reliability on UAV operator performance and automation dependence. The effects of automation reliability on participants' subjective trust ratings and the possible dissociation between task load and subjective workload ratings were also explored. Eighty-one Navy student pilots completed a 34:15-minute pre-scripted SCOUT scenario, during which they managed three helicopter UAVs. To meet mission goals, they decided how best to allocate the UAVs to locate targets while they maintained communications, updated UAV parameters, and monitored their sensor feeds and airspace. After completing training on SCOUT, participants were randomly sorted into low and high automation reliability groups. Within each group, task load (the number of messages and vehicle status updates that had to be made and the number of new targets that appeared) and environment complexity (the complexity of the payload monitoring task) were varied between low and high levels over the course of the scenario. Participants' throughput, accuracy, and expected value in response to mission events were used to assess their performance. In addition, participants rated their subjective workload and fatigue using the Crew Status Survey. Finally, a four-item survey modeled after Lee and Moray's (1994) validated scale was used to assess participants' trust in the payload task automation and their self-confidence that they could have manually performed the payload task. This study contributed to the growing body of knowledge on operator performance within a UAV supervisory control setting. More specifically, it provided experimental evidence of the relationships between operator task load, environment complexity, and automation reliability and their effects on operator performance, automation dependence, and operators' subjective experiences of workload and fatigue. It also explored the relationship between automation reliability and operators' subjective trust in that automation. The immediate goal of this research effort is to contribute to the development of a suite of domain-specific performance metrics to enable the development and/or testing and evaluation of future UAV ground control stations (GCS), particularly new work support tools and data visualizations. Long-term goals also include the potential augmentation of the current Aviation Selection Test Battery (ASTB) to better select future UAV operators and operational use of the metrics to determine mission-specific manpower requirements. In the far future, UAV-specific performance metrics could also contribute to the development of a dynamic task allocation algorithm for distributing control of UAVs amongst a group of operators.
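    The three performance measures named above (throughput, accuracy, and expected value over mission events) could be computed from an event log along the following lines. The event record layout and scoring rule are assumptions for illustration; SCOUT's actual scoring is not specified here.

```python
# Hypothetical scoring of a mission-event log with throughput, accuracy, and
# expected value. The record layout and scoring rule are assumptions only.

def score(events, scenario_minutes=34.25):
    """events: dicts with 'responded' (bool), 'correct' (bool), 'value' (float)."""
    responded = [e for e in events if e["responded"]]
    throughput = len(responded) / scenario_minutes       # responses per minute
    accuracy = (sum(e["correct"] for e in responded) / len(responded)
                if responded else 0.0)
    expected_value = sum(e["value"] for e in responded if e["correct"])
    return throughput, accuracy, expected_value

events = [
    {"responded": True,  "correct": True,  "value": 10.0},
    {"responded": True,  "correct": False, "value": 5.0},
    {"responded": False, "correct": False, "value": 8.0},
]
print(score(events))
```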
