
    The Impact of Automation and Stress on Human Performance in UAV Operation

    The United States Air Force (USAF) has increasing needs for unmanned aerial vehicle (UAV) operators. Automation may enable a single operator to manage multiple UAVs at the same time. Multi-UAV operation may require a unique set of skills, and the need for new operators calls for targeting new populations for recruitment. The objective of this research is to develop a simulation environment for studying the role of individual differences in UAV operation under different task configurations and to investigate predictors of performance and stress. Primarily, the study examined the impact of levels of automation (LOAs), as well as task demands, on task performance, stress, and operator reliance on automation. Two intermediate LOAs were employed for two surveillance tasks included in the simulation of UAV operation. Task demand was manipulated via the high and low frequency of events associated with additional tasks included in the simulation. The task demand and LOA manipulations influenced task performance generally as expected. The task demand manipulations elicited higher subjective distress and workload. LOAs did not affect operator workload but did affect reliance behavior. The study also examined the role of individual differences in simulated UAV operation. A variety of individual difference factors were associated with task performance and with subjective stress response. Video gaming experience was linked to lower distress and better performance, suggesting possible transfer of skills. Some gender differences were revealed in stress response and task performance, but all gender effects became non-significant when gaming experience was controlled. Generally, the effects of personality were consistent with previous studies, apart from some novel findings with the performance metrics. Additionally, task demand was found to moderate the influence of personality factors on stress response and performance metrics; specifically, conscientiousness was associated with higher subjective engagement and performance when demands were higher. This study supports future research that aims to improve the dynamic interfaces in UAV operation, optimize operator reliance on automation, and identify individuals with the highest aptitude for multi-UAV control.
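    To make the moderation finding concrete, the sketch below shows how such an effect is typically tested: an ordinary least squares model with a conscientiousness-by-task-demand interaction term. The synthetic data, variable names, and effect sizes are illustrative assumptions, not the study's materials or results.

```python
# A minimal sketch (not the study's actual analysis) of testing whether task
# demand moderates the effect of conscientiousness on subjective engagement.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "conscientiousness": rng.normal(0, 1, n),   # standardized trait score (hypothetical)
    "task_demand": rng.integers(0, 2, n),       # 0 = low demand, 1 = high demand
})
# Simulate the hypothesized pattern: the trait matters more under high demand.
df["engagement"] = (0.2 * df.conscientiousness
                    + 0.5 * df.task_demand
                    + 0.4 * df.conscientiousness * df.task_demand
                    + rng.normal(0, 1, n))

# The interaction term (conscientiousness:task_demand) carries the moderation test.
model = smf.ols("engagement ~ conscientiousness * task_demand", data=df).fit()
print(model.summary().tables[1])
```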

    The Effect of Task Load, Automation Reliability, and Environment Complexity on UAV Supervisory Control Performance

    Over the last decade, military unmanned aerial vehicles (UAVs) have experienced exponential growth and now comprise over 40% of military aircraft. However, since most military UAVs require multiple operators (usually an air vehicle operator, payload operator, and mission commander), the proliferation of UAVs has created a manpower burden within the U.S. military. Fortunately, simultaneous advances in UAV automation have enabled a switch from direct control to supervisory control; future UAV operators will no longer directly control a single UAV subsystem but, rather, will control multiple advanced, highly autonomous UAVs. However, research is needed to better understand operator performance in a complex UAV supervisory control environment. The Naval Research Lab (NRL) developed SCOUT™ (Supervisory Control Operations User Testbed) to realistically simulate the supervisory control tasks that a future UAV operator will likely perform in a dynamic, uncertain setting under highly variable time constraints. The study reported herein used SCOUT to assess the effects of task load, environment complexity, and automation reliability on UAV operator performance and automation dependence. The effects of automation reliability on participants’ subjective trust ratings and the possible dissociation between task load and subjective workload ratings were also explored. Eighty-one Navy student pilots completed a pre-scripted, 34-minute 15-second SCOUT scenario, during which they managed three helicopter UAVs. To meet mission goals, they decided how best to allocate the UAVs to locate targets while they maintained communications, updated UAV parameters, and monitored their sensor feeds and airspace. After completing training on SCOUT, participants were randomly sorted into low and high automation reliability groups. Within each group, task load (the number of messages and vehicle status updates that had to be made and the number of new targets that appeared) and environment complexity (the complexity of the payload monitoring task) were varied between low and high levels over the course of the scenario. Participants’ throughput, accuracy, and expected value in response to mission events were used to assess their performance. In addition, participants rated their subjective workload and fatigue using the Crew Status Survey. Finally, a four-item survey modeled after Lee and Moray’s (1994) validated scale was used to assess participants’ trust in the payload task automation and their self-confidence that they could have manually performed the payload task. This study contributed to the growing body of knowledge on operator performance within a UAV supervisory control setting. More specifically, it provided experimental evidence of the relationship between operator task load, task complexity, and automation reliability and their effects on operator performance, automation dependence, and operators’ subjective experiences of workload and fatigue. It also explored the relationship between automation reliability and operators’ subjective trust in said automation. The immediate goal of this research effort is to contribute to the development of a suite of domain-specific performance metrics to enable the development and/or testing and evaluation of future UAV ground control stations (GCS), particularly new work support tools and data visualizations. Long-term goals also include the potential augmentation of the current Aviation Selection Test Battery (ASTB) to better select future UAV operators and operational use of the metrics to determine mission-specific manpower requirements. Further in the future, UAV-specific performance metrics could also contribute to the development of a dynamic task allocation algorithm for distributing control of UAVs amongst a group of operators.
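    The throughput, accuracy, and expected value measures mentioned above could be computed from a mission event log roughly as in the sketch below; the event fields and payoff values are assumptions for illustration, not the SCOUT scoring rules.

```python
# A hedged sketch of how the three reported performance measures might be derived
# from a log of mission events.
from dataclasses import dataclass

@dataclass
class EventResponse:
    correct: bool             # was the operator's response to the event correct?
    response_time_s: float
    value_if_correct: float   # mission payoff assumed for this event
    value_if_wrong: float

def accuracy(events):
    return sum(e.correct for e in events) / len(events)

def throughput(events, scenario_minutes):
    # correct responses handled per minute of scenario time
    return sum(e.correct for e in events) / scenario_minutes

def expected_value(events):
    # total payoff actually earned across mission events
    return sum(e.value_if_correct if e.correct else e.value_if_wrong for e in events)

log = [EventResponse(True, 4.2, 10.0, -2.0),
       EventResponse(False, 9.8, 10.0, -2.0),
       EventResponse(True, 6.1, 5.0, 0.0)]
print(accuracy(log), throughput(log, scenario_minutes=34.25), expected_value(log))
```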

    Workload-based Automated Interface Mode Selection

    The increase in the size of the Air Force's Unmanned Aerial Vehicle (UAV) fleet, and the desire to reduce operational manning requirements, has led to an interest in Multiple Aircraft Control (MAC) technology. The MAC concept is highly prone to operator overload, as it requires operators to maintain awareness of multiple aircraft. To mitigate the potential for operator overload, this research introduces an agent into the system interface to assume responsibility for managing automation mode selection. The agent uses a novel dynamic scheme for determining how and when to introduce automation assistance to the operator. By using a reinforcement learning approach, the interface agent is able to correlate an operator's workload and performance levels. This allows the agent to determine the most appropriate times to introduce automation assistance. By automating tasks at appropriate times, the agent helps the system regulate the operator's workload, striking the best possible balance between operator awareness and overall performance while reducing the potential for operator overload.
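    A minimal sketch of the reinforcement learning idea described above follows: a tabular Q-learning agent that observes a discretized operator workload level and learns when to hand tasks to automation. The state and action sets, reward function, and simulated workload dynamics are illustrative assumptions rather than the thesis's implementation.

```python
# Tabular Q-learning sketch: choose when to introduce automation based on workload.
import random

STATES = ["low", "medium", "high"]   # discretized operator workload
ACTIONS = ["manual", "automate"]     # whether to switch tasks to automation
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def simulated_reward(state, action):
    # Reward automating only when workload is high; keep the operator in the
    # loop (for awareness) when workload is low. Purely illustrative values.
    if state == "high":
        return 1.0 if action == "automate" else -1.0
    if state == "low":
        return 1.0 if action == "manual" else -0.5
    return 0.2  # medium workload: either choice is acceptable

state = random.choice(STATES)
for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    reward = simulated_reward(state, action)
    next_state = random.choice(STATES)   # workload evolution modeled as random here
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state

# Learned policy: which mode the agent prefers at each workload level.
for s in STATES:
    print(s, max(ACTIONS, key=lambda a: Q[(s, a)]))
```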

    Agent Transparency for Intelligent Target Identification in the Maritime Domain, and its impact on Operator Performance, Workload and Trust

    Objective: To examine how increasing the transparency of an intelligent maritime target identification system impacts operator performance, workload, and trust in the intelligent agent. Background: Previous research has shown that operator accuracy improves with increased transparency of an intelligent agent’s decisions and recommendations. This can come at the cost of increased workload and response time, although this has not been found by all studies. Prior studies have predominantly focused on route planning and navigation, and it is unclear whether the benefits of agent transparency would apply to other tasks such as target identification. Method: Twenty-seven participants were required to identify a number of tracks based on a set of identification criteria and the recommendation of an intelligent agent at three transparency levels in a repeated-measures design. The intelligent agent generated an identification recommendation for each track with different levels of transparency information displayed, and participants were required to determine the identity of the track. For each transparency level, 70% of the recommendations made by the intelligent agent were correct, with incorrect recommendations due to additional information that the agent was not aware of, such as information from the ship’s radar. Participants’ identification accuracy and identification time were measured, and surveys on operator subjective workload and subjective trust in the intelligent agent were collected for each transparency level. Results: The results indicated that increased transparency information improved the operators’ sensitivity to the accuracy of the agent’s decisions and produced a greater tendency to accept the agent’s decision. Increased agent transparency facilitated human-agent teaming without increasing workload or response time when correctly accepting the intelligent agent’s decision, but increased response time when rejecting the intelligent agent’s incorrect decisions. Participants also reported a higher level of trust when the intelligent agent was more transparent. Conclusion: This study shows the ability of agent transparency to improve performance without increasing workload. Greater agent transparency is also beneficial in building operator trust in the agent. Application: The current study can inform the design and use of uninhabited vehicles and intelligent agents in the maritime context for target identification. It also demonstrates that providing greater transparency of intelligent agents can improve human-agent teaming performance for a previously unstudied task and domain, and hence suggests broader applicability for the design of intelligent agents. Thesis (M.Psych(Organisational & Human Factors)) -- University of Adelaide, School of Psychology, 201
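    One common way to quantify sensitivity to the accuracy of an agent's decisions is a signal detection analysis; the sketch below computes d' from accept/reject decisions, treating acceptance of a correct recommendation as a hit and acceptance of an incorrect one as a false alarm. The counts are invented, and it is an assumption, not a statement from the abstract, that this particular measure was used.

```python
# Hedged sketch: sensitivity (d') of an operator's accept/reject decisions
# about an intelligent agent's recommendations.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction avoids infinite z-scores when a rate is 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Illustrative counts for a block where 70% of recommendations were correct,
# as in the study; the specific numbers are invented.
print(d_prime(hits=18, misses=3, false_alarms=4, correct_rejections=5))
```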

    Architecting Human Operator Trust in Automation to Improve System Effectiveness in Multiple Unmanned Aerial Vehicles (UAV)

    Current Unmanned Aerial System (UAS) designs require multiple operators for each vehicle, partly due to imperfect automation paired with a complex operational environment. This study examines the effectiveness of future UAS automation by explicitly addressing the human/machine trust relationship during system architecting. A pedigreed engineering model of trust between human and machine was developed and applied to a laboratory-developed micro-UAS for Special Operations. This investigation answered three primary questions. Can previous research be used to create a useful trust model for systems engineering? How can trust be considered explicitly within the DoD Architecture Framework? Can the utility of architecting trust be demonstrated on a given UAS architecture? By addressing operator trust explicitly during architecture development, system designers can incorporate more effective automation. The results provide the Systems Engineering community a new modeling technique for early human systems integration.

    Boredom and Distraction in Multiple Unmanned Vehicle Supervisory Control

    Operators currently controlling Unmanned Aerial Vehicles report significant boredom, and such systems will likely become more automated in the future. Similar problems are found in process control, commercial aviation, and medical settings. To examine the effect of boredom in such settings, a long-duration, low task load experiment was conducted. Three low task load levels requiring operator input every 10, 20, or 30 minutes were tested in a four-hour study using a multiple unmanned vehicle simulation environment that leverages decentralized algorithms for sometimes imperfect vehicle scheduling. Reaction times to system-generated events generally decreased across the four hours, as did participants’ ability to maintain directed attention. Overall, participants spent almost half of the time in a distracted state. The top performer spent the majority of time in directed and divided attention states. Unexpectedly, the second-best participant, only 1% worse than the top performer, was distracted for almost one-third of the experiment, but exhibited a periodic switching strategy that allowed him to pay just enough attention to assist the automation when needed. Indeed, four of the five top performers were distracted more than one-third of the time. These findings suggest that distraction due to boring, low task load environments can be effectively managed through efficient attention switching. Future work is needed to determine the optimal frequency and duration of attention state switches given various exogenous attributes, as well as individual variability. These findings have implications for the design of, and personnel selection for, supervisory control systems where operators monitor highly automated systems for long durations with only occasional or rare input. This work was supported by Aurora Flight Sciences under the ONR Science of Autonomy program as well as the Office of Naval Research (ONR) under Code 34 and MURI [grant number N00014-08-C-070].
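    The attention-state results above amount to a descriptive analysis of a labeled timeline; a minimal sketch follows, computing the fraction of a session spent in each state and the number of state switches. The state labels and the four-hour synthetic session are assumptions for illustration, not the experiment's data.

```python
# Descriptive sketch: time-in-state fractions and attention-state switches.
from collections import Counter
from itertools import groupby
import random

STATES = ["directed", "divided", "distracted"]
random.seed(1)
# Synthetic 4-hour session labeled once per second (weights are arbitrary).
timeline = [random.choices(STATES, weights=[3, 2, 4])[0] for _ in range(4 * 3600)]

counts = Counter(timeline)
total = len(timeline)
for state in STATES:
    print(f"{state}: {counts[state] / total:.1%} of the session")

# Number of attention-state switches (consecutive runs minus one), a crude
# proxy for the periodic switching strategy described above.
runs = [state for state, _ in groupby(timeline)]
print("state switches:", len(runs) - 1)
```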

    A Realistic Simulation for Swarm UAVs and Performance Metrics for Operator User Interfaces

    Robots have been utilized to support disaster mitigation missions through exploration of areas that are either unreachable or hazardous for human rescuers [1]. The great potential for robotics in disaster mitigation has been recognized by the research community, and over the last decade much research has focused on developing robotic systems for this purpose. In this thesis, we present a description of the usage and classification of UAVs and of the performance metrics that affect the control of UAVs. We also present new contributions to the UAV simulator developed by ECSL and RRL: the integration of the flight dynamics of the Hummingbird quadcopter, and distance optimization using a genetic algorithm.
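    As an illustration of the distance-optimization contribution, the sketch below evolves a waypoint visiting order with a simple genetic algorithm (order crossover, swap mutation, truncation selection) to minimize total flight distance. The waypoints, operators, and parameters are assumptions, not the simulator's actual code.

```python
# Genetic-algorithm sketch: minimize the total length of a closed route over waypoints.
import math
import random

random.seed(0)
WAYPOINTS = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(12)]

def tour_length(order):
    return sum(math.dist(WAYPOINTS[order[i]], WAYPOINTS[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def crossover(a, b):
    # Order crossover: keep a slice of parent a, fill the rest in parent b's order.
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    fill = [g for g in b if g not in child]
    for k in range(len(a)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def mutate(order, rate=0.1):
    # Swap two waypoints with a small probability.
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

population = [random.sample(range(len(WAYPOINTS)), len(WAYPOINTS)) for _ in range(50)]
for _ in range(200):
    population.sort(key=tour_length)
    parents = population[:10]   # simple truncation selection
    population = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                            for _ in range(40)]
print("best route length:", round(tour_length(min(population, key=tour_length)), 1))
```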

    Selecting Metrics to Evaluate Human Supervisory Control Applications

    The goal of this research is to develop a methodology for selecting supervisory control metrics. This methodology is based on cost-benefit analyses and generic metric classes. In the context of this research, a metric class is defined as the set of metrics that quantify a certain aspect or component of a system. Generic metric classes are developed because metrics are mission-specific, but metric classes are generalizable across different missions. Cost-benefit analyses are utilized because each metric set has advantages, limitations, and costs; thus the added value of different sets for a given context can be calculated to select the set that maximizes value and minimizes costs. This report summarizes the findings of the first part of this research effort, which has focused on developing a supervisory control metric taxonomy that defines generic metric classes and categorizes existing metrics. Future research will focus on applying cost-benefit analysis methodologies to metric selection. Five main metric classes have been identified that apply to supervisory control teams composed of humans and autonomous platforms: mission effectiveness, autonomous platform behavior efficiency, human behavior efficiency, human behavior precursors, and collaborative metrics. Mission effectiveness measures how well the mission goals are achieved. Autonomous platform and human behavior efficiency measure the actions and decisions made by the humans and the automation that compose the team. Human behavior precursors measure the human's initial state, including certain attitudes and cognitive constructs that can be the cause of and drive a given behavior. Collaborative metrics address three different aspects of collaboration: collaboration between the human and the autonomous platform being controlled, collaboration among the humans that compose the team, and autonomous collaboration among platforms. These five metric classes have been populated with metrics and measuring techniques from the existing literature. Which specific metrics should be used to evaluate a system will depend on many factors, but as a rule of thumb we propose that, at a minimum, one metric from each class should be used to provide a multi-dimensional assessment of the human-automation team. To determine the impact on our research of not following such a principled approach, we evaluated recent large-scale supervisory control experiments conducted in the MIT Humans and Automation Laboratory. The results show that prior to adopting this metric classification approach, we were fairly consistent in measuring mission effectiveness and human behavior through such metrics as reaction times and decision accuracies. However, despite our supervisory control focus, we were remiss in gathering attention allocation metrics and collaboration metrics, and we often gathered too many correlated metrics that were redundant and wasteful. This meta-analysis of our experimental shortcomings reflects those of the general research population, in that we tended to gravitate toward popular metrics that are relatively easy to gather, without a clear understanding of exactly what aspect of the system we were measuring and how the various metrics informed an overall research question.
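    A minimal sketch of the cost-benefit selection idea follows: score candidate metrics by estimated value and cost, then pick the highest net-value metric from each of the five classes so that every class is represented. The candidate metrics, values, and costs below are invented placeholders, not figures from the report.

```python
# Cost-benefit sketch: choose one metric per class that maximizes (value - cost).
CANDIDATES = {
    "mission_effectiveness":        [("targets_found", 9, 2), ("mission_score", 8, 3)],
    "platform_behavior_efficiency": [("replan_count", 6, 2), ("fuel_used", 5, 1)],
    "human_behavior_efficiency":    [("reaction_time", 8, 2), ("decision_accuracy", 8, 4)],
    "human_behavior_precursors":    [("nasa_tlx_workload", 7, 3), ("eye_tracking_attention", 9, 8)],
    "collaborative":                [("trust_survey", 6, 2), ("team_comm_rate", 5, 3)],
}

def select_metric_set(candidates):
    # One metric per class, chosen to maximize net value within that class,
    # so the resulting set covers all five classes.
    return {metric_class: max(options, key=lambda m: m[1] - m[2])
            for metric_class, options in candidates.items()}

for metric_class, (name, value, cost) in select_metric_set(CANDIDATES).items():
    print(f"{metric_class}: {name} (net value {value - cost})")
```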