
    Aviation Automation and CNS/ATM-related Human-Technology Interface: ATSEP Competency Considerations

    Abstract: The aviation industry has undergone profound transformations since the first powered aircraft flight on December 17, 1903. An especially noticeable aspect of these transformations is automation: aviation operations are becoming increasingly automated, and the pace of change is expected to accelerate as new technologies emerge, particularly intelligent technologies that may ultimately enable fully automated, technology-based intelligent decision making. In no sphere of the aviation system has automation been more lively and sustained in recent times than in communications, navigation, surveillance/air traffic management (CNS/ATM). This development imposes far-reaching obligations on, and has wide-ranging implications for, air traffic safety electronics personnel (ATSEP), the ICAO-recognized nomenclature for personnel involved in, and proven competent at, the installation, operation, and/or maintenance of a CNS/ATM system. Based on a systematic review of the extant literature, this paper explores the concept of aviation automation within the broader conceptual and theoretical underpinnings of automation, with an emphasis on automated CNS/ATM systems. The primary aim is to examine the implications of an automated CNS/ATM environment for the roles, tasks, competence, and training of ATSEP within the framework of the safety-criticality of air traffic management. Based on arguments regarding ATSEP competency considerations in an automation-rich CNS/ATM environment, a conceptual model of ATSEP competencies and a model of competency-based, human-technology ATSEP task flow are proposed.

    Autonomous, Context-Sensitive, Task Management Systems and Decision Support Tools I: Human-Autonomy Teaming Fundamentals and State of the Art

    Recent advances in artificial intelligence, machine learning, data mining and extraction, and especially sensor technology have made vast amounts of digital data and information available and enabled the development of advanced automated reasoners. This creates the opportunity to develop a robust, dynamic task manager and decision support tool that is context-sensitive and integrates information from a wide array of on-board and off-aircraft sources: a tool that monitors systems and the overall flight situation, anticipates information needs, prioritizes tasks appropriately, keeps pilots well informed, and is nimble and able to adapt to changing circumstances. This is the first of two companion reports exploring issues associated with autonomous, context-sensitive task management and decision support tools. In this first report, we explore fundamental issues associated with the development of an integrated, dynamic flight information and automation management system. We discuss human factors issues pertaining to information automation and review the current state of the art of pilot information management and decision support tools. We also explore how effective human-human team behavior and expectations could be extended to teams involving humans and automation or autonomous systems.

    Selecting Metrics to Evaluate Human Supervisory Control Applications

    The goal of this research is to develop a methodology for selecting supervisory control metrics. The methodology is based on cost-benefit analyses and generic metric classes. In the context of this research, a metric class is defined as the set of metrics that quantify a certain aspect or component of a system. Generic metric classes are developed because metrics are mission-specific, whereas metric classes generalize across different missions. Cost-benefit analyses are used because each metric set has advantages, limitations, and costs; the added value of different sets for a given context can therefore be calculated to select the set that maximizes value and minimizes cost. This report summarizes the findings of the first part of this research effort, which focused on developing a supervisory control metric taxonomy that defines generic metric classes and categorizes existing metrics. Future research will focus on applying cost-benefit analysis methodologies to metric selection. Five main metric classes have been identified that apply to supervisory control teams composed of humans and autonomous platforms: mission effectiveness, autonomous platform behavior efficiency, human behavior efficiency, human behavior precursors, and collaborative metrics. Mission effectiveness measures how well the mission goals are achieved. Autonomous platform and human behavior efficiency measure the actions and decisions made by the humans and the automation that compose the team. Human behavior precursors measure the human's initial state, including attitudes and cognitive constructs that can cause and drive a given behavior. Collaborative metrics address three different aspects of collaboration: collaboration between the human and the autonomous platform being controlled, collaboration among the humans that compose the team, and autonomous collaboration among platforms. These five metric classes have been populated with metrics and measuring techniques from the existing literature. Which specific metrics should be used to evaluate a system depends on many factors, but as a rule of thumb we propose that, at a minimum, one metric from each class should be used to provide a multi-dimensional assessment of the human-automation team. To determine the impact of not following such a principled approach on our own research, we evaluated recent large-scale supervisory control experiments conducted in the MIT Humans and Automation Laboratory. The results show that prior to adopting this metric classification approach, we were fairly consistent in measuring mission effectiveness and human behavior through metrics such as reaction times and decision accuracies. However, despite our supervisory control focus, we were remiss in gathering attention allocation and collaboration metrics, and we often gathered too many correlated metrics that were redundant and wasteful. This meta-analysis of our experimental shortcomings reflects those of the general research population: we tended to gravitate toward popular metrics that are relatively easy to gather, without a clear understanding of exactly what aspect of the system we were measuring and how the various metrics informed an overall research question.
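
    The one-metric-per-class rule of thumb lends itself to a brief illustration. The sketch below is not taken from the report: the five metric classes are, but the candidate metrics, the benefit/cost scores, and the greedy per-class selection are hypothetical placeholders for the fuller cost-benefit methodology the authors propose.

```python
# Illustrative sketch only: metric names, benefit/cost numbers, and the greedy
# selection rule are assumptions; only the five metric classes come from the report.
from dataclasses import dataclass

METRIC_CLASSES = [
    "mission effectiveness",
    "autonomous platform behavior efficiency",
    "human behavior efficiency",
    "human behavior precursors",
    "collaborative metrics",
]

@dataclass
class Metric:
    name: str
    metric_class: str
    benefit: float  # hypothetical evaluation value added (0-1)
    cost: float     # hypothetical cost to collect and analyze (0-1)

def select_metrics(candidates):
    """One-metric-per-class rule of thumb: keep the highest net-value
    (benefit - cost) candidate found in each metric class."""
    selected = []
    for cls in METRIC_CLASSES:
        in_class = [m for m in candidates if m.metric_class == cls]
        if in_class:
            selected.append(max(in_class, key=lambda m: m.benefit - m.cost))
    return selected

candidates = [
    Metric("mission completion rate", "mission effectiveness", 0.9, 0.2),
    Metric("platform replan frequency", "autonomous platform behavior efficiency", 0.6, 0.3),
    Metric("operator reaction time", "human behavior efficiency", 0.7, 0.2),
    Metric("operator decision accuracy", "human behavior efficiency", 0.8, 0.2),
    Metric("attention allocation", "human behavior precursors", 0.7, 0.5),
    Metric("human-platform communication rate", "collaborative metrics", 0.5, 0.4),
]

for m in select_metrics(candidates):
    print(f"{m.metric_class}: {m.name} (net value {m.benefit - m.cost:.2f})")
```

    A fuller selection would also weigh correlations between candidate metrics, to avoid the redundancy the authors describe, rather than scoring each candidate independently.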

    Evidence Report, Risk of Inadequate Design of Human and Automation/Robotic Integration

    The success of future exploration missions depends, even more than today, on effective integration of humans and technology (automation and robotics). This will not emerge by chance, but by design. Both crew and ground personnel will need to do more demanding tasks in more difficult conditions, amplifying the costs of poor design and the benefits of good design. This report looks at the importance of good design and the risks from poor design from several perspectives: 1) If the relevant functions needed for a mission are not identified, then designs of technology and its use by humans are unlikely to be effective: critical functions will be missing and irrelevant functions will mislead or drain attention. 2) If functions are not distributed effectively among the (multiple) participating humans and automation/robotic systems, later design choices can do little to repair this: additional unnecessary coordination work may be introduced, workload may be redistributed in ways that create problems, limited human attentional resources may be wasted, and the capabilities of both humans and technology left underused. 3) If the design does not promote accurate understanding of the capabilities of the technology, the operators will not use the technology effectively: the system may be switched off in conditions where it would be effective, or used for tasks or in contexts where its effectiveness may be very limited. 4) If an ineffective interaction design is implemented and put into use, a wide range of problems can ensue. Many involve lack of transparency into the system: operators may be unable, or find it very difficult, to determine a) the current state and changes of state of the automation or robot, b) the current state and changes in state of the system being controlled or acted on, and c) what actions by the human or by the system had what effects. 5) If the human interfaces for operation and control of robotic agents are not designed to accommodate the unique points of view and operating environments of both the human and the robotic agent, then effective human-robot coordination cannot be achieved.

    Towards an Expert System for the Analysis of Computer Aided Human Performance


    Architecting Human Operator Trust in Automation to Improve System Effectiveness in Multiple Unmanned Aerial Vehicles (UAV)

    Current Unmanned Aerial System (UAS) designs require multiple operators for each vehicle, partly due to imperfect automation matched with the complex operational environment. This study examines the effectiveness of future UAS automation by explicitly addressing the human/machine trust relationship during system architecting. A pedigreed engineering model of trust between human and machine was developed and applied to a laboratory-developed micro-UAS for Special Operations. This investigation answered three primary questions. Can previous research be used to create a useful trust model for systems engineering? How can trust be considered explicitly within the DoD Architecture Framework? Can the utility of architecting trust be demonstrated on a given UAS architecture? By addressing operator trust explicitly during architecture development, system designers can incorporate more effective automation. The results provide the Systems Engineering community with a new modeling technique for early human systems integration.

    Agent Transparency for Intelligent Target Identification in the Maritime Domain, and its impact on Operator Performance, Workload and Trust

    Objective: To examine how increasing the transparency of an intelligent maritime target identification system affects operator performance, workload, and trust in the intelligent agent. Background: Previous research has shown that operator accuracy improves with increased transparency of an intelligent agent's decisions and recommendations, although this can come at the cost of increased workload and response time; not all studies have found this. Prior studies have predominantly focussed on route planning and navigation, and it is unclear whether the benefits of agent transparency apply to other tasks such as target identification. Method: Twenty-seven participants were required to identify a number of tracks based on a set of identification criteria and the recommendation of an intelligent agent at three transparency levels in a repeated-measures design. The intelligent agent generated an identification recommendation for each track with different levels of transparency information displayed, and participants were required to determine the identity of the track. For each transparency level, 70% of the recommendations made by the intelligent agent were correct, with the incorrect recommendations due to additional information that the agent was not aware of, such as information from the ship's radar. Participants' identification accuracy and identification time were measured, and surveys on operator subjective workload and subjective trust in the intelligent agent were collected for each transparency level. Results: Increased transparency information improved the operators' sensitivity to the accuracy of the agent's decisions and produced a greater tendency to accept the agent's decision. Increased agent transparency facilitated human-agent teaming without increasing workload or response time when correctly accepting the intelligent agent's decision, but increased the response time when rejecting the intelligent agent's incorrect decisions. Participants also reported a higher level of trust when the intelligent agent was more transparent. Conclusion: This study shows the ability of agent transparency to improve performance without increasing workload. Greater agent transparency is also beneficial in building operator trust in the agent. Application: The current study can inform the design and use of uninhabited vehicles and intelligent agents in the maritime context for target identification. It also demonstrates that providing greater transparency of intelligent agents can improve human-agent teaming performance for a previously unstudied task and domain, and hence suggests broader applicability for the design of intelligent agents. Thesis (M.Psych(Organisational & Human Factors)) -- University of Adelaide, School of Psychology, 201

    Designing for Appropriate Reliance: The Roles of AI Uncertainty Presentation, Initial User Decision, and User Demographics in AI-Assisted Decision-Making

    Appropriate reliance is critical to achieving synergistic human-AI collaboration. For instance, when users over-rely on AI assistance, their human-AI team performance is bounded by the model's capability. This work studies how the presentation of model uncertainty may steer users' decision-making toward fostering appropriate reliance. Our results demonstrate that showing the calibrated model uncertainty alone is inadequate. Rather, calibrating model uncertainty and presenting it in a frequency format allow users to adjust their reliance accordingly and help reduce the effect of confirmation bias on their decisions. Furthermore, the critical nature of our skin cancer screening task skews participants' judgment, causing their reliance to vary depending on their initial decision. Additionally, stepwise multiple regression analyses revealed how user demographics such as age and familiarity with probability and statistics influence human-AI collaborative decision-making. We discuss the potential for model uncertainty presentation, initial user decision, and user demographics to be incorporated in designing personalized AI aids for appropriate reliance. Comment: Accepted to CSCW202
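
    To make the frequency-format idea concrete, here is a minimal sketch of converting a calibrated probability into a natural-frequency statement. The paper does not specify this wording, rounding, or reference-class size; all of them are assumptions for illustration.

```python
# Hypothetical illustration of frequency-format uncertainty presentation;
# the wording and 100-case reference class are assumptions, not the paper's design.
def frequency_format(calibrated_probability: float, reference_class: int = 100) -> str:
    """Render a calibrated probability as a natural-frequency statement."""
    count = round(calibrated_probability * reference_class)
    return (f"For {count} out of {reference_class} similar cases, "
            f"the AI's assessment of this kind was correct.")

# Example: a calibrated 73% confidence shown in frequency form.
print(frequency_format(0.73))
```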

    Whose Drive Is It Anyway? Using Multiple Sequential Drives to Establish Patterns of Learned Trust, Error Cost, and Non-Active Trust Repair While Considering Daytime and Nighttime Differences as a Proxy for Difficulty

    Semi-autonomous driving is a complex task domain with a broad range of problems to consider. The human operator's role in semi-autonomous driving is crucial because safety and performance depend on how the operator interacts with the system. Drive difficulty has not been extensively studied in automated driving systems and thus is not well understood. Additionally, few studies have examined trust development, decline, or repair over multiple drives with automated driving systems. The goal of this study was to test the effect of perceived driving difficulty on human trust in the automation and how trust is dynamically learned, reduced by automation errors, and repaired over a seven-drive series. The experiment used a 2 (task difficulty: easy vs. difficult) x 3 (error type: no error, takeover request [TOR], failure) x 7 (drive) mixed design. Lighting condition was used as a proxy for driving difficulty because decreased visibility of potential hazards could make monitoring the road difficult. During the experiment, 122 undergraduate participants drove an automated vehicle seven times in either a daytime (i.e., "easy") or nighttime (i.e., "difficult") condition. Participants experienced a critical hazard event in the fourth drive, in which the automation perfectly avoided the hazard ("no error" condition), issued a takeover request ("TOR" condition), or failed to notice and respond to the hazard ("failure" condition). Participants completed trust ratings after each drive to establish trust development. Results showed that trust improved through the first three drives, demonstrating proper trust calibration. The TOR and automation failure conditions saw significant decreases in trust after the critical hazard in drive four, whereas trust was unaffected in the no-error condition. Trust naturally repaired in the TOR and failure conditions after the critical event but did not recover to pre-event levels. There was no evidence of perceived difficulty differences between the daytime and nighttime conditions, and accordingly no trust differences were found between lighting conditions. This study demonstrated how trust develops and responds to errors in automated driving systems, informing future research on trust repair interventions and the design of automated driving systems.