
    Current Concepts and Trends in Human-Automation Interaction

    The purpose of this panel was to provide a general overview and discussion of some of the most current and controversial concepts and trends in human-automation interaction. The panel was composed of eight researchers and practitioners, all well-known experts in the area, who offered differing views on a variety of human-automation topics. The concepts and trends discussed include: general taxonomies regarding stages and levels of automation and function allocation, individualized adaptive automation, automation-induced complacency, economic rationality and the use of automation, the potential utility of false alarms, the influence of different types of false alarms on trust and reliance, and a system-wide theory of trust in multiple automated aids.

    Cyber security for smart grid: a human-automation interaction framework

    Power grid cyber security is becoming a vital concern as we move from the traditional power grid toward the modern Smart Grid (SG). Achieving the smart grid objectives requires the development of Information Technology (IT) infrastructure and computer-based automation, and this development makes the smart grid more prone to cyber attacks. This paper presents a cyber security strategy for the smart grid based on Human-Automation Interaction (HAI) theory and, in particular, the Adaptive Autonomy (AA) concept. We propose an adaptive Level of Automation (LOA) for Supervisory Control and Data Acquisition (SCADA) systems; this level of automation is adapted to the environmental conditions presented in this paper. The paper presents a brief background, the methodology design, the implementation, and a discussion. Index Terms—smart grid, human-automation interaction, adaptive autonomy, cyber security, performance shaping factor
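
    The abstract frames adaptive autonomy as adjusting the SCADA level of automation as environmental conditions change. As a purely illustrative sketch (the factor names, thresholds, and 1-5 LOA scale below are assumptions, not the paper's scheme), the idea can be expressed as a rule that maps condition scores to an LOA:

```python
# Hedged sketch of adaptive autonomy: adjust the SCADA level of automation (LOA)
# as environmental conditions (performance shaping factors) change.
# Factor names, thresholds, and the 1-5 LOA scale are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Conditions:
    cyber_threat: float       # 0 (none) .. 1 (active attack suspected)
    operator_workload: float  # 0 (idle) .. 1 (overloaded)
    grid_stress: float        # 0 (nominal) .. 1 (critical)


def adaptive_loa(c: Conditions) -> int:
    """Return a level of automation from 1 (manual) to 5 (fully automatic)."""
    if c.cyber_threat > 0.7:
        return 2  # keep the human operator tightly in the loop during suspected attacks
    if c.operator_workload > 0.8 or c.grid_stress > 0.8:
        return 5  # automate aggressively when the operator cannot keep up
    return 3      # shared control under nominal conditions


print(adaptive_loa(Conditions(cyber_threat=0.9, operator_workload=0.4, grid_stress=0.2)))
```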

    A Toolset for Supporting Iterative Human-Automation Interaction in Design

    The addition of automation has greatly extended humans' capability to accomplish tasks, including those that are difficult, complex and safety critical. The majority of Human-Automation Interaction (HAI) results in more efficient and safe operations; however, certain unexpected automation behaviors or "automation surprises" can be frustrating and, in certain safety-critical operations (e.g. transportation, manufacturing control, medicine), may result in injuries or the loss of life (Mellor, 1994; Leveson, 1995; FAA, 1995; BASI, 1998; Sheridan, 2002). This paper describes the development of a design tool that enables the rapid development and evaluation of automation prototypes. The ultimate goal of the work is to provide a design platform upon which automation surprise vulnerability analyses can be integrated.

    The automation design advisor tool (ADAT): Development and validation of a model‐based tool to support flight deck automation design for NextGen operations

    NextGen aviation will require an even greater reliance on automation than current‐day operations. Therefore, systems with problems in human–automation interaction must be identified and resolved early, well before they are introduced into operation. This paper describes a research and software development effort to build a prototype automation design advisor tool (ADAT) for flight deck automation. This tool uses models of human performance to identify perceptual, cognitive, and action‐related inefficiencies in the design of flight management systems. Aviation designers can use the tool to evaluate and compare potential flight deck automation designs and to identify potential human–automation interaction concerns. Designers can compare different flight management systems in terms of specific features and their ability to support pilot performance. ADAT provides specific, research-based guidance for resolving problematic design issues. It was specifically designed to be flexible enough for both current-day technologies and revolutionary NextGen designs. © 2012 Wiley Periodicals, Inc. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/92456/1/20389_ftp.pd

    Converging Measures and an Emergent Model: A Meta-Analysis of Human-Automation Trust Questionnaires

    A significant challenge to measuring human-automation trust is the amount of construct proliferation, models, and questionnaires with highly variable validation. However, all agree that trust is a crucial element of technological acceptance, continued usage, fluency, and teamwork. Herein, we synthesize a consensus model for trust in human-automation interaction by performing a meta-analysis of validated and reliable trust survey instruments. To accomplish this objective, this work identifies the most frequently cited and best-validated human-automation and human-robot trust questionnaires, as well as the most well-established factors, which form the dimensions and antecedents of such trust. To reduce both confusion and construct proliferation, we provide a detailed mapping of terminology between questionnaires. Furthermore, we perform a meta-analysis of the regression models that emerged from those experiments which used multi-factorial survey instruments. Based on this meta-analysis, we demonstrate a convergent, experimentally validated model of human-automation trust. This convergent model establishes an integrated framework for future research. It identifies the current boundaries of trust measurement and where further investigation is necessary. We close by discussing how to choose and design an appropriate trust survey instrument. By comparing, mapping, and analyzing well-constructed trust survey instruments, a consensus structure of trust in human-automation interaction is identified, and a more complete, widely applicable basis for measuring trust emerges. It integrates the academic idea of trust with the colloquial, common-sense one. Given the increasingly recognized importance of trust, especially in human-automation interaction, this work leaves us better positioned to understand and measure it. Comment: 44 pages, 6 figures. Submitted, in part, to ACM Transactions on Human-Robot Interaction (THRI).
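
    The abstract does not state which pooling method the meta-analysis uses; as one hedged illustration of how effect sizes from trust questionnaires can be aggregated, the sketch below pools correlation coefficients with Fisher's z transform and inverse-variance weights (study names and values are hypothetical):

```python
# Hedged sketch: a minimal fixed-effect pooling of correlation coefficients via
# Fisher's z transform, illustrating the kind of aggregation a questionnaire
# meta-analysis performs. Study names and values are hypothetical.
import math

# (correlation between a trust factor and reported reliance, sample size)
studies = {"study_a": (0.42, 60), "study_b": (0.55, 120), "study_c": (0.37, 45)}


def pool_correlations(studies):
    """Return the pooled correlation using inverse-variance weights in z-space."""
    num, den = 0.0, 0.0
    for r, n in studies.values():
        z = math.atanh(r)   # Fisher z transform of r
        w = n - 3           # inverse of the sampling variance 1/(n - 3)
        num += w * z
        den += w
    return math.tanh(num / den)  # back-transform the weighted mean


print(f"pooled r = {pool_correlations(studies):.3f}")
```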

    AAGLMES: an intelligent expert system realization of adaptive autonomy using generalized linear models

    We earlier introduced a novel framework for the realization of Adaptive Autonomy (AA) in human-automation interaction (HAI). This study presents an expert system for the realization of AA using a Support Vector Machine (SVM), referred to as the Adaptive Autonomy Support Vector Machine Expert System (AASVMES). The proposed system prescribes proper Levels of Automation (LOAs) for various environmental conditions, here modeled as Performance Shaping Factors (PSFs), based on rules extracted from experts' judgments. The SVM is used as the expert system's inference engine. A practical list of PSFs and the judgments of experts at GTEDC (the Greater Tehran Electric Distribution Company) serve as the expert system database. The results of the implemented AASVMES on GTEDC's network are evaluated against the GTEDC experts' judgment. The evaluations show that AASVMES can predict the proper LOA for GTEDC's Utility Management Automation (UMA) system, which changes as the PSFs change, thus providing an adaptive LOA scheme for UMA. Keywords—Support Vector Machine (SVM); Adaptive Autonomy (AA); Expert System; Human-Automation Interaction (HAI); Experts' Judgment; Power System; Distribution Automation; Smart Grid
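
    As a rough sketch of the inference step the abstract describes (not the authors' implementation), an SVM can be trained on expert-judged examples that map performance shaping factor vectors to discrete levels of automation; the feature names, LOA scale, and data below are illustrative assumptions:

```python
# Hedged sketch: train an SVM to map performance shaping factor (PSF) vectors to
# discrete levels of automation (LOA), mimicking the expert-judgment-driven
# inference the abstract describes. Features, LOA scale, and data are assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each row: hypothetical PSF ratings, e.g. [operator workload, fault rate,
# network complexity, time pressure], scored 0-10 by domain experts.
X = np.array([
    [2, 1, 3, 2],
    [3, 2, 2, 3],
    [5, 4, 4, 5],
    [6, 5, 5, 6],
    [8, 7, 7, 8],
    [9, 8, 8, 9],
])
# Expert-judged LOA for each condition (e.g. 1 = manual ... 5 = full automation).
y = np.array([2, 2, 3, 3, 5, 5])

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X, y)

# Prescribe an LOA for a new environmental condition.
new_condition = np.array([[7, 6, 6, 7]])
print("recommended LOA:", model.predict(new_condition)[0])
```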

    Automation transparency: Implications of uncertainty communication for human-automation interaction and interfaces

    Operators of highly automated driving systems may exhibit behaviour characteristic of overtrust due to an insufficient awareness of automation fallibility. Consequently, situation awareness in critical situations is reduced and safe driving performance following emergency takeovers is impeded. A driving simulator study was used to assess the impact of dynamically communicating system uncertainties on monitoring, trust, workload, takeovers, and physiological responses. The uncertainty information was conveyed visually using a stylised heart beat combined with a numerical display, and users were engaged in a visual search task. Multilevel analysis results suggest that uncertainty communication helps operators calibrate their trust and gain situation awareness prior to critical situations, resulting in safer takeovers. Additionally, eye tracking data indicate that operators adjust their gaze behaviour in correspondence with the level of uncertainty. However, conveying uncertainties via a visual display significantly increases operator workload and impedes users in the execution of non-driving related tasks.
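
    The abstract reports multilevel analyses of repeated-measures simulator data. A minimal sketch of that kind of model, assuming a random intercept per participant and hypothetical variable names (not the study's actual measures), could look like this with statsmodels:

```python
# Hedged sketch: a random-intercept multilevel (mixed-effects) model of the kind
# used for repeated-measures simulator data. The variables and simulated data
# are assumptions for illustration, not the study's actual dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants, n_trials = 20, 8
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_trials),
    "uncertainty": rng.uniform(0, 1, n_participants * n_trials),  # displayed uncertainty level
})
# Simulated outcome: higher displayed uncertainty -> more gaze time on the road.
df["gaze_on_road"] = 0.3 + 0.5 * df["uncertainty"] + rng.normal(0, 0.1, len(df))

# Random intercept per participant; fixed effect of displayed uncertainty.
model = smf.mixedlm("gaze_on_road ~ uncertainty", df, groups=df["participant"]).fit()
print(model.summary())
```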

    History and future of human-automation interaction

    We review the history of human-automation interaction research, assess its current status and identify future directions. We start by reviewing articles that were published on this topic in the International Journal of Human-Computer Studies during the last 50 years. We find that over the years, automated systems have been used more frequently (1) in time-sensitive or safety-critical settings, (2) in embodied and situated systems, and (3) by non-professional users. Looking to the future, there is a need for human-automation interaction research to focus on (1) issues of function and task allocation between humans and machines, (2) issues of trust, incorrect use, and confusion, (3) the balance between focus, divided attention and attention management, (4) the need for interdisciplinary approaches to cover breadth and depth, (5) regulation and explainability, (6) ethical and social dilemmas, (7) allowing a human and humane experience, and (8) radically different human-automation interaction.

    Automation in Surgery: The Surgeons' Perspective on Human Factors Issues of Image-Guided Navigation

    Image-guided navigation (IGN) systems support the surgeon in navigating through the patient's anatomy. Previous research on IGN has focused on technical feasibility and clinical applications. Yet, as the introduction of IGN corresponds to a partial automation of the surgeon's task, well-known issues of human-automation interaction might play a crucial role for the success of IGN as well. The present study represents a first attempt to assess the impact of IGN on four key issues of human-automation interaction, i.e., workload, situation awareness, trust, and skill degradation, from the surgeons' perspective. A nation-wide survey of 213 German surgeons from 94 different hospitals was conducted. Results revealed (1) a workload shift due to IGN rather than a reduction of workload, (2) benefits of IGN with respect to situation awareness, (3) comparatively high levels of perceived reliability, trust and reliance, and (4) skill degradation as a possible risk, albeit only for inexperienced surgeons.

    Human-automation interaction for lunar landing aimpoint redesignation

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2008. Includes bibliographical references (leaves 86-89). Human-automation interactions are a critical area of research in systems with modern automation. The decision-making portion of tasks presents a special challenge for human-automation interaction because of the many factors that play a role in the decision-making process. This is prominent in human spaceflight, where the astronaut must continually interact with the vehicle systems. In future lunar landings, astronauts working in conjunction with automated systems will need to select a safe and achievable landing aimpoint. Ultimately, this decision could risk the safety of the astronauts and the success of their mission. Careful study is needed to ascertain the roles of both the human and the automation and how design can best support the decision-making process. The task of landing on the moon was first achieved by the Apollo program in 1969, but technological advances will provide future landings with a greater variety and extensibility of mission goals. The modern task of selecting a landing aimpoint is known as landing point redesignation (LPR), and this work capitalizes on an existing LPR algorithm to explore how altering the level of automation affects landing point selection. An experiment was designed to study the decision-making process with three different levels of automation. In addition, the effect of including a human-generated goal that was not captured by the automation was studied. The experimental results showed that subjects generally used the same decision strategies across the different levels of automation, and that higher levels of automation eliminated earlier parts of the decision strategy and allowed subjects to select a landing aimpoint more quickly. In scenarios with the additional human goal, subjects tended to sacrifice significant safety margins in order to achieve proximity to the point of interest; higher levels of automation allowed them to maintain high safety margins while also achieving their external goal. Thus, it is concluded that, with a display design supporting human goals in a decision-making task, automated decision aids that make recommendations and assist communication of the automation's processes are highly beneficial. by Jennifer M. Heedham. S.M.