
    Buzz or Beep? How Mode of Alert Influences Driver Takeover Following Automation Failure

    Highly automated vehicles require drivers to remain aware enough to take over during critical events. Driver distraction is a key factor that prevents drivers from reacting adequately, so there is a need for an alert that helps drivers regain situational awareness and act quickly and successfully should a critical event arise. This study examines two aspects of alerts that could facilitate driver takeover: mode (auditory and tactile) and direction (towards and away). Auditory alerts appear to be somewhat more effective than tactile alerts, though both modes produce significantly faster reaction times than no alert. Alerts moving towards the driver also appear to be more effective than alerts moving away from the driver. Future research should examine how multimodal alerts differ from single-mode alerts, and whether higher-fidelity alerts influence takeover times. Masters Thesis, Human Systems Engineering, 201

    How to keep drivers engaged while supervising driving automation? A literature survey and categorization of six solution areas

    This work aimed to organise recommendations for keeping people engaged during human supervision of driving automation, encouraging a safe and acceptable introduction of automated driving systems. First, heuristic knowledge of human factors, ergonomics, and psychological theory was used to propose solution areas to human supervisory control problems of sustained attention. Driving and non-driving research examples were drawn upon to substantiate the solution areas. Automotive manufacturers might (1) avoid this supervisory role altogether, (2) reduce it in objective ways or (3) alter its subjective experience, (4) apply conditioning and learning principles such as gamification and/or selection/training techniques, (5) support internal driver cognitive processes and mental models, and/or (6) leverage externally situated information regarding relations between the driver, the driving task, and the driving environment. Second, a cross-domain literature survey of influential human-automation interaction research was conducted on how to maintain engagement/attention in supervisory control. The solution areas (via numeric theme codes) were found to be reliably applied from independent rater categorisations of research recommendations. Areas (5) and (6) were addressed by around 70% or more of the studies, areas (2) and (4) by around 50% of the studies, and areas (3) and (1) by less than around 20% and 5%, respectively. The present contribution offers a guiding organisational framework towards improving human attention while supervising driving automation.
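The inter-rater reliability of categorisations like those above is commonly quantified with a chance-corrected agreement statistic such as Cohen's kappa. A minimal sketch follows; the theme-code ratings are illustrative inputs, not data from the survey:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of items where the two raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Illustrative solution-area codes (1-6) assigned by two hypothetical raters
ratings_a = [5, 6, 5, 2, 4, 6, 5, 3, 6, 2]
ratings_b = [5, 6, 4, 2, 4, 6, 5, 3, 6, 1]
print(round(cohens_kappa(ratings_a, ratings_b), 2))  # → 0.75
```

Values above roughly 0.6 are conventionally read as substantial agreement, which is the kind of threshold a reliability claim like the one in this abstract rests on.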

    Identifying mode confusion potential in software design

    Thesis (S.M.), Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2000. Includes bibliographical references (leaves 53-54). By Mario A. Rodríguez.

    Assessing V and V Processes for Automation with Respect to Vulnerabilities to Loss of Airplane State Awareness

    Automation has contributed substantially to the sustained improvement of aviation safety by minimizing the physical workload of the pilot and increasing operational efficiency. Nevertheless, in complex and highly automated aircraft, automation also has unintended consequences. As systems become more complex and the authority and autonomy (A&A) of the automation increase, human operators become relegated to the role of a system supervisor or administrator, a passive role not conducive to maintaining engagement and airplane state awareness (ASA). The consequence is that flight crews can come to over-rely on the automation, become less engaged in the human-machine interaction, and lose awareness of the automation mode under which the aircraft is operating. Likewise, the complexity of the system and automation modes may lead to poor understanding of the interaction between a mode of automation and a particular system configuration or phase of flight. These and other examples of mode confusion often lead to mismanaging the aircraft's energy state or the aircraft deviating from the intended flight path. This report examines methods for assessing whether, and how, operational constructs properly assign authority and autonomy in a safe and coordinated manner, with particular emphasis on assuring adequate airplane state awareness by the flight crew and air traffic controllers in off-nominal and/or complex situations.

    Enhancing user experience and safety in the context of automated driving through uncertainty communication

    Operators of highly automated driving systems may exhibit behaviour characteristic of overtrust due to an insufficient awareness of automation fallibility. Consequently, situation awareness in critical situations is reduced and safe driving performance following emergency takeovers is impeded. Previous research has indicated that conveying system uncertainties may alleviate these issues. However, existing approaches require drivers to attend to the uncertainty information with focal attention, likely resulting in missed changes when engaged in non-driving-related tasks. This research project expands on existing work regarding uncertainty communication in the context of automated driving. Specifically, it aims to investigate the implications of conveying uncertainties under consideration of non-driving-related tasks and, based on the outcomes, to develop and evaluate an uncertainty display that enhances both user experience and driving safety. In a first step, the impact of visually conveying uncertainties was investigated under consideration of workload, trust, monitoring behaviour, non-driving-related tasks, takeover performance, and situation awareness. For this, an anthropomorphic visual uncertainty display located in the instrument cluster was developed. While the hypothesised benefits for trust calibration and situation awareness were confirmed, the results indicate that visually conveying uncertainties leads to increased perceived effort due to a higher frequency of monitoring glances. Building on these findings, peripheral awareness displays were explored as a means of conveying uncertainties without the need for focused attention, in order to reduce monitoring glances. As a prerequisite for developing such a display, a systematic literature review was conducted to identify evaluation methods and criteria, which were then consolidated into a comprehensive framework.
Grounded in this framework, a peripheral awareness display for uncertainty communication was developed and subsequently compared with the initially proposed visual anthropomorphic uncertainty display in a driving simulator study. Eye tracking and subjective workload data indicate that the peripheral awareness display reduces the monitoring effort relative to the visual display, while driving performance and trust data highlight that the benefits of uncertainty communication are maintained. Further, this research project addresses the implications of increasing the functional detail of uncertainty information. Results of a driving simulator study indicate that workload in particular should be considered when increasing the functional detail of uncertainty information. Expanding upon this approach, an augmented reality display concept was developed and a set of visual variables was explored in a forced-choice sorting task to assess their ordinal characteristics. Changes in colour hue and animation-based variables in particular received high preference ratings and were ordered consistently from low to high uncertainty. This research project has contributed a series of novel insights and ideas to the field of human factors in automated driving. It confirmed that conveying uncertainties improves trust calibration and situation awareness, but highlighted that using a visual display lessens the positive effects. Addressing this shortcoming, a peripheral awareness display was designed applying a dedicated evaluation framework. Compared with the previously employed visual display, it decreased monitoring glances and, consequently, perceived effort. Further, an augmented reality-based uncertainty display concept was developed to minimise the workload increments associated with increases in the functional detail of uncertainty information.

    Survey of Human Models for Verification of Human-Machine Systems

    We survey the landscape of human operator modeling, ranging from the early cognitive models developed in artificial intelligence to more recent formal task models developed for model-checking of human-machine interactions. We review human performance modeling and human factors studies in the context of aviation, and models of how the pilot interacts with automation in the cockpit. The purpose of the survey is to assess the applicability of available state-of-the-art models of human operators for the design, verification, and validation of future safety-critical aviation systems that exhibit higher levels of autonomy but still require human operators in the loop. These systems include single-pilot aircraft and NextGen air traffic management. We discuss the gaps in existing models and propose future research to address them.

    Modeling and Evaluating Pilot Performance in NextGen: Review of and Recommendations Regarding Pilot Modeling Efforts, Architectures, and Validation Studies

    NextGen operations are associated with a variety of changes to the national airspace system (NAS), including changes to the allocation of roles and responsibilities among operators and automation, the use of new technologies and automation, additional information presented on the flight deck, and the entire concept of operations (ConOps). In the transition to NextGen airspace, aviation and air operations designers need to consider the implications of design or system changes on human performance and the potential for error. To ensure the continued safety of the NAS, it will be necessary for researchers to evaluate design concepts and potential NextGen scenarios well before implementation. One approach for such evaluations is through human performance modeling. Human performance models (HPMs) provide effective tools for predicting and evaluating operator performance in systems. HPMs offer significant advantages over empirical, human-in-the-loop testing in that (1) they allow detailed analyses of systems that have not yet been built, (2) they offer great flexibility for extensive data collection, and (3) they do not require experimental participants, and thus can offer cost and time savings. HPMs differ in their ability to predict performance and safety with NextGen procedures, equipment, and ConOps. Models also vary in how they approach human performance (e.g., some focus on cognitive processing, others focus on discrete tasks performed by a human, while others consider perceptual processes), and in terms of their associated validation efforts. The objectives of this research effort were to support the Federal Aviation Administration (FAA) in identifying HPMs that are appropriate for predicting pilot performance in NextGen operations, to provide guidance on how to evaluate the quality of different models, and to identify gaps in pilot performance modeling research that could guide future research opportunities.
This research effort is intended to help the FAA evaluate pilot modeling efforts and select the appropriate tools for future modeling efforts to predict pilot performance in NextGen operations.

    A Predictive Model for Human-Unmanned Vehicle Systems : Final Report

    Advances in automation are making it possible for a single operator to control multiple unmanned vehicles (UVs). This capability is desirable in order to reduce the operational costs of human-UV systems (HUVS), extend human capabilities, and improve system effectiveness. However, the high complexity of these systems introduces many significant challenges to system designers. To help understand and overcome these challenges, high-fidelity computational models of the HUVS must be developed. These models should have two capabilities. First, they must be able to describe the behavior of the various entities in the team, including both the human operator and the UVs. Second, they must be able to predict how changes in the HUVS and its mission will alter the performance characteristics of the system. In this report, we describe our work toward developing such a model. Via user studies, we show that our model can describe the behavior of a HUVS consisting of a single human operator and multiple independent UVs with homogeneous capabilities. We also evaluate the model's ability to predict how changes in the team size, the human-UV interface, the UVs' autonomy levels, and operator strategies affect the system's performance. Prepared for MIT Lincoln Laboratory.