3 research outputs found

    State Event Models for the Formal Analysis of Human-Machine Interactions

    The work described in this paper was motivated by our experience with applying a framework for the formal analysis of human-machine interactions (HMI) to a realistic model of an autopilot. The framework is built around a formally defined conformance relation called "full-control" between an actual system and the mental model according to which the system is operated. Systems are well designed if they can be described by relatively simple full-control mental models for their human operators. For this reason, our framework supports the automated generation of minimal full-control mental models for HMI systems, where both the system and the mental models are described as labelled transition systems (LTS). The autopilot that we analysed was developed in the NASA Ames HMI prototyping tool ADEPT. In this paper, we describe how we extended the models that our HMI analysis framework handles to allow an adequate representation of ADEPT models. We then provide a property-preserving reduction from these extended models to LTSs, to enable the application of our LTS-based formal analysis algorithms. Finally, we briefly discuss the analyses we were able to perform on the autopilot model with our extended framework.
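
    As a rough illustration of the kind of objects involved, the sketch below encodes a labelled transition system whose actions are split into commands and observations, and checks a full-control-style conformance relation between a system and a candidate mental model by exploring pairs of states reached by the same traces. The class and function names, the determinism assumption, and the exact formulation of the check are assumptions made for this sketch, not the paper's definitions.

from collections import defaultdict, deque

class LTS:
    """Labelled transition system whose actions are split into commands
    (operator -> system) and observations (system -> operator)."""
    def __init__(self, initial, transitions, commands, observations):
        # 'transitions' is an iterable of (state, action, next_state) triples;
        # determinism per (state, action) is assumed to keep the sketch short.
        self.initial = initial
        self.commands = set(commands)
        self.observations = set(observations)
        self.succ = defaultdict(dict)
        for s, a, t in transitions:
            self.succ[s][a] = t

    def enabled(self, state, actions):
        # Actions from 'actions' that have a transition out of 'state'.
        return {a for a in self.succ[state] if a in actions}

def full_control(system, mental):
    """Explore pairs of states reached by the same traces and require that the
    mental model allows exactly the commands the system allows and at least the
    observations the system may produce (one formulation of full control)."""
    seen = {(system.initial, mental.initial)}
    queue = deque(seen)
    while queue:
        s, m = queue.popleft()
        sys_cmd = system.enabled(s, system.commands)
        sys_obs = system.enabled(s, system.observations)
        if (sys_cmd != mental.enabled(m, mental.commands)
                or not sys_obs <= mental.enabled(m, mental.observations)):
            return False  # a potential automation surprise at this state pair
        for a in sys_cmd | sys_obs:
            pair = (system.succ[s][a], mental.succ[m][a])
            if pair not in seen:
                seen.add(pair)
                queue.append(pair)
    return True

    On this encoding, a well-designed system in the paper's sense is one that admits a small mental model for which such a check succeeds.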

    Formal Analysis of Pilot Error with Agent Safety Logic

    In this paper, we show that modal logic is a valuable tool for the formal analysis of human errors in aviation safety. We develop a modal logic called Agent Safety Logic (ASL), based on epistemic logic, doxastic logic, and a safety logic grounded in a flight safety manual. We identify a class of human error, involving a specific kind of pilot knowledge failure, that has contributed to several aviation incidents, and we formally analyze it. The use of ASL suggests how future avionics might increase aircraft safety.
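
    The abstract does not reproduce ASL's syntax. Purely as an illustration of the kind of epistemic/doxastic statement involved, a knowledge failure of the sort described, a pilot acting on a false belief about an automation mode, might be written with standard modal operators as follows; the notation is generic and is not the ASL syntax defined in the paper.

% Generic epistemic/doxastic notation, not the ASL syntax from the paper.
% B_p = "the pilot believes", K_p = "the pilot knows",
% engaged = "the automation mode is engaged".
\[
  \underbrace{B_p\,\mathit{engaged} \;\wedge\; \neg\mathit{engaged}}_{\text{false belief about the mode}}
  \;\wedge\;
  \underbrace{\neg K_p\,\neg\mathit{engaged}}_{\text{the knowledge failure}}
\]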

    A formal framework for the analysis of human-machine interactions

    Automated systems are becoming more and more common, and people interact with them every day. They are also increasingly complex and exhibit more and more "smart" behaviour. One direct consequence is that it becomes harder for human operators to drive those systems in a way that is safe for both the system and the user. Due to that increasing complexity, interactions between users and automated systems are more likely to be error-prone. In particular, inadequately designed interactions may result in the user being surprised while interacting with the system. Several real accidents can be traced back to such surprising situations, including the Three Mile Island nuclear meltdown, the lethal radiation doses administered by the Therac-25 medical device, and the shooting down of Korean Air Lines Flight 007. Human-Computer Interaction (HCI) has been studied for many years by researchers from various fields, including psychology, human factors and ergonomics.

    This thesis follows a recent research direction that uses formal methods to analyse the behavioural aspects of HMI. The focus is on the actions and events exchanged between an operator and the system during an interaction. The work draws its initial inspiration from the recent work of Degani and Heymann, which addressed the problem of automatically generating adequate user interfaces for a given system model; in their work, an adequate user interface is one that ensures the operator avoids potential mode confusion. The main contribution of this thesis is an analysis framework, supported by formal methods, that can be used to assess whether a system model is prone to potential automation surprises when used by a human operator.

    The thesis develops a formalisation of automation surprises. It proposes and precisely characterises the full-control property, which captures the fact that interactions between a system and its operator are free of potential automation surprises. It also defines a property, full-control determinism, which guarantees the existence of a full-control conceptual model for a given system model. The thesis further gives a precise definition of the minimal full-control conceptual model generation problem: finding a minimal conceptual model of the system model that allows full control of it, which is only possible for fc-deterministic system models. Such full-control conceptual models can be used to generate artefacts, such as user and training manuals, that help users better understand the system. Three algorithms are proposed to solve the generation problem. The first is based on Three-Valued Deterministic Finite Automata (3DFA), which are used to characterise the full-control property in terms of traces. The second is based on a reduction approach inspired by the Paige-Tarjan algorithm for solving coarsest partition problems. The third is based on an active learning approach using the L* algorithm. The three proposed algorithms have been analysed for correctness and time complexity.

    Moreover, the proposed framework, and therefore the proposed algorithms, have been tested on various examples, among which a large case study of an autopilot. That case study comes from ADEPT, a toolset that supports designers in the early phases of the design of automation interfaces, and it also shows how the proposed methodology could be integrated with ADEPT. (FSA - Sciences de l'ingénieur) -- UCL, 201
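
    The abstract lists three generation algorithms without detailing them. As a rough illustration of the kind of reduction step involved in the second, partition-based approach, the sketch below performs a generic Moore/Paige-Tarjan-style partition refinement on a small labelled transition system: blocks of states are split until two states share a block only if every action leads them to the same block. The state names, action names, and encoding are invented for this example and are not taken from the thesis.

def refine(states, actions, transitions):
    """Generic partition refinement. 'transitions' maps (state, action) to the
    next state (absent when the action is not enabled). Returns a partition of
    'states' as a list of frozensets."""
    # Start from the trivial partition: all states in one block.
    partition = [frozenset(states)]
    changed = True
    while changed:
        changed = False
        block_of = {s: i for i, b in enumerate(partition) for s in b}
        new_partition = []
        for block in partition:
            # Group states by their "signature": the block each action leads to
            # (None when the action is not enabled in that state).
            groups = {}
            for s in block:
                sig = tuple(block_of.get(transitions.get((s, a))) for a in actions)
                groups.setdefault(sig, set()).add(s)
            if len(groups) > 1:
                changed = True
            new_partition.extend(frozenset(g) for g in groups.values())
        partition = new_partition
    return partition

# Tiny toy example with hypothetical autopilot-like modes: HDG and NAV behave
# identically here, so they end up in the same block of the final partition.
states = {"OFF", "HDG", "NAV", "APPR"}
actions = ["engage", "disengage"]
transitions = {
    ("OFF", "engage"): "HDG",
    ("HDG", "engage"): "APPR",
    ("NAV", "engage"): "APPR",
    ("HDG", "disengage"): "OFF",
    ("NAV", "disengage"): "OFF",
    ("APPR", "disengage"): "OFF",
}
print(refine(states, actions, transitions))

    In the thesis, the refinement is driven by the full-control relation rather than by plain behavioural equivalence; this sketch only shows the mechanics of splitting blocks until the partition stabilises.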