    State Event Models for the Formal Analysis of Human-Machine Interactions

    The work described in this paper was motivated by our experience with applying a framework for the formal analysis of human-machine interactions (HMI) to a realistic model of an autopilot. The framework is built around a formally defined conformance relation called "full-control" between an actual system and the mental model according to which the system is operated. Systems are well designed if they can be described by relatively simple, full-control mental models for their human operators. For this reason, our framework supports automated generation of minimal full-control mental models for HMI systems, where both the system and the mental models are described as labelled transition systems (LTS). The autopilot that we analysed has been developed in the NASA Ames HMI prototyping tool ADEPT. In this paper, we describe how we extended the models that our HMI analysis framework handles to allow adequate representation of ADEPT models. We then provide a property-preserving reduction from these extended models to LTSs, to enable application of our LTS-based formal analysis algorithms. Finally, we briefly discuss the analyses we were able to perform on the autopilot model with our extended framework.
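
    To make the notions concrete, the sketch below encodes two small labelled transition systems in Python and checks a simplified full-control-style condition between a system and a mental model: at every reachable pair of states, the commands offered by the mental model must coincide with those accepted by the system, and every observation the system can produce must be expected by the mental model. The class names, the encoding, and this simplified formulation are illustrative assumptions, not the framework or algorithms described in the paper.

        from collections import deque

        class LTS:
            """A labelled transition system whose actions are split into operator
            commands and machine observations (illustrative encoding)."""
            def __init__(self, initial, transitions, commands, observations):
                self.initial = initial
                self.trans = transitions          # state -> list of (action, next_state)
                self.commands = commands
                self.observations = observations

            def enabled(self, state, kind):
                return {a for a, _ in self.trans.get(state, []) if a in kind}

            def step(self, state, action):
                return [s for a, s in self.trans.get(state, []) if a == action]

        def full_control(system, model):
            """Explore joint executions; require that enabled commands coincide and that
            every observation the system can produce is expected by the mental model."""
            seen, queue = set(), deque([(system.initial, model.initial)])
            while queue:
                s, m = queue.popleft()
                if (s, m) in seen:
                    continue
                seen.add((s, m))
                if system.enabled(s, system.commands) != model.enabled(m, system.commands):
                    return False   # the operator could issue a command the system rejects, or miss one
                if not system.enabled(s, system.observations) <= model.enabled(m, system.observations):
                    return False   # the system can produce an observation the operator does not expect
                for a in system.enabled(s, system.commands) | system.enabled(s, system.observations):
                    for s2 in system.step(s, a):
                        for m2 in model.step(m, a):
                            queue.append((s2, m2))
            return True

        # Toy autopilot-like pair: the operator engages a mode, the machine reports capture.
        system = LTS("idle", {"idle": [("engage", "armed")], "armed": [("capture", "idle")]},
                     commands={"engage"}, observations={"capture"})
        mental = LTS("m0", {"m0": [("engage", "m1")], "m1": [("capture", "m0")]},
                     commands={"engage"}, observations={"capture"})
        print(full_control(system, mental))   # True for this toy pair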

    A Method for Revealing and Addressing Security Vulnerabilities in Cyber-physical Systems by Modeling Malicious Agent Interactions with Formal Verification

    Several cyber-attacks on the cyber-physical systems (CPS) that monitor and control critical infrastructure were publicly announced over the last few years. Almost without exception, the proposed security solutions focus on preventing unauthorized access to the industrial control systems (ICS) at various levels (the defense-in-depth approach). While useful, this does not address the problem of making the systems more capable of responding to the malicious actions of an attacker once they have gained access to the system. The first step in making an ICS more resilient to an attacker is identifying, during system design, the cyber security vulnerabilities the attacker can exploit. This paper presents a method that reveals cyber security vulnerabilities in ICS through formal modeling of the system and malicious agents. Including the malicious agent in the analysis of an existing system identifies security vulnerabilities that are missed by traditional functional model checking.
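
    The core idea can be illustrated with a small explicit-state reachability check in Python: a toy tank-level controller is composed with a plant model and, optionally, a malicious agent that spoofs the level sensor. A purely functional check finds no overflow, while the same check with the attacker included reaches the unsafe state. The tank scenario, the names, and the hand-rolled search are illustrative assumptions; the paper's method uses formal models and a model checker rather than this sketch.

        from collections import deque

        def step(state, attack_enabled):
            """One step of a toy tank controller and plant; the malicious agent may
            additionally start spoofing the level sensor (everything here is an assumed toy)."""
            level, spoofed = state
            reported = 2 if spoofed else level          # a spoofed sensor always reads "low"
            pump_on = reported < 8                      # controller keeps filling below the set point
            next_level = min(level + 1, 10) if pump_on else level
            moves = [(next_level, spoofed)]
            if attack_enabled and not spoofed:
                moves.append((next_level, True))        # attacker begins spoofing on this step
            return moves

        def overflow_reachable(initial, attack_enabled):
            """Explicit-state reachability: can the composed model reach a tank overflow?"""
            seen, queue = {initial}, deque([initial])
            while queue:
                state = queue.popleft()
                if state[0] >= 10:                      # unsafe state: overflow
                    return True
                for nxt in step(state, attack_enabled):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
            return False

        print(overflow_reachable((2, False), attack_enabled=False))  # False: the functional model alone looks safe
        print(overflow_reachable((2, False), attack_enabled=True))   # True: sensor spoofing reveals the vulnerability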
