    Blaming in component-based real-time systems

    In component-based safety-critical real-time systems it is crucial to determine which component(s) caused the violation of a required system-level safety property, be it to issue a precise alert, or to determine liability of component providers. In this paper we present an approach for blaming in real-time systems whose component specifications are given as timed automata. The analysis is based on a single execution trace violating a safety property P. We formalize blaming using counterfactual reasoning ("what would have been the outcome if component C had behaved correctly?") to distinguish component failures that actually contributed to the outcome from failures that had no impact on the violation of P. We then show how to effectively implement blaming by reducing it to a model-checking problem for timed automata, and demonstrate the feasibility of our approach on the models of a pacemaker and of a chemical reactor.
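
    The counterfactual question can be made concrete with a minimal, self-contained Python sketch on a toy trace. Everything below (the Step record, spec_delay, DEADLINE, the pipeline-of-delays model) is an illustrative assumption, not the paper's formalism, and a naive re-execution of the trace stands in for the model-checking reduction described in the abstract.

```python
# Toy counterfactual blaming on a single timed trace (illustrative assumptions only).
# Assumption: each component contributes one processing delay to a pipeline, and the
# safety property P is "the end-to-end response time stays below DEADLINE".
from dataclasses import dataclass

DEADLINE = 10.0  # hypothetical system-level deadline for property P

@dataclass
class Step:
    component: str
    observed_delay: float  # delay seen in the violating trace
    spec_delay: float      # worst-case delay allowed by the component's specification

def violates(trace):
    """P is violated when the accumulated delay exceeds the deadline."""
    return sum(s.observed_delay for s in trace) > DEADLINE

def counterfactual(trace, component):
    """Replace the suspect component's observed behaviour by its specified behaviour."""
    return [Step(s.component, s.spec_delay, s.spec_delay) if s.component == component else s
            for s in trace]

def blamed(trace):
    """Blame a faulty component if correcting it alone would have avoided the violation."""
    faulty = {s.component for s in trace if s.observed_delay > s.spec_delay}
    return {c for c in faulty if not violates(counterfactual(trace, c))}

if __name__ == "__main__":
    trace = [
        Step("sensor", 2.0, 2.0),      # behaves as specified
        Step("controller", 7.0, 3.0),  # late, and its lateness is decisive
        Step("actuator", 2.5, 2.0),    # late, but fixing it alone would not help
    ]
    assert violates(trace)
    print(blamed(trace))  # {'controller'}
```

    Per the abstract, the actual analysis does not re-simulate traces like this; it encodes the counterfactual question as a model-checking problem over the components' timed automata.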

    A general framework for blaming in component-based systems

    In component-based safety-critical embedded systems it is crucial to determine the cause(s) of the violation of a safety property, be it to issue a precise alert, to steer the system into a safe state, or to determine liability of component providers. In this paper we present an approach to blame components based on a single execution trace violating a safety property P. The diagnosis relies on counterfactual reasoning ("what would have been the outcome if component C had behaved correctly?") to distinguish component failures that actually contributed to the outcome from failures that had little or no impact on the violation of P.
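
    Read as a formula, the counterfactual criterion shared by this line of work can be sketched as follows. This is a loose paraphrase for orientation only, not the authors' exact definition, which handles sets of components and a precise notion of admissible alternative behaviours.

```latex
% Loose paraphrase (assumed notation): a faulty component C is blamed for the violation
% of P observed on trace \sigma if some counterfactual variant of \sigma, in which C
% behaves according to its specification, satisfies P.
\[
  \mathrm{Blame}(C, \sigma) \;\iff\;
  \sigma \not\models P
  \;\wedge\;
  \exists\, \sigma' \in \mathit{CF}(\sigma, C).\ \sigma' \models P
\]
% where CF(\sigma, C) denotes the traces obtained from \sigma by replacing C's observed
% behaviour with behaviour allowed by C's specification.
```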

    Program Actions as Actual Causes: A Building Block for Accountability

    Protocols for tasks such as authentication, electronic voting, and secure multiparty computation ensure desirable security properties if agents follow their prescribed programs. However, if some agents deviate from their prescribed programs and a security property is violated, it is important to hold agents accountable by determining which deviations actually caused the violation. Motivated by these applications, we initiate a formal study of program actions as actual causes. Specifically, we define in an interacting program model what it means for a set of program actions to be an actual cause of a violation. We present a sound technique for establishing program actions as actual causes. We demonstrate the value of this formalism in two ways. First, we prove that violations of a specific class of safety properties always have an actual cause. Thus, our definition applies to relevant security properties. Second, we provide a cause analysis of a representative protocol designed to address weaknesses in the current public key certification infrastructure.
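
    As a toy illustration of the counterfactual flavour of "actual cause" (deliberately not the paper's definition, which works in an interacting program model and handles nondeterminism and joint causation), the sketch below assumes we can re-run a protocol with any chosen set of deviating agents forced back to their prescribed programs, and asks which smallest corrections avoid the violation. The agent names and the violated() oracle are invented for the example.

```python
# Toy "which deviations actually caused the violation?" check (invented scenario).
from itertools import combinations

# Agents that deviated from their prescribed programs in the observed run.
DEVIANTS = ["ca", "logger", "client"]

def violated(corrected):
    """Hypothetical outcome of a re-run in which the agents in `corrected` behave
    honestly again: here the violation needs both a mis-issuing CA and a logger
    that skips its check, while the client's deviation is irrelevant."""
    still_deviant = set(DEVIANTS) - set(corrected)
    return {"ca", "logger"} <= still_deviant

def actual_causes():
    """Smallest sets of deviating agents whose correction alone restores the property."""
    for k in range(1, len(DEVIANTS) + 1):
        causes = [set(c) for c in combinations(DEVIANTS, k) if not violated(c)]
        if causes:
            return causes
    return []

if __name__ == "__main__":
    print(actual_causes())  # [{'ca'}, {'logger'}] -- the client's deviation is not a cause
```

    A naive but-for test like this already separates the harmless client deviation from the decisive ones; the paper's contribution is a sound definition and technique that, among other things, guarantees that violations of a specific class of safety properties always have an actual cause.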

    Debugging of Behavioural Models using Counterexample Analysis

    Model checking is an established technique for automatically verifying that a model satisfies a given temporal property. When the model violates the property, the model checker returns a counterexample, which is a sequence of actions leading to a state where the property is not satisfied. Understanding this counterexample for debugging the specification is a complicated task for several reasons: (i) the counterexample can contain a large number of actions, (ii) the debugging task is mostly achieved manually, and (iii) the counterexample does not explicitly highlight the source of the bug that is hidden in the model. This article presents a new approach that improves the usability of model checking by simplifying the comprehension of counterexamples. To do so, we first extract all the counterexamples from the model. Second, we define an analysis algorithm that identifies actions that make the model skip from incorrect to correct behaviours, making these actions relevant from a debugging perspective. Third, we develop a set of abstraction techniques to extract these actions from counterexamples. Our approach is fully automated by a tool we implemented, and was applied to real-world case studies from various application areas for evaluation purposes.
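
    The analysis step can be illustrated, in a much simplified form, by the sketch below: instead of the full behavioural model the tool analyses, it takes two finite sets of action sequences (correct traces and counterexamples) and flags transitions after which only incorrect behaviour remains possible. The trace sets, the action names, and this particular divergence criterion are assumptions made for the example.

```python
# Simplified spotting of "relevant" actions in counterexamples (illustrative criterion).
def prefixes(trace):
    return [tuple(trace[:i]) for i in range(len(trace) + 1)]

def prefix_set(traces):
    return {p for t in traces for p in prefixes(t)}

def relevant_actions(correct, counterexamples):
    """An action is relevant when, after a prefix that still allows both correct and
    incorrect continuations, taking it leaves only incorrect continuations."""
    ok, bad = prefix_set(correct), prefix_set(counterexamples)
    relevant = set()
    for trace in counterexamples:
        for i, action in enumerate(trace):
            before, after = tuple(trace[:i]), tuple(trace[:i + 1])
            if before in ok and before in bad and after in bad and after not in ok:
                relevant.add((before, action))
    return relevant

if __name__ == "__main__":
    correct = [["req", "auth", "grant"], ["req", "auth", "deny"]]
    counterexamples = [["req", "skip_auth", "grant"]]
    for prefix, action in sorted(relevant_actions(correct, counterexamples)):
        print(f"after {list(prefix)}: taking '{action}' leads only to incorrect behaviour")
```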

    Accountability in Security Protocols

    A promising paradigm in protocol design is to hold parties accountable for misbehavior, instead of postulating that they are trustworthy. Recent approaches in defining this property, called accountability, characterized malicious behavior as a deviation from the protocol that causes a violation of the desired security property, but did so under the assumption that all deviating parties are controlled by a single, centralized adversary. In this work, we investigate the setting where multiple parties can deviate with or without coordination in a variant of the applied-pi calculus. We first demonstrate that, under realistic assumptions, it is impossible to determine all misbehaving parties; however, we show that accountability can be relaxed to exclude causal dependencies that arise from the behavior of deviating parties, and not from the protocol as specified. We map out the design space for the relaxation, point out protocol classes separating these notions and define conditions under which we can guarantee fairness and completeness. Most importantly, we discover under which circumstances it is correct to consider accountability in the single-adversary setting, where this property can be verified with off-the-shelf protocol verification tools.

    A General Trace-Based Framework of Logical Causality

    In component-based safety-critical embedded systems it is crucial to determine the cause(s) of the violation of a safety property, be it to issue a precise alert or to determine liability of component providers. In this paper we present an approach to blame components based on a single execution trace violating a safety property P. The diagnosis relies on counterfactual reasoning (what would have been the outcome if component C had behaved correctly?) to distinguish component failures that actually contributed to the outcome from failures that had little or no impact on the violation of P.