    Blaming in component-based real-time systems

    In component-based safety-critical real-time systems it is crucial to determine which component(s) caused the violation of a required system-level safety property, be it to issue a precise alert or to determine the liability of component providers. In this paper we present an approach for blaming in real-time systems whose component specifications are given as timed automata. The analysis is based on a single execution trace violating a safety property P. We formalize blaming using counterfactual reasoning ("what would have been the outcome if component C had behaved correctly?") to distinguish component failures that actually contributed to the outcome from failures that had no impact on the violation of P. We then show how to effectively implement blaming by reducing it to a model-checking problem for timed automata, and demonstrate the feasibility of our approach on models of a pacemaker and of a chemical reactor.
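
    A minimal Python sketch of the counterfactual test behind this notion of blaming, under simplifying assumptions: executions are plain event traces rather than timed-automata runs, and the correct-behavior variants of each component are given explicitly (the paper instead obtains this check by reduction to model checking). The trace encoding, the `counterfactuals` map, and all other names here are hypothetical.

    ```python
    def violates(trace, bad_states):
        """Safety property P fails iff the trace reaches a bad state."""
        return any(state in bad_states for state in trace)

    def blame(observed_trace, counterfactuals, bad_states):
        """Blame the components whose failures contributed to violating P.

        counterfactuals[c] lists the alternative traces in which component c
        behaves correctly; c is blamed if at least one of them avoids the
        violation, i.e. correcting c could have changed the outcome.
        """
        assert violates(observed_trace, bad_states)
        return {c for c, traces in counterfactuals.items()
                if any(not violates(t, bad_states) for t in traces)}

    # Correcting "sensor" avoids the bad state, correcting "logger" does not,
    # so only "sensor" is blamed for the violation.
    cfs = {"sensor": [["ok", "ok", "ok"]],
           "logger": [["ok", "ok", "bad"]]}
    print(blame(["ok", "ok", "bad"], cfs, bad_states={"bad"}))  # {'sensor'}
    ```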

    Counterfactual Causality for Reachability and Safety based on Distance Functions

    Investigations of causality in operational systems aim at providing human-understandable explanations of why a system behaves as it does. In particular, there is a demand to explain what went wrong on a given counterexample execution showing that a system does not satisfy a given specification. To this end, this paper investigates a notion of counterfactual causality in transition systems based on Stalnaker's and Lewis' semantics of counterfactuals in terms of most similar possible worlds, and introduces a novel corresponding notion of counterfactual causality in two-player games. Using distance functions between paths in transition systems, this notion defines whether reaching a certain set of states is a cause for the violation of a reachability or safety property. Similarly, using distance functions between memoryless strategies in reachability and safety games, it defines whether reaching a set of states is a cause for the fact that a given strategy for the player under investigation is losing. The contribution of the paper is twofold: in transition systems, counterfactual causality can be checked in polynomial time for three prominent distance functions between paths; in two-player games, the introduced notion of counterfactual causality is checkable in polynomial time for two natural distance functions between memoryless strategies. Further, the paper defines a notion of explanation that can be extracted from a counterfactual cause and that pinpoints the changes to be made to the given strategy in order to transform it into a winning strategy. For the two distance functions under consideration, the problem of deciding whether such an explanation imposes only minimal necessary changes to the given strategy with respect to the used distance function turns out to be coNP-complete and, respectively, not solvable in polynomial time unless P = NP.

    Comment: This is the extended version of a paper accepted for publication at GandALF 202
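
    A hypothetical Python sketch of the most-similar-worlds test for paths: reaching the cause states counts as a counterfactual cause of the violation iff, among the alternative paths that avoid those states, the ones closest to the actual path satisfy the property. Enumerating the alternatives explicitly and the Hamming-style distance are simplifications for illustration only; the paper obtains polynomial-time checks for three concrete distance functions directly on transition systems. All names are invented.

    ```python
    def hamming(p, q):
        """One example path distance: positionwise disagreement plus the
        difference in length. A stand-in for the paper's distance functions."""
        return sum(a != b for a, b in zip(p, q)) + abs(len(p) - len(q))

    def is_counterfactual_cause(actual, alternatives, cause_states, violates,
                                dist=hamming):
        """Lewis-style test: in the most similar possible worlds (paths)
        where the cause does not occur, the effect does not occur either."""
        assert violates(actual) and any(s in cause_states for s in actual)
        avoiding = [p for p in alternatives
                    if not any(s in cause_states for s in p)]
        if not avoiding:
            return False  # the cause states cannot be avoided at all
        closest = min(dist(actual, p) for p in avoiding)
        return all(not violates(p)
                   for p in avoiding if dist(actual, p) == closest)
    ```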

    Applications of Description Logic and Causality in Model Checking

    Model checking is an automated technique for the verification of finite-state systems that is widely used in practice. In model checking, a model M is verified against a specification φ, exhaustively checking that the tree of all computations of M satisfies φ. When φ fails to hold in M, the negative result is accompanied by a counterexample: a computation in M that demonstrates the failure. State-of-the-art model checkers apply Binary Decision Diagrams (BDDs) as well as satisfiability solvers for this task. However, both methods suffer from the state explosion problem, which restricts the application of model checking to only modestly sized systems. The importance of model checking makes it worthwhile to explore alternative technologies, in the hope of broadening the applicability of the technique to a wider class of systems. Description Logic (DL) is a family of knowledge representation formalisms based on decidable fragments of first-order logic. DL is used mainly for designing ontologies in information systems. In recent years several DL reasoners have been developed, demonstrating an impressive capability to cope with very large ontologies.

    This work consists of two parts. In the first we harness the growing ability of DL reasoners to solve model checking problems. We show how DL can serve as a natural setting for representing and solving a model checking problem, and present a variety of encodings that translate such problems into consistency queries in DL. Experimental results, using the Description Logic reasoner FaCT++, demonstrate that for some systems and properties, our method can outperform existing ones.

    In the second part we approach a different aspect of model checking. When a specification fails to hold in a model and a counterexample is presented to the user, the counterexample may itself be complex and difficult to understand. We propose an automatic technique to find the computation steps and their associated variable values that are of particular importance in generating the counterexample. We use the notion of causality to formally define a set of causes for the failure of the specification on the given counterexample. We give a linear-time algorithm to detect the causes, and we demonstrate how these causes can be presented to the user as a visual explanation of the failure.
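
    For the causality part, the following brute-force Python sketch illustrates only the underlying counterfactual criterion: a step and variable value of the counterexample is reported as a cause when flipping that single value makes the specification hold. This naive version re-evaluates the specification once per flip, whereas the thesis gives a linear-time detection algorithm and a formal causality definition; the trace encoding and all names are hypothetical. The reported pairs correspond to the values one would highlight in a visual explanation of the failure.

    ```python
    def causes(counterexample, holds):
        """Return the (step, variable) pairs whose single-value flip would
        make the specification hold on the modified computation.

        counterexample: list of dicts mapping Boolean variables to values.
        holds(trace):   evaluates the specification on a finite trace.
        """
        assert not holds(counterexample)
        found = []
        for i, state in enumerate(counterexample):
            for var, val in state.items():
                flipped = [dict(s) for s in counterexample]  # copy the trace
                flipped[i][var] = not val                    # flip one value
                if holds(flipped):
                    found.append((i, var))
        return found
    ```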