7 research outputs found

    Generating Diagnoses for Probabilistic Model Checking Using Causality

    One of the major advantages of model checking over other formal verification methods is its ability to generate an error trace when a specification is falsified in the model. We call this trace a counterexample. However, understanding a counterexample is not an easy task, because model checkers usually generate multiple counterexamples of considerable length, which makes their analysis a time-consuming as well as costly task. Therefore, counterexamples should be small and as indicative as possible in order to be understood. In probabilistic model checking (PMC), counterexample generation has a quantitative aspect. A counterexample in PMC is a set of paths on which a path formula holds and whose accumulated probability mass violates the probability bound. In this paper, we address the task complementary to counterexample generation, namely counterexample diagnosis in PMC. We propose an aided-diagnostic method for probabilistic counterexamples based on the notions of causality and responsibility. Given a counterexample for a Probabilistic CTL (PCTL) formula that does not hold over a Discrete-Time Markov Chain (DTMC) model, this method guides the user to the most responsible causes in the counterexample.
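
    To make the quantitative flavour of probabilistic counterexamples concrete, the sketch below enumerates paths of a toy DTMC that reach an "error" state and collects the most probable ones until their accumulated mass exceeds a bound, as for a violated property of the form P<=0.4 [F error]. The DTMC, the bound, and the state names are invented for illustration; this is not the diagnosis method proposed in the paper.

```python
# A minimal sketch (not the paper's algorithm): collecting a probabilistic
# counterexample for an illustrative property P<=0.4 [F "error"] on a tiny DTMC.
# States, probabilities, and the bound are invented for illustration.

# DTMC: state -> list of (successor, probability)
dtmc = {
    "s0": [("s1", 0.5), ("s2", 0.5)],
    "s1": [("error", 0.6), ("s0", 0.4)],
    "s2": [("error", 0.3), ("s2", 0.7)],
    "error": [("error", 1.0)],
}

def paths_to_target(state, target, prob=1.0, path=("s0",), depth=6):
    """Enumerate finite paths that reach `target`, with their probabilities."""
    if state == target:
        yield path, prob
        return
    if depth == 0:
        return
    for succ, p in dtmc[state]:
        yield from paths_to_target(succ, target, prob * p, path + (succ,), depth - 1)

# Counterexample: highest-probability paths whose accumulated mass exceeds the bound.
bound = 0.4
paths = sorted(paths_to_target("s0", "error"), key=lambda x: -x[1])
counterexample, mass = [], 0.0
for path, p in paths:
    counterexample.append((path, p))
    mass += p
    if mass > bound:
        break

for path, p in counterexample:
    print(" -> ".join(path), f"(prob {p:.3f})")
print(f"accumulated mass {mass:.3f} > bound {bound}")
```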

    Effective verification of confidentiality for multi-threaded programs

    This paper studies how confidentiality properties of multi-threaded programs can be verified efficiently by a combination of newly developed and existing model checking algorithms. In particular, we study the verification of scheduler-specific observational determinism (SSOD), a property that characterizes secure information flow for multi-threaded programs under a given scheduler. Scheduler-specificness allows us to reason about refinement attacks, an important and tricky class of attacks that are notorious in practice. SSOD imposes two conditions: (SSOD-1) all individual public variables have to evolve deterministically, expressed by requiring stuttering equivalence between the traces of each individual public variable, and (SSOD-2) the relative order of updates of public variables coincides, i.e., there always exists a matching trace. We verify the first condition by reducing it to the question whether all traces of each public variable are stuttering equivalent. To verify the second condition, we show how the condition can be translated, via a series of steps, into a standard strong bisimulation problem. Our verification techniques can easily be adapted to verify other formalizations of similar information flow properties. We also exploit counterexample generation techniques to synthesize attacks for insecure programs that fail either SSOD-1 or SSOD-2, i.e., we show how the confidentiality of programs can be broken.
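
    As a rough illustration of the SSOD-1 check described above, the sketch below tests stuttering equivalence between projections of traces onto a single public variable by collapsing consecutive repetitions. The traces and the variable are invented, and this is not the paper's reduction or tool.

```python
# A minimal sketch, not the paper's algorithm: SSOD-1 requires the projection of
# every trace onto each individual public variable to be stuttering equivalent.
# The traces below are invented for illustration.

from itertools import groupby

def destutter(trace):
    """Collapse consecutive repetitions, e.g. [0, 0, 1, 1, 2] -> [0, 1, 2]."""
    return [value for value, _ in groupby(trace)]

def stuttering_equivalent(trace_a, trace_b):
    return destutter(trace_a) == destutter(trace_b)

# Projections of scheduler-induced traces onto a hypothetical public variable `low`.
run_1 = [0, 0, 1, 1, 2]
run_2 = [0, 1, 1, 1, 2, 2]
run_3 = [0, 2, 1]          # different update order: secret-dependent behaviour

print(stuttering_equivalent(run_1, run_2))  # True  -> consistent with SSOD-1
print(stuttering_equivalent(run_1, run_3))  # False -> potential information leak
```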

    Systematically Debugging IoT Control System Correctness for Building Automation

    Advances and standards in the Internet of Things (IoT) have simplified the realization of building automation. However, non-expert IoT users still lack tools that can help them ensure the correctness of the underlying control system, i.e., that user-programmable logics match the user's intention. In fact, non-expert IoT users lack the necessary know-how of domain experts. This paper presents our experience in running a building automation service based on the Salus framework. Complementing efforts that simply verify the correctness of IoT control systems, Salus takes novel steps to tackle practical challenges in the automated debugging of identified policy violations, for non-expert IoT users. First, Salus leverages formal methods to localize faulty user-programmable logics. Second, to debug these identified faults, Salus selectively transforms the control system logics into a set of parameterized equations, which can then be solved by popular model checking tools or SMT (Satisfiability Modulo Theories) solvers. Through office deployments, user studies, and public datasets, we demonstrate the usefulness of Salus in systematically debugging the correctness of IoT control systems for building automation.
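
    As a loose illustration of solving a parameterized equation with an SMT solver (not the actual Salus transformation, whose details are not given in the abstract), the sketch below uses Z3's Python bindings to search for a rule parameter that satisfies a made-up cooling policy. The rule, the policy, and all names are hypothetical, and the example assumes the `z3-solver` package is installed.

```python
# A minimal sketch of the general idea, not Salus itself: treat a rule threshold
# as an unknown parameter and ask an SMT solver for a value under which the rule
# satisfies a (made-up) building policy.

from z3 import Real, Solver, ForAll, Implies, And, sat

threshold = Real("cooling_threshold")   # parameter to repair/synthesize
temperature = Real("temperature")       # environment input

# Rule template: "turn cooling on when temperature > threshold".
# Policy: within the office range [18, 35], cooling must be on whenever the
# temperature exceeds 26 degrees.
policy = ForAll(
    temperature,
    Implies(
        And(temperature >= 18, temperature <= 35),
        Implies(temperature > 26, temperature > threshold),
    ),
)

solver = Solver()
solver.add(policy)
if solver.check() == sat:
    print("a policy-compliant threshold:", solver.model()[threshold])
```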

    Applications of Description Logic and Causality in Model Checking

    Model checking is an automated technique for the verification of finite-state systems that is widely used in practice. In model checking, a model M is verified against a specification φ, exhaustively checking that the tree of all computations of M satisfies φ. When φ fails to hold in M, the negative result is accompanied by a counterexample: a computation in M that demonstrates the failure. State-of-the-art model checkers apply Binary Decision Diagrams (BDDs) as well as satisfiability solvers for this task. However, both methods suffer from the state explosion problem, which restricts the application of model checking to only modestly sized systems. The importance of model checking makes it worthwhile to explore alternative technologies, in the hope of broadening the applicability of the technique to a wider class of systems. Description Logic (DL) is a family of knowledge representation formalisms based on decidable fragments of first-order logic. DL is used mainly for designing ontologies in information systems. In recent years several DL reasoners have been developed, demonstrating an impressive capability to cope with very large ontologies. This work consists of two parts. In the first, we harness the growing ability of DL reasoners to solve model checking problems. We show how DL can serve as a natural setting for representing and solving a model checking problem, and present a variety of encodings that translate such problems into consistency queries in DL. Experimental results, using the Description Logic reasoner FaCT++, demonstrate that for some systems and properties our method can outperform existing ones. In the second part we approach a different aspect of model checking. When a specification fails to hold in a model and a counterexample is presented to the user, the counterexample may itself be complex and difficult to understand. We propose an automatic technique to find the computation steps, and their associated variable values, that are of particular importance in generating the counterexample. We use the notion of causality to formally define a set of causes for the failure of the specification on the given counterexample. We give a linear-time algorithm to detect the causes, and we demonstrate how these causes can be presented to the user as a visual explanation of the failure.
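
    The counterfactual intuition behind causal explanation of counterexamples can be illustrated with the sketch below, which flags (step, variable) pairs in a finite counterexample whose value, if flipped, would repair a violated invariant at that step. The trace, the variables, and the invariant are invented, and this simplified test is not the paper's linear-time algorithm or its formal definition of causes.

```python
# A minimal sketch of the counterfactual idea behind causal explanation, not the
# paper's algorithm: scan a counterexample for (step, variable) pairs whose value,
# if flipped, would make the violated invariant hold at the failing step.

def invariant(state):
    # Illustrative safety property: "never grant and alarm at the same time".
    return not (state["grant"] and state["alarm"])

# Counterexample trace: a list of states (variable assignments), failing at the end.
trace = [
    {"grant": False, "alarm": False},
    {"grant": True,  "alarm": False},
    {"grant": True,  "alarm": True},   # violation here
]

def candidate_causes(trace, invariant):
    causes = []
    for step, state in enumerate(trace):
        if invariant(state):
            continue
        for var, value in state.items():
            flipped = dict(state, **{var: not value})
            if invariant(flipped):       # counterfactual test: flipping repairs this step
                causes.append((step, var))
    return causes

print(candidate_causes(trace, invariant))  # [(2, 'grant'), (2, 'alarm')]
```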

    Controlling the Generation of Multiple Counterexamples in LTL Model Checking
