
    Transient Reward Approximation for Continuous-Time Markov Chains

    We are interested in the analysis of very large continuous-time Markov chains (CTMCs) with many distinct rates. Such models arise naturally in the context of reliability analysis, e.g., in computer network performability analysis, in power grids, in computer virus vulnerability, and in the study of crowd dynamics. We use abstraction techniques together with novel algorithms for the computation of bounds on the expected final and accumulated rewards in continuous-time Markov decision processes (CTMDPs). These ingredients are combined in a partly symbolic and partly explicit (symblicit) analysis approach. In particular, we circumvent the use of multi-terminal decision diagrams, because the latter do not work well when facing a large number of different rates. We demonstrate the practical applicability and efficiency of the approach on two case studies. Comment: Accepted for publication in IEEE Transactions on Reliability.
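    For illustration, here is a minimal sketch of how a transient expected reward can be computed for an explicitly given CTMC via uniformization; this is a standard textbook method, not the symblicit algorithm of the paper, and the two-state reliability model and its rates are made up:

        import math
        import numpy as np

        def transient_reward(Q, r, pi0, t, tol=1e-12, max_terms=100_000):
            """E[r(X_t)] for a CTMC with generator Q, via uniformization."""
            lam = -Q.diagonal().min()        # uniformization rate >= max exit rate
            P = np.eye(len(r)) + Q / lam     # uniformized DTMC
            v, k = pi0.astype(float), 0
            w = math.exp(-lam * t)           # Poisson(lam * t) pmf at k = 0
            acc, cum = w * v, w
            while cum < 1.0 - tol and k < max_terms:
                v = v @ P
                k += 1
                w *= lam * t / k
                acc += w * v
                cum += w
            return float(acc @ r)

        # Two-state repairable component: failure rate 0.1, repair rate 1.0;
        # reward 1 while operational, so the result is the availability at time t.
        Q = np.array([[-0.1, 0.1], [1.0, -1.0]])
        print(transient_reward(Q, r=np.array([1.0, 0.0]),
                               pi0=np.array([1.0, 0.0]), t=5.0))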

    Quantitative Approximation of the Probability Distribution of a Markov Process by Formal Abstractions

    The goal of this work is to formally abstract a Markov process evolving in discrete time over a general state space as a finite-state Markov chain, with the objective of precisely approximating its state probability distribution in time; this allows the distribution to be computed, approximately and faster, via that of the Markov chain. The approach is based on formal abstractions and employs an arbitrary finite partition of the state space of the Markov process, and the computation of average transition probabilities between partition sets. The abstraction technique is formal, in that it comes with guarantees on the introduced approximation that depend on the diameters of the partitions: as such, they can be tuned at will. Further, in the case of Markov processes with unbounded state spaces, a procedure for precisely truncating the state space within a compact set is provided, together with an error bound that depends on the asymptotic properties of the transition kernel of the original process. The overall abstraction algorithm, which practically hinges on piecewise constant approximations of the density functions of the Markov process, is extended to higher-order function approximations: these can lead to improved error bounds and associated lower computational requirements. The approach is practically tested to compute probabilistic invariance of the Markov process under study, and is compared to a known alternative approach from the literature. Comment: 29 pages, Journal of Logical Methods in Computer Science.
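    As a toy version of this construction, the sketch below grid-partitions a bounded interval, uses the cell-centre value of a transition density as the piecewise-constant approximation of the kernel, and pushes an initial distribution through the resulting finite chain; the Gaussian kernel and all parameters are hypothetical:

        import numpy as np

        def abstract_chain(density, low, high, n):
            """Finite-state abstraction of a continuous-state kernel on [low, high].
            density(x, y) is the transition density t(y | x); transition
            probabilities use the cell-centre (piecewise-constant) value of the
            kernel, so the error shrinks with the cell diameter."""
            edges = np.linspace(low, high, n + 1)
            centres = (edges[:-1] + edges[1:]) / 2
            width = (high - low) / n
            P = np.array([[density(x, y) * width for y in centres] for x in centres])
            return centres, P / P.sum(axis=1, keepdims=True)  # renormalise truncation loss

        # Hypothetical kernel: contracting Gaussian random walk on [0, 1].
        sigma = 0.1
        dens = lambda x, y: (np.exp(-(y - (0.9 * x + 0.05)) ** 2 / (2 * sigma**2))
                             / (sigma * np.sqrt(2 * np.pi)))

        centres, P = abstract_chain(dens, 0.0, 1.0, n=200)
        pi = np.full(200, 1.0 / 200)   # uniform initial distribution
        for _ in range(10):            # approximate state distribution after 10 steps
            pi = pi @ P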

    Formal analysis techniques for gossiping protocols

    We give a survey of formal verification techniques that can be used to corroborate existing experimental results for gossiping protocols in a rigorous manner. We present properties of interest for gossiping protocols and discuss how various formal evaluation techniques can be employed to predict them.
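    As a point of reference for what such formal techniques corroborate, here is a minimal Monte-Carlo experiment for a synchronous push-gossip protocol; the protocol variant, fanout, and parameters are illustrative choices, not taken from the survey:

        import random

        def push_gossip(n, rounds, fanout=1):
            """One run of synchronous push gossip on a complete graph of n nodes:
            every informed node contacts `fanout` uniformly random peers per round.
            Returns the number of informed nodes at the end."""
            informed = {0}
            for _ in range(rounds):
                contacted = {random.randrange(n)
                             for node in informed for _ in range(fanout)}
                informed |= contacted
            return len(informed)

        # Estimate the probability of full dissemination within 10 rounds.
        runs = 1_000
        hits = sum(push_gossip(n=100, rounds=10) == 100 for _ in range(runs))
        print(hits / runs)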

    Finite-State Abstractions for Probabilistic Computation Tree Logic

    Probabilistic Computation Tree Logic (PCTL) is the established temporal logic for probabilistic verification of discrete-time Markov chains. Probabilistic model checking is a technique that verifies or refutes whether a property specified in this logic holds in a Markov chain. But Markov chains are often infinite or too large for this technique to apply. A standard solution to this problem is to convert the Markov chain to an abstract model and to model check that abstract model. The problem this thesis therefore studies is whether or when such finite abstractions of Markov chains for model checking PCTL exist. This thesis makes the following contributions. We identify a sizeable fragment of PCTL for which 3-valued Markov chains can serve as finite abstractions; this fragment is maximal for those abstractions and subsumes many practically relevant specifications including, e.g., reachability. We also develop game-theoretic foundations for the semantics of PCTL over Markov chains by capturing the standard PCTL semantics via two-player games. These games, finally, inspire a notion of p-automata, which accept entire Markov chains. We show that p-automata subsume PCTL and Markov chains; that their languages of Markov chains have pleasant closure properties; and that the complexity of deciding acceptance matches that of probabilistic model checking for p-automata representing PCTL formulae. In addition, we offer a simulation between p-automata that under-approximates language containment. These results then allow us to show that p-automata comprise a solution to the problem studied in this thesis.
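    To ground the terminology, the core computation behind a PCTL formula such as P>=p [ F<=k goal ] on a finite Markov chain is a short value iteration; the sketch below, on an arbitrary three-state chain, shows that standard baseline, not the 3-valued abstractions or p-automata of the thesis:

        import numpy as np

        def bounded_reach(P, goal, k):
            """Per-state probability Pr[ F<=k goal ] for a finite DTMC."""
            x = goal.astype(float)              # step 0: probability 1 on goal states
            for _ in range(k):
                x = np.where(goal, 1.0, P @ x)  # one-step backwards recursion
            return x

        P = np.array([[0.5, 0.3, 0.2],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])
        goal = np.array([False, True, False])
        # 'P>=0.4 [ F<=4 goal ]' holds in state 0 iff bounded_reach(P, goal, 4)[0] >= 0.4
        print(bounded_reach(P, goal, k=4))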

    Probabilistic Guarantees for Safe Deep Reinforcement Learning

    Deep reinforcement learning has been successfully applied to many control tasks, but the application of such agents in safety-critical scenarios has been limited due to safety concerns. Rigorous testing of these controllers is challenging, particularly when they operate in probabilistic environments due to, for example, hardware faults or noisy sensors. We propose MOSAIC, an algorithm for measuring the safety of deep reinforcement learning agents in stochastic settings. Our approach is based on the iterative construction of a formal abstraction of a controller's execution in an environment, and leverages probabilistic model checking of Markov decision processes to produce probabilistic guarantees on safe behaviour over a finite time horizon. It produces bounds on the probability of safe operation of the controller for different initial configurations and identifies regions where correct behaviour can be guaranteed. We implement and evaluate our approach on agents trained for several benchmark control problems.
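    Stripped of the abstraction-construction step, the guarantee produced is a finite-horizon safety probability on a Markov model. A minimal sketch of that final analysis step for an MDP under a fixed, deterministic trained policy follows; the three-state model is made up, and this is not the MOSAIC abstraction itself:

        import numpy as np

        def safety_probability(P, policy, safe, horizon):
            """Per-state probability of staying inside `safe` for `horizon` steps.
            P[s, a, s'] is the MDP transition tensor; policy[s] is the action the
            trained agent takes in s, so the induced Markov chain is analysed."""
            v = safe.astype(float)                       # safe with 0 steps to go
            for _ in range(horizon):
                step = np.array([P[s, policy[s]] @ v for s in range(len(v))])
                v = np.where(safe, step, 0.0)
            return v

        P = np.zeros((3, 2, 3))
        P[0, 0] = [0.9, 0.1, 0.0]   # action 0: mostly stay, small crash chance
        P[0, 1] = [0.6, 0.0, 0.4]   # action 1: riskier move towards the goal
        P[1, :, 1] = 1.0            # state 1: absorbing crash state
        P[2, :, 2] = 1.0            # state 2: absorbing goal state
        safe = np.array([True, False, True])
        print(safety_probability(P, np.array([0, 0, 0]), safe, horizon=10))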

    Applying Mean-field Approximation to Continuous Time Markov Chains

    The mean-field analysis technique is used to analyse systems with a large number of components, to determine the emergent deterministic behaviour and how this behaviour changes when the system's parameters are perturbed. The computer science performance modelling and analysis community has found the mean-field method useful for modelling large-scale computer and communication networks. Applying mean-field analysis from the computer science perspective requires the following major steps: (1) describing how the agent populations evolve by means of a system of differential equations, (2) finding the emergent deterministic behaviour of the system by solving these differential equations, and (3) analysing properties of this behaviour, either by relying on simulation or by using logics. Depending on the system under analysis, performing these steps may become challenging, and modifications of the general idea are often needed. In this tutorial we use illustrating examples to discuss how the mean-field method is applied in different application areas, starting from the application of the classical technique and moving to cases where additional steps are needed, such as systems with local communication. Finally, we illustrate the application of the simulation and fluid model checking analysis techniques.
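    A compact instance of steps (1) and (2): the sketch below writes down mean-field ODEs for an SIR-style population model and solves them with forward Euler integration; the model and rates are a stock example, not taken from the tutorial:

        import numpy as np

        def mean_field_sir(beta, gamma, s0, i0, t_end, dt=1e-3):
            """Forward-Euler solution of the mean-field (fluid) ODEs
                ds/dt = -beta * s * i
                di/dt =  beta * s * i - gamma * i
            where s, i are population fractions; the ODE system is the
            large-population limit of the underlying CTMC."""
            s, i = s0, i0
            steps = int(t_end / dt)
            traj = np.empty((steps + 1, 2))
            traj[0] = s, i
            for k in range(steps):
                ds = -beta * s * i
                di = beta * s * i - gamma * i
                s, i = s + dt * ds, i + dt * di
                traj[k + 1] = s, i
            return traj

        traj = mean_field_sir(beta=0.5, gamma=0.1, s0=0.99, i0=0.01, t_end=50.0)
        print(traj[-1])   # emergent deterministic (s, i) fractions at t = 50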

    PrIC3: Property Directed Reachability for MDPs

    IC3 has been a leap forward in symbolic model checking. This paper proposes PrIC3 (pronounced pricy-three), a conservative extension of IC3 to symbolic model checking of MDPs. Our main focus is to develop the theory underlying PrIC3. Alongside, we present a first implementation of PrIC3 including the key ingredients from IC3 such as generalization, repushing, and propagation.
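    The query such a procedure decides has the form 'is Pr_max[ F bad ] <= lambda from the initial state?'. For orientation, here is that same query answered by plain value iteration on a small made-up MDP; this is a standard baseline, not PrIC3's frame-based procedure:

        import numpy as np

        def max_reach(P, bad, iters=10_000, tol=1e-10):
            """Per-state Pr_max[ F bad ] for an MDP, by value iteration.
            P[s, a, s'] is the transition tensor."""
            x = bad.astype(float)
            for _ in range(iters):
                nxt = np.where(bad, 1.0, (P @ x).max(axis=1))  # best action per state
                if np.abs(nxt - x).max() < tol:
                    break
                x = nxt
            return x

        P = np.zeros((3, 2, 3))
        P[0, 0] = [0.0, 0.2, 0.8]   # action 0: reach 'bad' with probability 0.2
        P[0, 1] = [0.0, 0.5, 0.5]   # action 1: reach 'bad' with probability 0.5
        P[1, :, 1] = 1.0            # bad sink
        P[2, :, 2] = 1.0            # good sink
        bad = np.array([False, True, False])
        x = max_reach(P, bad)
        print(x[0], x[0] <= 0.3)    # 0.5 False: the bound lambda = 0.3 is violated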