    Faster Algorithms for Quantitative Analysis of Markov Chains and Markov Decision Processes with Small Treewidth

    Discrete-time Markov Chains (MCs) and Markov Decision Processes (MDPs) are two standard formalisms in system analysis. Their main associated quantitative objectives are hitting probabilities, discounted sum, and mean payoff. Although there are many techniques for computing these objectives in general MCs/MDPs, they have not been thoroughly studied in terms of parameterized algorithms, particularly when treewidth is used as the parameter. This is in sharp contrast to qualitative objectives for MCs, MDPs and graph games, for which treewidth-based algorithms yield significant complexity improvements. In this work, we show that treewidth can also be used to obtain faster algorithms for the quantitative problems. For an MC with n states and m transitions, we show that each of the classical quantitative objectives can be computed in O((n+m)·t²) time, given a tree decomposition of the MC that has width t. Our results also imply a bound of O(κ·(n+m)·t²) for each objective on MDPs, where κ is the number of strategy-iteration refinements required for the given input and objective. Finally, we make an experimental evaluation of our new algorithms on low-treewidth MCs and MDPs obtained from the DaCapo benchmark suite. Our experimental results show that on MCs and MDPs with small treewidth, our algorithms outperform existing well-established methods by one or more orders of magnitude.
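
    For context, the classical way to compute hitting probabilities is to solve the linear system they satisfy; the treewidth-based algorithm above speeds this up on sparse, tree-like models. Below is a minimal sketch of that baseline, with an illustrative chain and target set of our own choosing (not taken from the paper):

```python
import numpy as np

# Illustrative 4-state chain; state 3 is an absorbing target.
P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.2, 0.0, 0.8, 0.0],
    [0.0, 0.3, 0.0, 0.7],
    [0.0, 0.0, 0.0, 1.0],
])
target = {3}
rest = [s for s in range(len(P)) if s not in target]

# Hitting probabilities satisfy h = Ph with h = 1 on targets; restricted
# to non-target states this becomes (I - Q) h = r, where Q is P among
# non-target states and r holds the one-step probabilities into the target.
# (Assumes every non-target state can reach the target; otherwise states
# with hitting probability 0 must be removed first.)
Q = P[np.ix_(rest, rest)]
r = P[np.ix_(rest, sorted(target))].sum(axis=1)
h = np.linalg.solve(np.eye(len(rest)) - Q, r)
print(dict(zip(rest, h)))  # {0: 1.0, 1: 1.0, 2: 1.0} for this chain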


    LNCS

    The notion of treewidth of graphs has been exploited for faster algorithms for several problems arising in verification and program analysis. Moreover, various notions of balanced tree decompositions have been used for improved algorithms supporting dynamic updates and analysis of concurrent programs. In this work, we present a tool for constructing tree decompositions of CFGs obtained from Java methods, implemented as an extension to the widely used Soot framework. Experimental results on real-world Java benchmarks show that our implementation is very efficient. Our tool also provides the first implementation for balancing tree decompositions. In summary, we present the first tool support for exploiting treewidth in static analysis problems on Java programs.
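
    As a reminder of the object such a tool constructs: a tree decomposition assigns a bag of graph vertices to each tree node so that every vertex and every edge is covered, and the bags containing any fixed vertex form a connected subtree; its width is the maximum bag size minus one. Below is a small, self-contained validity checker (illustrative only, not part of the Soot extension described above):

```python
# Verify the three tree-decomposition conditions and report the width.

def is_tree_decomposition(vertices, edges, bags, tree_edges):
    # bags: dict mapping tree-node id -> set of graph vertices
    # tree_edges: edges of the decomposition tree over bag ids

    # 1. Vertex cover: every graph vertex appears in some bag.
    if not all(any(v in b for b in bags.values()) for v in vertices):
        return False
    # 2. Edge cover: every graph edge is contained in some bag.
    if not all(any(u in b and v in b for b in bags.values()) for u, v in edges):
        return False
    # 3. Connectivity: for each vertex, the bags containing it form a
    #    connected subtree (checked by flood fill over tree_edges).
    adj = {t: set() for t in bags}
    for a, b in tree_edges:
        adj[a].add(b); adj[b].add(a)
    for v in vertices:
        holding = {t for t, b in bags.items() if v in b}
        start = next(iter(holding))
        seen, stack = {start}, [start]
        while stack:
            for nb in adj[stack.pop()] & holding:
                if nb not in seen:
                    seen.add(nb); stack.append(nb)
        if seen != holding:
            return False
    return True

def width(bags):
    return max(len(b) for b in bags.values()) - 1

# A 4-cycle a-b-c-d has treewidth 2; one valid decomposition:
bags = {0: {"a", "b", "c"}, 1: {"a", "c", "d"}}
print(is_tree_decomposition("abcd", [("a","b"),("b","c"),("c","d"),("d","a")],
                            bags, [(0, 1)]), width(bags))  # True 2
```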

    Value Iteration for Long-run Average Reward in Markov Decision Processes

    Markov decision processes (MDPs) are standard models for probabilistic systems with non-deterministic behaviours. Long-run average rewards provide a mathematically elegant formalism for expressing long-term performance. Value iteration (VI) is one of the simplest and most efficient algorithmic approaches to MDPs with other objectives, such as reachability. Unfortunately, a naive extension of VI does not work for MDPs with long-run average rewards, as there is no known stopping criterion. In this work, our contributions are threefold. (1) We refute a conjecture related to stopping criteria for MDPs with long-run average rewards. (2) We present two practical algorithms for MDPs with long-run average rewards based on VI. First, we show that a combination of applying VI locally for each maximal end-component (MEC) and VI for reachability objectives can provide approximation guarantees. Second, extending the above approach with a simulation-guided on-demand variant of VI, we present an anytime algorithm that is able to deal with very large models. (3) Finally, we present experimental results showing that our methods significantly outperform the standard approaches on several benchmarks.
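
    The algorithmic core both algorithms build on is standard Bellman value iteration for reachability; the paper's contributions lie in the MEC decomposition, stopping bounds, and simulation guidance wrapped around such a loop. A minimal illustrative sketch (the MDP and tolerance are our own; note that the naive "values stopped changing" test used here is exactly the kind of criterion that yields no soundness guarantee in general):

```python
# Value iteration for maximal reachability probability in a small MDP.
# mdp[s] is a list of actions; each action is a list of (prob, successor).
mdp = {
    "s0": [[(0.5, "s0"), (0.5, "s1")], [(1.0, "s2")]],
    "s1": [[(1.0, "goal")]],
    "s2": [[(1.0, "s2")]],   # a sink that never reaches the goal
    "goal": [],
}

def max_reach_vi(mdp, target, eps=1e-8):
    v = {s: (1.0 if s in target else 0.0) for s in mdp}
    while True:
        delta = 0.0
        for s in mdp:
            if s in target or not mdp[s]:
                continue
            # Bellman backup: best action by expected value of successors.
            new = max(sum(p * v[t] for p, t in a) for a in mdp[s])
            delta = max(delta, abs(new - v[s]))
            v[s] = new
        if delta < eps:  # naive stopping test: updates became small
            return v

print(max_reach_vi(mdp, {"goal"}))  # s0 -> ~1.0, s2 -> 0.0
```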

    Algorithms and Hardness Results for Computing Cores of Markov Chains

    Given a Markov chain M = (V, v_0, δ), with state space V and starting state v_0, and a probability threshold ε, an ε-core is a subset C of states that is left with probability at most ε. More formally, C ⊆ V is an ε-core iff ℙ[reach(V\C)] ≤ ε. Cores have been applied in a wide variety of verification problems over Markov chains, Markov decision processes, and probabilistic programs, as a means of discarding uninteresting and low-probability parts of a probabilistic system and instead being able to focus on the states that are likely to be encountered in a real-world run. In this work, we focus on the problem of computing a minimal ε-core in a Markov chain. Our contributions include both negative and positive results: (i) We show that the decision problem on the existence of an ε-core of a given size is NP-complete. This solves an open problem posed in [Jan Kretínský and Tobias Meggendorfer, 2020]. We additionally show that the problem remains NP-complete even when limited to acyclic Markov chains with bounded maximal vertex degree; (ii) We provide a polynomial-time algorithm for computing a minimal ε-core on Markov chains over control-flow graphs of structured programs. A straightforward combination of our algorithm with standard branch-prediction techniques allows one to apply the idea of cores to find a subset of program lines that are left with low probability and then focus any desired static analysis on this core subset.
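
    Verifying a candidate core is the easy direction: C is an ε-core iff the complement V\C is reached from v_0 with probability at most ε, a single reachability computation. A small illustrative checker follows (the chain is our own example; the paper's actual subject, computing a minimal core, is the hard part):

```python
import numpy as np

def reach_prob(P, start, S):
    """Exact probability of ever reaching the state set S from `start`."""
    n = len(P)
    # States that can reach S at all (backward graph search); the rest
    # have reach probability exactly 0, which keeps the system nonsingular.
    can = set(S)
    changed = True
    while changed:
        changed = False
        for s in range(n):
            if s not in can and any(P[s, t] > 0 for t in can):
                can.add(s); changed = True
    if start in S:
        return 1.0
    if start not in can:
        return 0.0
    rest = sorted(can - set(S))  # states with unknown positive probability
    Q = P[np.ix_(rest, rest)]
    r = P[np.ix_(rest, sorted(S))].sum(axis=1)
    h = dict(zip(rest, np.linalg.solve(np.eye(len(rest)) - Q, r)))
    return h[start]

def is_eps_core(P, v0, C, eps):
    # C is an eps-core iff the complement is reached with prob <= eps.
    outside = set(range(len(P))) - set(C)
    return reach_prob(P, v0, outside) <= eps

# Illustrative 3-state chain: from state 0, state 2 is reached w.p. 0.5.
P = np.array([[0.0, 0.5, 0.5],
              [0.0, 1.0, 0.0],    # absorbing inside the core
              [0.0, 0.0, 1.0]])   # absorbing outside the core
print(is_eps_core(P, 0, C={0, 1}, eps=0.6))  # True
print(is_eps_core(P, 0, C={0, 1}, eps=0.4))  # False
```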

    Certificates and Witnesses for Probabilistic Model Checking

    The ability to provide succinct information about why a property does, or does not, hold in a given system is a key feature in the context of formal verification and model checking. It can be used both to explain the behavior of the system to a user of verification software, and as a tool to aid automated abstraction and synthesis procedures. Counterexample traces, which are executions of the system that do not satisfy the desired specification, are a classical example. Specifications of systems with probabilistic behavior usually require that an event happens with sufficiently high (or low) probability. In general, single executions of the system are not enough to demonstrate that such a specification holds. Rather, standard witnesses in this setting are sets of executions which in sum exceed the required probability bound.

    In this thesis we consider methods to certify and witness that probabilistic reachability constraints hold in Markov decision processes (MDPs) and probabilistic timed automata (PTA). Probabilistic reachability constraints are threshold conditions on the maximal or minimal probability of reaching a set of target states in the system. The threshold condition may represent an upper or lower bound and be strict or non-strict. We show that the model-checking problem for each type of constraint can be formulated as a satisfiability problem of a system of linear inequalities. These inequalities correspond closely to the probabilistic transition matrix of the MDP. Solutions of the inequalities are called Farkas certificates for the corresponding property, as they can indeed be used to easily validate that the property holds.

    By themselves, Farkas certificates do not explain why the corresponding probabilistic reachability constraint holds in the considered MDP. To demonstrate that the maximal reachability probability in an MDP is above a certain threshold, a commonly used notion is that of witnessing subsystems. A subsystem is a witness if the MDP satisfies the lower bound on the optimal reachability probability even if all states not included in the subsystem are made rejecting trap states. Hence, a subsystem is a part of the MDP which by itself satisfies the lower-bounded threshold constraint on the optimal probability of reaching the target states. We consider witnessing subsystems for lower bounds on both the maximal and minimal reachability probabilities, and show that Farkas certificates and witnessing subsystems are related. More precisely, the support (i.e., the indices with a non-zero entry) of a Farkas certificate induces the state space of a witnessing subsystem for the corresponding property. Vice versa, given a witnessing subsystem one can compute a Farkas certificate whose support corresponds to the state space of the witness.

    This insight yields novel algorithms and heuristics to compute small and minimal witnessing subsystems. To compute minimal witnesses, we propose mixed-integer linear programming formulations whose solutions are Farkas certificates with minimal support. We show that the corresponding decision problem is NP-complete even for acyclic Markov chains, which supports the use of integer programs to solve it. As this approach does not scale well to large instances, we introduce the quotient-sum heuristic, which is based on iteratively solving a sequence of linear programs. The solutions of these linear programs are also Farkas certificates. In an experimental evaluation we show that the quotient-sum heuristic is competitive with state-of-the-art methods. A large part of the algorithms proposed in this thesis are implemented in the tool SWITSS.

    We study the complexity of computing minimal witnessing subsystems for probabilistic systems that are similar to trees or paths. Formally, this is captured by the notions of tree width and path width. Our main result here is that the problem of computing minimal witnessing subsystems remains NP-complete even for Markov chains with bounded path width. The hardness proof identifies a new source of combinatorial hardness in the corresponding decision problem.

    Probabilistic timed automata generalize MDPs by including a set of clocks whose values determine which transitions are enabled. They are widely used to model and verify real-time systems. Due to the continuously-valued clocks, their underlying state space is inherently uncountable. Hence, the methods that we describe for finite-state MDPs do not carry over directly to PTA. Furthermore, a good notion of witness for PTA should also take into account timing aspects. We define two kinds of subsystems for PTA, one for maximal and one for minimal reachability probabilities, respectively. As for MDPs, a subsystem of a PTA is called a witness for a lower-bounded constraint on the (maximal or minimal) reachability probability if it itself satisfies this constraint. Then, we show that witnessing subsystems of PTA induce Farkas certificates in certain finite-state quotients of the PTA. Vice versa, Farkas certificates of such a quotient induce witnesses of the PTA. Again, the support of the Farkas certificates corresponds to the states included in the subsystem. These insights are used to describe algorithms for the computation of minimal witnessing subsystems for PTA, with respect to three different notions of size. One of them counts the number of locations in the subsystem, while the other two take into account the possible clock valuations in the subsystem.

    Outline: 1 Introduction, 2 Preliminaries, 3 Farkas certificates, 4 New techniques for witnessing subsystems, 5 Probabilistic systems with low tree width, 6 Explications for probabilistic timed automata, 7 Conclusion.
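
    To make the certificate idea concrete in the simplest setting: in a Markov chain where every state can reach the target set T (the standard preprocessing assumption), any vector z satisfying z ≤ Qz + b componentwise, where Q is the transition matrix restricted to non-target states and b collects the one-step probabilities of entering T, is a pointwise lower bound on the true reachability probabilities. Hence z(s_0) ≥ λ certifies Pr(s_0 ⊨ reach T) ≥ λ, and checking a certificate costs a single matrix-vector product. The sketch below is an illustrative checker of ours, not code from the thesis or SWITSS:

```python
import numpy as np

def check_lower_bound_certificate(Q, b, z, s0, lam, tol=1e-12):
    """Check a Farkas-style certificate for Pr(s0 |= reach T) >= lam in a
    Markov chain (assumes every state can reach T, so that z <= Qz + b
    implies z is a pointwise lower bound on reachability probabilities)."""
    return bool(np.all(z <= Q @ z + b + tol) and z[s0] >= lam)

# Illustrative chain with two non-target states:
# from state 0: 0.5 to state 1, 0.5 to T; from state 1: 0.5 to 0, 0.5 to T.
Q = np.array([[0.0, 0.5],
              [0.5, 0.0]])
b = np.array([0.5, 0.5])

# True reachability probabilities are (1, 1); z = (0.9, 0.9) satisfies
# z <= Qz + b (0.9 <= 0.95), so it certifies the threshold 0.9 from state 0.
print(check_lower_bound_certificate(Q, b, np.array([0.9, 0.9]), 0, 0.9))    # True
print(check_lower_bound_certificate(Q, b, np.array([1.01, 1.01]), 0, 1.0))  # False
```

    The support of such a z (here, both states) is what induces the state space of a witnessing subsystem, as described above.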

    Unifying Two Views on Multiple Mean-Payoff Objectives in Markov Decision Processes

    We consider Markov decision processes (MDPs) with multiple limit-average (or mean-payoff) objectives. There exist two different views: (i) the expectation semantics, where the goal is to optimize the expected mean-payoff objective, and (ii) the satisfaction semantics, where the goal is to maximize the probability of runs such that the mean-payoff value stays above a given vector. We consider optimization with respect to both objectives at once, thus unifying the existing semantics. Precisely, the goal is to optimize the expectation while ensuring the satisfaction constraint. Our problem captures the notion of optimization with respect to strategies that are risk-averse (i.e., ensure a certain probabilistic guarantee). Our main results are as follows: First, we present algorithms for the decision problems which are always polynomial in the size of the MDP. We also show that an approximation of the Pareto curve can be computed in time polynomial in the size of the MDP and the approximation factor, but exponential in the number of dimensions. Second, we present a complete characterization of the strategy complexity (in terms of memory bounds and randomization) required to solve our problem. (Extended journal version of the LICS'15 paper.)
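
    For the pure expectation view, the classical occupation-measure linear program illustrates why these problems are polynomial in the size of the MDP. The sketch below solves it with scipy for a tiny MDP of our own construction and assumes a unichain model for simplicity; the paper's algorithms additionally encode the satisfaction constraint, which this sketch omits:

```python
import numpy as np
from scipy.optimize import linprog

# States x actions; P[s, a, t] = transition prob, r[s, a] = reward.
# Illustrative 2-state, 2-action unichain MDP.
P = np.zeros((2, 2, 2))
P[0, 0] = [0.9, 0.1]; P[0, 1] = [0.1, 0.9]
P[1, 0] = [0.9, 0.1]; P[1, 1] = [0.1, 0.9]
r = np.array([[0.0, 1.0],
              [2.0, 0.0]])

nS, nA = r.shape
# Variables x[s,a] >= 0: long-run frequency of being in s and playing a.
# Maximize sum x[s,a]*r[s,a]  s.t. outflow = inflow per state, sum x = 1.
c = -r.reshape(-1)                                  # linprog minimizes
A_eq = np.zeros((nS + 1, nS * nA))
for s in range(nS):
    for a in range(nA):
        A_eq[s, s * nA + a] += 1.0                  # outflow from s
        for t in range(nS):
            A_eq[t, s * nA + a] -= P[s, a, t]       # inflow into t
A_eq[nS, :] = 1.0                                   # frequencies sum to 1
b_eq = np.zeros(nS + 1); b_eq[nS] = 1.0

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("optimal mean payoff:", -res.fun)             # 1.5 for this MDP
print("state-action frequencies:", res.x.round(3))
```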

    IST Austria Thesis

    This dissertation concerns the automatic verification of probabilistic systems and programs with arrays by statistical and logical methods. Although statistical and logical methods are different in nature, we show that they can be successfully combined for system analysis. In the first part of the dissertation we present a new statistical algorithm for the verification of probabilistic systems with respect to unbounded properties, including linear temporal logic. Our algorithm often performs faster than previous approaches, and at the same time requires less information about the system. In addition, our method can be generalized to unbounded quantitative properties such as mean-payoff bounds. In the second part, we introduce two techniques for comparing probabilistic systems. Probabilistic systems are typically compared using the notion of equivalence, which requires the systems to assign equal probabilities to all behaviors. However, this notion is often too strict, since probabilities are typically only empirically estimated, and any imprecision may break the relation between processes. On the one hand, we propose to replace the Boolean notion of equivalence by a quantitative distance of similarity. For this purpose, we introduce a statistical framework for estimating distances between Markov chains based on their simulation runs, and we investigate which distances can be approximated in our framework. On the other hand, we propose to compare systems with respect to a new qualitative logic, which expresses that behaviors occur with probability one or with positive probability. This qualitative analysis is robust with respect to modeling errors and applicable to many domains. In the last part, we present a new quantifier-free logic for integer arrays, which allows us to express counting. Counting properties are prevalent in array-manipulating programs; however, they cannot be expressed in the quantified fragments of the theory of arrays. We present a decision procedure for our logic and provide several complexity results.
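
    As a flavor of the simulation-based distance estimation: one simple instantiation is to estimate the total-variation distance between the length-k trace distributions of two labelled chains from sampled runs. The chains, trace length, and sample size below are illustrative assumptions of ours, not the dissertation's actual setup:

```python
import random
from collections import Counter

def simulate(P, labels, start, k, rng):
    """Sample one length-k trace of observed labels from a Markov chain."""
    s, trace = start, [labels[start]]
    for _ in range(k - 1):
        s = rng.choices(range(len(P)), weights=P[s])[0]
        trace.append(labels[s])
    return tuple(trace)

def empirical_tv(P1, P2, labels, start, k, n, seed=0):
    """Estimate the total-variation distance between the length-k trace
    distributions of two chains from n simulation runs each."""
    rng = random.Random(seed)
    c1 = Counter(simulate(P1, labels, start, k, rng) for _ in range(n))
    c2 = Counter(simulate(P2, labels, start, k, rng) for _ in range(n))
    support = set(c1) | set(c2)
    return 0.5 * sum(abs(c1[t] - c2[t]) / n for t in support)

# Two similar 2-state chains over labels a/b, differing slightly in one row.
P1 = [[0.7, 0.3], [0.4, 0.6]]
P2 = [[0.6, 0.4], [0.4, 0.6]]
print(empirical_tv(P1, P2, labels="ab", start=0, k=5, n=20000))
```

    Such a naive plug-in estimator is biased upward when the trace space is large relative to the number of samples, which is exactly the kind of issue that determines which distances are approximable in such a framework.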