
    Symbolic Magnifying Lens Abstraction in Markov Decision Processes

    In this paper, we combine abstraction-refinement and symbolic techniques to fight the state-space explosion problem when model checking Markov decision processes (MDPs). The abstraction-refinement technique, called "magnifying-lens abstraction" (MLA), partitions the state space into regions and computes upper and lower bounds for reachability and safety properties on the regions, rather than on the states. To compute such bounds, MLA iterates over the regions, analyzing the concrete states of each region in turn, as if one were sliding a magnifying lens across the system to view the states. The algorithm adaptively refines the regions, using smaller regions where more detail is required, until the difference between the bounds is below a specified accuracy. The symbolic technique is based on multi-terminal binary decision diagrams (MTBDDs), which have been used extensively to provide compact encodings of probabilistic models. We introduce a symbolic version of the MLA algorithm, called "symbolic MLA", which combines the strengths of both techniques when verifying MDPs. An implementation of symbolic MLA in the probabilistic model checker PRISM and experimental results illustrating the advantages of our approach are presented.
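
    The Python sketch below illustrates the magnifying-lens idea for maximal reachability probabilities: region-level lower and upper bounds, a local value iteration over the concrete states of one region at a time, and refinement of regions whose bounds are too far apart. The MDP interface (mdp.actions, mdp.successors), the splitting heuristic in refine, and the fixed iteration count are illustrative assumptions, not the paper's implementation or its symbolic MTBDD version.

```python
# Minimal, illustrative sketch of magnifying-lens abstraction (MLA) for maximal
# reachability probabilities in an MDP.  Interfaces and heuristics are hypothetical.

def local_value_iteration(mdp, region, target, boundary, iters=100):
    """Value iteration restricted to the concrete states of one region.

    States outside the region are frozen at the values in `boundary`,
    so only the states under the 'lens' are analyzed concretely.
    """
    v = {s: (1.0 if s in target else 0.0) for s in region}
    for _ in range(iters):
        for s in region:
            if s in target:
                continue
            v[s] = max(
                sum(p * (v[t] if t in region else boundary[t])
                    for t, p in mdp.successors(s, a))
                for a in mdp.actions(s)
            )
    return v

def refine(regions, imprecise):
    """Split each imprecise region into two halves (a simplistic refinement)."""
    new = []
    for r in regions:
        if r in imprecise and len(r) > 1:
            states = sorted(r)
            mid = len(states) // 2
            new += [frozenset(states[:mid]), frozenset(states[mid:])]
        else:
            new.append(r)
    return new

def mla(mdp, regions, target, epsilon):
    """Region-level lower/upper bounds on max reachability, refined until tight."""
    lower = {r: 0.0 for r in regions}
    upper = {r: 1.0 for r in regions}
    while True:
        # One sweep of the lens over all regions; a full implementation repeats
        # sweeps until the region bounds stabilize before deciding to refine.
        for r in regions:
            lo_bnd = {s: lower[q] for q in regions if q is not r for s in q}
            up_bnd = {s: upper[q] for q in regions if q is not r for s in q}
            lower[r] = min(local_value_iteration(mdp, r, target, lo_bnd).values())
            upper[r] = max(local_value_iteration(mdp, r, target, up_bnd).values())
        imprecise = [r for r in regions if upper[r] - lower[r] > epsilon]
        if not imprecise:
            return lower, upper
        regions = refine(regions, imprecise)
        lower = {r: 0.0 for r in regions}
        upper = {r: 1.0 for r in regions}
```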

    Magnifying Lens Abstraction for Stochastic Games with Discounted and Long-run Average Objectives

    Turn-based stochastic games and their important subclass, Markov decision processes (MDPs), provide models for systems with both probabilistic and nondeterministic behavior. We consider turn-based stochastic games with two classical quantitative objectives: discounted-sum and long-run average objectives. The game models and the quantitative objectives are widely used in probabilistic verification, planning, optimal inventory control, network protocols, and performance analysis. Games and MDPs that model realistic systems often have very large state spaces, and probabilistic abstraction techniques are necessary to handle the state-space explosion. The commonly used full-abstraction techniques do not yield space savings for systems that have many states with similar values but not necessarily similar transition structure. A semi-abstraction technique, magnifying-lens abstraction (MLA), which clusters states based on value only, disregarding differences in their transition relations, was proposed for qualitative objectives (reachability and safety objectives). In this paper we extend the MLA technique to solve stochastic games with discounted-sum and long-run average objectives. We present an MLA-based abstraction-refinement algorithm for stochastic games and MDPs with discounted-sum objectives. For long-run average objectives, our solution works for all MDPs and for a subclass of stochastic games where every state has the same value.
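
    As a point of reference, the discounted-sum values that MLA bounds region-wise satisfy the classical Bellman fixpoint for turn-based stochastic games, sketched below; the (1-λ) normalization is one common convention, and the notation is schematic rather than taken from the paper.

```latex
% Classical fixpoint characterization of the discounted-sum value v of a
% turn-based stochastic game with discount factor \lambda \in (0,1), reward r,
% and transition function \delta; MLA computes region-wise bounds on v.
\[
  v(s) =
  \begin{cases}
    \max_{a \in A(s)} \Bigl[ (1-\lambda)\, r(s,a) + \lambda \sum_{s'} \delta(s,a)(s')\, v(s') \Bigr]
      & \text{if } s \text{ is a player-1 state,} \\[4pt]
    \min_{a \in A(s)} \Bigl[ (1-\lambda)\, r(s,a) + \lambda \sum_{s'} \delta(s,a)(s')\, v(s') \Bigr]
      & \text{if } s \text{ is a player-2 state.}
  \end{cases}
\]
```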

    Transient Reward Approximation for Continuous-Time Markov Chains

    We are interested in the analysis of very large continuous-time Markov chains (CTMCs) with many distinct rates. Such models arise naturally in the context of reliability analysis, e.g., in computer network performability analysis, in the study of power grids, of computer virus vulnerability, and of crowd dynamics. We use abstraction techniques together with novel algorithms for the computation of bounds on the expected final and accumulated rewards in continuous-time Markov decision processes (CTMDPs). These ingredients are combined in a partly symbolic and partly explicit (symblicit) analysis approach. In particular, we circumvent the use of multi-terminal decision diagrams, because the latter do not work well when facing a large number of different rates. We demonstrate the practical applicability and efficiency of the approach on two case studies. (Accepted for publication in IEEE Transactions on Reliability.)
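
    For orientation, the two quantities being bounded (expected instantaneous reward at time T and expected reward accumulated over [0, T]) can be computed exactly for a small, explicitly represented CTMC by uniformization, as in the sketch below. The function name, the truncation bound k_max, and the safety margin on the uniformization rate are illustrative choices, not part of the paper's symblicit algorithm.

```python
# Exact transient rewards of a small explicit CTMC via uniformization; this is
# the quantity the paper bounds for very large models with abstraction.
import numpy as np
from scipy.stats import poisson

def transient_rewards(Q, rho, pi0, T, k_max=200):
    """Expected instantaneous reward at time T and accumulated reward over [0, T].

    Q   : CTMC generator matrix (rows sum to 0)
    rho : state reward vector
    pi0 : initial distribution
    The series is truncated at k_max terms (an illustrative, not rigorous, cutoff).
    """
    lam = max(-np.diag(Q)) * 1.02 + 1e-12        # uniformization rate >= max exit rate
    P = np.eye(len(rho)) + Q / lam               # uniformized DTMC
    final, accumulated = 0.0, 0.0
    dist = pi0.copy()                            # pi0 * P^k at step k
    for k in range(k_max):
        final += poisson.pmf(k, lam * T) * (dist @ rho)
        # \int_0^T Pois(k; lam*t) dt = (1 - PoisCdf(k; lam*T)) / lam
        accumulated += (1.0 - poisson.cdf(k, lam * T)) / lam * (dist @ rho)
        dist = dist @ P
    return final, accumulated
```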

    A linear process algebraic format for probabilistic systems with data

    This paper presents a novel linear process algebraic format for probabilistic automata. The key ingredient is a symbolic transformation of probabilistic process algebra terms that incorporate data into this linear format while preserving strong probabilistic bisimulation. This generalises similar techniques for traditional process algebras with data and, more importantly, treats data and data-dependent probabilistic choice in a fully symbolic manner, paving the way to the symbolic analysis of parameterised probabilistic systems.
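
    As a rough illustration (schematic notation, not taken verbatim from the paper), a linear probabilistic process equation over data has one global parameter and a set of summands, each combining a condition, an action with data, and a data-dependent probabilistic choice of the next state, along the following lines.

```latex
% Schematic shape of a linear probabilistic process equation with data: a global
% parameter g of type G and, per summand i, a local variable d_i, an enabling
% condition c_i, an action a_i with data argument b_i, and a probabilistic choice
% over e_i with weight f_i selecting the next state n_i.
\[
  X(g : G) \;=\; \sum_{i \in I} \; \sum_{d_i : D_i}
     c_i(g, d_i) \Rightarrow a_i\bigl(b_i(g, d_i)\bigr)
     \sum_{e_i : E_i} f_i(g, d_i, e_i) : X\bigl(n_i(g, d_i, e_i)\bigr)
\]
```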

    From Small-Gain Theory to Compositional Construction of Barrier Certificates for Large-Scale Stochastic Systems

    This paper is concerned with a compositional approach for the construction of control barrier certificates for large-scale interconnected stochastic systems while synthesizing hybrid controllers against high-level logic properties. Our proposed methodology involves decomposition of interconnected systems into smaller subsystems and leverages the notion of control sub-barrier certificates of subsystems, enabling one to construct control barrier certificates of interconnected systems by employing some max-type small-gain conditions. The main goal is to synthesize hybrid controllers enforcing complex logic properties, including those represented by the accepting language of deterministic finite automata (DFA), while providing probabilistic guarantees on the satisfaction of the given specifications over bounded-time horizons. To do so, we propose a systematic approach to first decompose high-level specifications into simple reachability tasks by utilizing automata corresponding to the complement of the specifications. We then construct control sub-barrier certificates and synthesize local controllers for those simpler tasks, and combine them to obtain a hybrid controller that ensures satisfaction of the complex specification with a lower bound on the probability of satisfaction. To compute control sub-barrier certificates and corresponding local controllers, we provide two systematic approaches, based on a sum-of-squares (SOS) optimization program and on a counter-example guided inductive synthesis (CEGIS) framework. We finally apply our proposed techniques to two physical case studies.
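
    For context, a typical barrier-certificate argument of this kind, stated here in a generic discrete-time form with symbols (X_0, X_u, γ, λ, c) that are not taken from the paper, bounds the probability of reaching an unsafe set within a finite horizon as follows.

```latex
% Generic barrier-certificate conditions for a discrete-time stochastic system
% x_{k+1} = f(x_k, u_k, w_k) with initial set X_0 and unsafe set X_u: if a
% nonnegative function B and constants \gamma < \lambda, c \ge 0 satisfy
\[
  B(x) \le \gamma \ \ \forall x \in X_0, \qquad
  B(x) \ge \lambda \ \ \forall x \in X_u, \qquad
  \mathbb{E}\bigl[\, B(f(x,u,w)) \mid x, u \,\bigr] \le B(x) + c,
\]
% then the probability of reaching X_u within T steps from x_0 \in X_0 satisfies
\[
  \mathbb{P}\bigl\{\exists\, k \le T : x_k \in X_u \bigr\}
  \;\le\; \frac{\gamma + c\,T}{\lambda}.
\]
```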

    Symbolic Algorithms for Qualitative Analysis of Markov Decision Processes with Büchi Objectives

    We consider Markov decision processes (MDPs) with ω-regular specifications given as parity objectives. We consider the problem of computing the set of almost-sure winning states, from which the objective can be ensured with probability 1. The algorithms for the computation of the almost-sure winning set for parity objectives iteratively use the solutions for the almost-sure winning set for Büchi objectives (a special case of parity objectives). Our contributions are as follows. First, we present the first subquadratic symbolic algorithm to compute the almost-sure winning set for MDPs with Büchi objectives; our algorithm takes O(n·√m) symbolic steps, as compared to the previously known algorithm that takes O(n²) symbolic steps, where n is the number of states and m is the number of edges of the MDP. In practice MDPs have constant out-degree, and then our symbolic algorithm takes O(n·√n) symbolic steps, as compared to the previously known O(n²) symbolic-step algorithm. Second, we present a new algorithm, the win-lose algorithm, with the following two properties: (a) it iteratively computes subsets of the almost-sure winning set and of its complement, whereas all previous algorithms discover the almost-sure winning set only upon termination; and (b) it requires O(n·√K) symbolic steps, where K is the maximal number of edges of the strongly connected components (SCCs) of the MDP. The win-lose algorithm requires symbolic computation of SCCs. Third, we improve the algorithm for symbolic SCC computation; the previously known algorithm takes a linear number of symbolic steps, and our new algorithm improves the constants associated with that linear number of steps. In the worst case the previous algorithm takes 5n symbolic steps, whereas our new algorithm takes 4n symbolic steps.
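
    For comparison with the subquadratic symbolic algorithm, the sketch below spells out the classical explicit-state (roughly quadratic) fixpoint computation of the almost-sure winning set for a Büchi objective in an MDP; the data-structure interface (states, actions, succ as supports of the action distributions) is a hypothetical one chosen for illustration.

```python
# Classical explicit-state computation of the almost-sure winning set for a
# Büchi objective in an MDP: keep only states that, using actions whose whole
# support stays inside the current set, can reach the Büchi set with positive
# probability; repeat until a fixpoint is reached.  Interface is hypothetical.

def almost_sure_buchi(states, actions, succ, buchi):
    """states: iterable of states; actions[s]: iterable of actions of s;
    succ[(s, a)]: set of possible successors (support of the distribution);
    buchi: set of Büchi (accepting) states."""
    W = set(states)
    while True:
        # Actions allowed at s: those whose successors all stay inside W.
        allowed = {s: [a for a in actions[s] if succ[(s, a)] <= W] for s in W}
        # R: states of W that reach the Büchi set within W with positive probability.
        R = set(buchi) & W
        changed = True
        while changed:
            changed = False
            for s in W - R:
                if any(succ[(s, a)] & R for a in allowed[s]):
                    R.add(s)
                    changed = True
        # Keep only states that can still reach the Büchi set and retain an action.
        new_W = {s for s in R if allowed[s]}
        if new_W == W:
            return W
        W = new_W
```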