    Different Approaches on Stochastic Reachability as an Optimal Stopping Problem

    Reachability analysis is at the core of model checking of timed systems. For stochastic hybrid systems, this safety verification method has little support, mainly because of the complexity and difficulty of the associated mathematical problems. In this paper, we develop two main directions for studying stochastic reachability as an optimal stopping problem. The first approach studies the hypotheses under which dynamic programming applies to the optimal stopping problem for stochastic hybrid systems. In the second approach, we investigate the reachability problem using approximations of stochastic hybrid systems. The main difficulty is proving the convergence of the value functions of the approximating processes to the value function of the initial process; an original proof is provided.
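    As a rough illustration (a standard formulation under textbook assumptions, not taken verbatim from the paper), the reachability probability for a target set A can be written as the value of an optimal stopping problem over the underlying process:

        V(x) = \sup_{\tau \le T} \mathbb{E}_x\left[ \mathbf{1}_A(X_\tau) \right]

    Here X = (X_t) is the stochastic hybrid process started at x, A is the target (or unsafe) set, T is the time horizon, and the supremum ranges over stopping times \tau. Dynamic programming characterises V as a fixed point, and the convergence question mentioned above asks when the value functions of approximating processes converge to this V.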

    Multi-objective Robust Strategy Synthesis for Interval Markov Decision Processes

    Interval Markov decision processes (IMDPs) generalise classical MDPs by having interval-valued transition probabilities. They provide a powerful modelling tool for probabilistic systems whose variation or uncertainty prevents exact knowledge of the transition probabilities. In this paper, we consider the problem of multi-objective robust strategy synthesis for interval MDPs, where the aim is to find a robust strategy that guarantees the satisfaction of multiple properties at the same time in the face of transition probability uncertainty. We first show that this problem is PSPACE-hard. Then, we provide a value-iteration-based decision algorithm to approximate the Pareto set of achievable points. We finally demonstrate the practical effectiveness of our proposed approaches by applying them to several case studies using a prototypical tool.
    Comment: This article is a full version of a paper accepted to the Conference on Quantitative Evaluation of SysTems (QEST) 201
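    The abstract does not spell out the algorithm; the following is a minimal sketch of the robust fixed point behind value iteration on an IMDP, for a single maximal-reachability objective rather than the paper's multi-objective Pareto construction, with all identifiers invented for the example.

        # Sketch only: robust value iteration for max reachability in an interval MDP.
        # Nature resolves every interval-valued distribution pessimistically.

        def worst_case_expectation(lo, hi, values):
            """Minimise sum_s p[s]*values[s] over p with lo <= p <= hi and sum(p) == 1,
            greedily pushing the free probability mass onto the lowest-valued successors."""
            order = sorted(range(len(values)), key=lambda s: values[s])
            p = list(lo)
            budget = 1.0 - sum(lo)
            for s in order:
                extra = min(hi[s] - lo[s], budget)
                p[s] += extra
                budget -= extra
            return sum(p[s] * values[s] for s in range(len(values)))

        def robust_value_iteration(states, actions, intervals, target, eps=1e-6):
            """intervals[(s, a)] = (lo, hi): per-successor lower/upper probability bounds."""
            v = [1.0 if s in target else 0.0 for s in states]
            while True:
                delta = 0.0
                for s in states:
                    if s in target:
                        continue
                    best = 0.0
                    for a in actions(s):
                        lo, hi = intervals[(s, a)]
                        best = max(best, worst_case_expectation(lo, hi, v))
                    delta = max(delta, abs(best - v[s]))
                    v[s] = best
                if delta < eps:
                    return v

    Maximising over actions against this pessimistic resolution of the intervals yields a value that a robust strategy can guarantee; the paper's value-iteration-based algorithm instead approximates a Pareto set over several objectives at once.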

    Transient Reward Approximation for Continuous-Time Markov Chains

    We are interested in the analysis of very large continuous-time Markov chains (CTMCs) with many distinct rates. Such models arise naturally in reliability analysis, e.g., in computer network performability analysis, power grids, and computer virus vulnerability, as well as in the study of crowd dynamics. We use abstraction techniques together with novel algorithms for the computation of bounds on the expected final and accumulated rewards in continuous-time Markov decision processes (CTMDPs). These ingredients are combined in a partly symbolic and partly explicit (symblicit) analysis approach. In particular, we circumvent the use of multi-terminal decision diagrams, because the latter do not work well when facing a large number of different rates. We demonstrate the practical applicability and efficiency of the approach on two case studies.
    Comment: Accepted for publication in IEEE Transactions on Reliability
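    For concreteness, the quantity being bounded, an expected reward at a time horizon, can be computed exactly for a small CTMC by uniformisation. The sketch below is a baseline illustration of that quantity, not the paper's abstraction-based bounding technique, and all names are illustrative.

        # Expected final (instantaneous) reward of a small CTMC at time t via uniformisation.
        import numpy as np
        from scipy.stats import poisson

        def expected_final_reward(Q, pi0, r, t, tol=1e-12):
            lam = float(max(-np.diag(Q)))               # uniformisation rate >= largest exit rate
            P = np.eye(len(r)) + np.asarray(Q) / lam    # uniformised discrete-time chain
            k_max = int(poisson.ppf(1 - tol, lam * t))  # truncate the Poisson sum
            weights = poisson.pmf(np.arange(k_max + 1), lam * t)
            pi = np.asarray(pi0, dtype=float)
            transient = np.zeros_like(pi)
            for k in range(k_max + 1):
                transient += weights[k] * pi            # Poisson-weighted mixture of P^k steps
                pi = pi @ P
            return float(transient @ np.asarray(r, dtype=float))

    The paper's contribution is to avoid building such matrices explicitly for very large chains, instead deriving bounds on these rewards through abstraction into CTMDPs.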

    New insights on stochastic reachability

    In this paper, we give new characterizations of the stochastic reachability problem for stochastic hybrid systems in the language of different theories that can be employed to study stochastic processes (Markov processes, potential theory, optimal control). These characterizations are further used to obtain the probabilities involved in stochastic reachability as viscosity solutions of certain variational inequalities.
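    For orientation (a standard obstacle-problem form under textbook assumptions; the paper's exact inequalities may differ), the value function of the associated optimal stopping problem solves a variational inequality of the type

        \min\{\, -\mathcal{L}V(x),\; V(x) - \mathbf{1}_A(x) \,\} = 0,

    where \mathcal{L} is the generator of the process and \mathbf{1}_A the indicator of the target set; the solution is understood in the viscosity sense when classical smoothness is unavailable.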

    Explicit Model Checking of Very Large MDP using Partitioning and Secondary Storage

    The applicability of model checking is hindered by the state space explosion problem in combination with limited amounts of main memory. To extend its reach, the large available capacities of secondary storage such as hard disks can be exploited. Due to the specific performance characteristics of secondary storage technologies, specialised algorithms are required. In this paper, we present a technique to use secondary storage for probabilistic model checking of Markov decision processes. It combines state space exploration based on partitioning with a block-iterative variant of value iteration over the same partitions for the analysis of probabilistic reachability and expected-reward properties. A sparse matrix-like representation is used to store partitions on secondary storage in a compact format. All file accesses are sequential, and compression can be used without affecting runtime. The technique has been implemented within the Modest Toolset. We evaluate its performance on several benchmark models of up to 3.5 billion states. In the analysis of time-bounded properties on real-time models, our method neutralises the state space explosion induced by the time bound in its entirety.
    Comment: The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-24953-7_1
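    A minimal sketch of the block-iterative idea follows, assuming partitions of the transition relation are stored as pickled sparse rows on disk and the value vector fits in memory; the actual Modest Toolset implementation differs, and all file layouts and helper names here are assumptions.

        # Sketch only: block-iterative value iteration for max reachability, streaming
        # one partition of the MDP's sparse transition rows from disk at a time.
        import pickle

        def load_partition(path):
            with open(path, "rb") as f:
                return pickle.load(f)   # {state: [(action, [(prob, succ), ...]), ...]}

        def block_value_iteration(partition_paths, values, target, eps=1e-6):
            """values: state -> current estimate, with target states initialised to 1.0;
            kept fully in memory here for simplicity."""
            while True:
                sweep_delta = 0.0
                for path in partition_paths:            # sequential file accesses only
                    rows = load_partition(path)
                    before = {s: values[s] for s in rows}
                    delta = eps
                    while delta >= eps:                 # iterate this block until it stabilises
                        delta = 0.0
                        for s, choices in rows.items():
                            if s in target:
                                continue
                            new = max(sum(p * values[t] for p, t in succs)
                                      for _, succs in choices)
                            delta = max(delta, abs(new - values[s]))
                            values[s] = new
                    sweep_delta = max(sweep_delta,
                                      max(abs(values[s] - before[s]) for s in rows))
                if sweep_delta < eps:
                    return values

    Keeping file accesses sequential and confined to one partition at a time is what allows compression and secondary storage to be used without hurting runtime, as the abstract describes.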