
    Using mathematical programming to solve Factored Markov Decision Processes with Imprecise Probabilities

    This paper investigates Factored Markov Decision Processes with Imprecise Probabilities (MDPIPs); that is, Factored Markov Decision Processes (MDPs) where the transition probabilities are imprecisely specified. We derive efficient approximate solutions for Factored MDPIPs based on mathematical programming. To do this, we extend previous linear programming approaches for linear approximations in Factored MDPs, resulting in a multilinear formulation for robust “maximin” linear approximations in Factored MDPIPs. By exploiting the factored structure in MDPIPs we are able to demonstrate orders-of-magnitude reductions in solution time over standard exact non-factored approaches, in exchange for relatively low approximation errors, on a difficult class of benchmark problems with millions of states.
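    The listing above does not reproduce the multilinear program itself. Purely as a hedged sketch of the underlying maximin idea, the Python fragment below runs robust value iteration on a tiny flat (non-factored) MDPIP with interval transition probabilities; the states, actions, rewards, and interval bounds are all hypothetical, and this is not the paper's factored formulation.

    # A minimal sketch only: robust ("maximin") value iteration for a tiny, flat
    # (non-factored) MDPIP with interval transition probabilities.  The paper derives a
    # multilinear-programming formulation for linear value approximations in *factored*
    # MDPIPs; everything below (states, actions, rewards, bounds) is hypothetical.

    states = [0, 1]
    actions = ["a", "b"]
    gamma = 0.9
    reward = {(0, "a"): 0.0, (0, "b"): 1.0, (1, "a"): 2.0, (1, "b"): 0.0}
    # Interval transition probabilities: bounds[(s, a)][s'] = (lower, upper) on P(s' | s, a).
    bounds = {
        (0, "a"): [(0.2, 0.5), (0.5, 0.8)],
        (0, "b"): [(0.6, 0.9), (0.1, 0.4)],
        (1, "a"): [(0.1, 0.3), (0.7, 0.9)],
        (1, "b"): [(0.4, 0.6), (0.4, 0.6)],
    }

    def worst_case_expectation(interval_row, values):
        """Minimise sum_j p_j * values[j] over rows p with lo_j <= p_j <= hi_j, sum p_j = 1.
        Greedy: start every p_j at its lower bound, then push the remaining probability
        mass towards the successor states with the smallest values."""
        lo = [l for l, _ in interval_row]
        hi = [u for _, u in interval_row]
        p = lo[:]
        slack = 1.0 - sum(lo)
        for j in sorted(range(len(values)), key=lambda j: values[j]):
            add = min(hi[j] - lo[j], slack)
            p[j] += add
            slack -= add
        return sum(p[j] * values[j] for j in range(len(values)))

    V = [0.0 for _ in states]
    for _ in range(200):  # a fixed number of sweeps is enough for this toy example
        V = [
            max(reward[s, a] + gamma * worst_case_expectation(bounds[s, a], V) for a in actions)
            for s in states
        ]
    print(V)  # robust (maximin) state values under the interval transition model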

    Model checking for imprecise Markov chains.

    We extend probabilistic computational tree logic for expressing properties of Markov chains to imprecise Markov chains, and provide an efficient algorithm for model checking of imprecise Markov chains. Thereby, we provide a formal framework to answer a very wide range of questions about imprecise Markov chains, in a systematic and computationally efficient way.
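    As a toy, hedged illustration of the kind of query such a model checker answers (a bounded "eventually" property), the sketch below computes lower and upper reachability probabilities for a made-up two-state imprecise Markov chain given by a finite set of candidate transition matrices; it is not the paper's algorithm.

    # Hypothetical illustration (not the algorithm of the paper): lower and upper
    # probabilities of reaching a target state within k steps, for an imprecise Markov
    # chain described by a finite set of candidate transition matrices.  At every step an
    # adversary (resp. an optimist) may pick any candidate matrix.  The chain is made up.
    matrices = [
        [[0.7, 0.3], [0.4, 0.6]],  # candidate transition matrix 1
        [[0.9, 0.1], [0.2, 0.8]],  # candidate transition matrix 2
    ]
    target = {1}
    k = 10

    def reach_probabilities(pick):
        """Backward recursion: v[x] <- pick over candidate matrices P of sum_y P[x][y]*v[y]."""
        v = [1.0 if x in target else 0.0 for x in range(2)]
        for _ in range(k):
            v = [
                1.0 if x in target
                else pick(sum(P[x][y] * v[y] for y in range(2)) for P in matrices)
                for x in range(2)
            ]
        return v

    print("lower:", reach_probabilities(min))
    print("upper:", reach_probabilities(max))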

    Hitting times and probabilities for imprecise Markov chains

    We consider the problem of characterising expected hitting times and hitting probabilities for imprecise Markov chains. To this end, we consider three distinct ways in which imprecise Markov chains have been defined in the literature: as sets of homogeneous Markov chains, as sets of more general stochastic processes, and as game-theoretic probability models. Our first contribution is that all these different types of imprecise Markov chains have the same lower and upper expected hitting times, and similarly the hitting probabilities are the same for these three types. Moreover, we provide a characterisation of these quantities that directly generalises a similar characterisation for precise, homogeneous Markov chains.
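    For a precise, homogeneous chain with transition matrix $T$, the expected hitting times of a set $A$ are the minimal non-negative solution of a well-known fixed-point system; a natural reading of the generalisation mentioned above replaces the expectation with the lower transition operator $\underline{T}$. The display below sketches that standard form under this assumption and is not a quotation of the paper's result.

    \[
      h(x) \;=\;
      \begin{cases}
        0 & x \in A,\\[2pt]
        1 + \sum_{y} T(x,y)\, h(y) & x \notin A,
      \end{cases}
      \qquad
      \underline{h}(x) \;=\;
      \begin{cases}
        0 & x \in A,\\[2pt]
        1 + \bigl(\underline{T}\,\underline{h}\bigr)(x) & x \notin A.
      \end{cases}
    \]

    In the precise case the hitting probabilities satisfy the analogous system with boundary value $1$ on $A$ and $h(x) = \sum_{y} T(x,y)\, h(y)$ off $A$.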

    Abstractions of stochastic hybrid systems

    Many control systems have large or infinite state spaces that cannot be easily abstracted. One method to analyse and verify such systems is reachability analysis, frequently used for air traffic control and power plants. Because complete information about the environment is lacking and unpredicted changes may occur, a stochastic approach is a viable alternative. In this paper, different ways of introducing reachability under uncertainty are presented. A new concept of stochastic bisimulation is introduced and its connection with reachability analysis is established. The work is mainly motivated by safety-critical situations in air traffic control (such as collision detection and avoidance), and the formal tools are based on stochastic analysis.

    A statistical inference method for the stochastic reachability analysis.

    The main contribution of this paper is the characterisation of the reachability problem associated with stochastic hybrid systems in terms of imprecise probabilities. This provides the connection between the reachability problem and Bayesian statistics. Using generalised Bayesian statistical inference, a new concept of conditional reach set probabilities is defined. Possible algorithms to compute the reach set probabilities are then derived.
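    The abstract does not spell the quantities out; purely for orientation, a common way to phrase stochastic reachability over a horizon $[0,T]$, together with the lower and upper versions induced by a set $\mathcal{P}$ of candidate probability laws, is sketched below. This generic form is an assumption, not necessarily the paper's definition.

    \[
      P_{\mathrm{reach}}(A) \;=\; P\bigl(\exists\, t \in [0,T]:\ x_t \in A\bigr),
      \qquad
      \underline{P}_{\mathrm{reach}}(A) \;=\; \inf_{P \in \mathcal{P}} P_{\mathrm{reach}}(A),
      \qquad
      \overline{P}_{\mathrm{reach}}(A) \;=\; \sup_{P \in \mathcal{P}} P_{\mathrm{reach}}(A).
    \]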

    Verification of Uncertain POMDPs Using Barrier Certificates

    We consider a class of partially observable Markov decision processes (POMDPs) with uncertain transition and/or observation probabilities. The uncertainty takes the form of probability intervals. Such uncertain POMDPs can be used, for example, to model autonomous agents with limited-accuracy sensors, or agents undergoing sudden component failure or structural damage [1]. Given an uncertain POMDP representation of the autonomous agent, our goal is to propose a method for checking whether the system will achieve optimal performance while not violating a safety requirement (e.g. on fuel level or velocity). To this end, we cast the POMDP problem into a switched-system scenario. We then take advantage of this switched-system characterization and propose a method based on barrier certificates for optimality and/or safety verification. We show that the verification task can be carried out computationally by sum-of-squares programming. We illustrate the efficacy of our method by applying it to a Mars rover exploration example. Comment: 8 pages, 4 figures.
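    The abstract does not state the certificate conditions. For orientation only, a standard supermartingale-style barrier certificate for the safety of a discrete-time switched stochastic system looks roughly as follows; this generic form is an assumption, not necessarily the exact conditions used in the paper.

    \[
      B(x) \ge 0 \;\; \forall x, \qquad
      B(x) \le \gamma \;\; \forall x \in X_0, \qquad
      B(x) \ge 1 \;\; \forall x \in X_u,
    \]
    \[
      \mathbb{E}\bigl[\, B(x_{k+1}) \mid x_k = x,\ \sigma_k = \sigma \,\bigr] \;\le\; B(x)
      \quad \forall x,\ \forall \sigma .
    \]

    If a polynomial $B$ satisfying these conditions exists (each condition can be imposed as a sum-of-squares constraint), then the probability of ever reaching the unsafe set $X_u$ from the initial set $X_0$ is at most $\gamma$; here $\sigma$ ranges over the switching modes.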

    Recent advances in imprecise-probabilistic graphical models

    We summarise and provide pointers to recent advances in inference and identification for specific types of probabilistic graphical models using imprecise probabilities. Robust inferences can be made in so-called credal networks when the local models attached to their nodes are imprecisely specified as conditional lower previsions, by using exact algorithms whose complexity is comparable to that of their precise-probabilistic counterparts.
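    As a hedged, brute-force illustration of what a robust inference in a credal network means (not the efficient exact algorithms the entry refers to), the sketch below computes the lower expectation of a gamble on a hypothetical two-node network Y -> X whose local models are finite sets of extreme points; all numbers are made up.

    # Hypothetical two-node credal network Y -> X, brute force only: the survey refers to
    # efficient exact algorithms; this sketch merely shows what a lower expectation over a
    # credal network means.  All numbers are invented for illustration.
    from itertools import product

    # Credal set for the root Y: extreme points (candidate marginals over {0, 1}).
    K_Y = [[0.3, 0.7], [0.5, 0.5]]
    # Credal sets for X given each value of Y: extreme points of P(X | Y = y).
    K_X_given_Y = {
        0: [[0.8, 0.2], [0.6, 0.4]],
        1: [[0.1, 0.9], [0.3, 0.7]],
    }
    f = [0.0, 10.0]  # a gamble (real-valued function) on X

    def lower_expectation():
        best = float("inf")
        # Under strong independence the optimum is attained at a combination of extreme
        # points of the local credal sets, so it suffices to enumerate those combinations.
        for p_y, p_x0, p_x1 in product(K_Y, K_X_given_Y[0], K_X_given_Y[1]):
            cond = {0: p_x0, 1: p_x1}
            value = sum(p_y[y] * cond[y][x] * f[x] for y in (0, 1) for x in (0, 1))
            best = min(best, value)
        return best

    print(lower_expectation())  # lower prevision of f under this credal network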