
    Probabilistic Bisimulations for PCTL Model Checking of Interval MDPs

    Verification of PCTL properties of MDPs with convex uncertainties has recently been investigated by Puggelli et al. However, model checking algorithms typically suffer from state-space explosion. In this paper, we use probabilistic bisimulation to reduce the size of such MDPs while preserving the PCTL properties they satisfy. We discuss the different interpretations of uncertainty in these models that have been studied in the literature and that result in two different definitions of bisimulation. We give algorithms to compute the quotients of these bisimulations in time polynomial in the size of the model and exponential in the uncertain branching. Finally, we show by a case study that, in practice, large models can have small branching and that a substantial state-space reduction can be achieved by our approach. (In Proceedings SynCoP 2014, arXiv:1403.784)
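
    The quotient construction can be pictured as a partition-refinement loop. The sketch below is a naive stand-in, not the authors' algorithm: it assumes a toy encoding imdp[s][a][t] = (lo, hi) of interval transition probabilities, and it merges states only when their per-action interval mass into every current block matches exactly, which is far coarser than the two bisimulations defined in the paper.

```python
# Naive signature-based partition refinement for a toy interval MDP.
# Assumed encoding (not the paper's): imdp[s][a][t] = (lo, hi).
def signature(s, imdp, partition):
    sig = []
    for a in sorted(imdp[s]):
        succ = imdp[s][a]
        for block in partition:
            # Interval of total mass this action can push into the block.
            lo = sum(succ.get(t, (0.0, 0.0))[0] for t in block)
            hi = sum(succ.get(t, (0.0, 0.0))[1] for t in block)
            sig.append((a, round(lo, 9), round(hi, 9)))
    return tuple(sig)

def quotient(states, labels, imdp):
    # Start from the labelling partition and refine until stable.
    blocks = {}
    for s in states:
        blocks.setdefault(labels[s], []).append(s)
    partition = [frozenset(b) for b in blocks.values()]
    while True:
        groups = {}
        for s in states:
            key = (labels[s], signature(s, imdp, partition))
            groups.setdefault(key, []).append(s)
        refined = [frozenset(b) for b in groups.values()]
        if len(refined) == len(partition):
            return refined  # each block becomes one quotient state
        partition = refined
```

    States in the same returned block would be collapsed into a single quotient state before model checking.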

    Robust Control of Uncertain Markov Decision Processes with Temporal Logic Specifications

    We present a method for designing robust controllers for dynamical systems with linear temporal logic specifications. We abstract the original system by a finite Markov decision process (MDP) whose transition probabilities lie in a specified uncertainty set. A robust control policy for the MDP is generated that maximizes the worst-case probability of satisfying the specification over all transition probabilities in the uncertainty set. To do this, we use a procedure from probabilistic model checking to combine the system model with an automaton representing the specification. This product MDP is then transformed into an equivalent form that satisfies the assumptions of stochastic shortest path dynamic programming. A robust version of dynamic programming allows us to solve for an $\epsilon$-suboptimal robust control policy with time complexity $O(\log 1/\epsilon)$ times that of the non-robust case. We then implement this control policy on the original dynamical system.
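
    The worst-case maximization admits a clean dynamic-programming sketch when the uncertainty set is interval-shaped, a special case of the convex sets handled in the paper. The following Python sketch (names and encoding are assumptions) runs robust value iteration for the worst-case probability of reaching a goal set: the inner adversary shifts all free probability mass toward low-value successors.

```python
# Robust value iteration, sketched for interval uncertainty sets.
# Assumed encoding: intervals[s][a] = {t: (lo, hi)}, sum(lo) <= 1 <= sum(hi).
def worst_case_expectation(dist, value):
    # Adversary: start from lower bounds, then give the remaining mass
    # to the successors with the smallest values first.
    probs = {t: lo for t, (lo, hi) in dist.items()}
    slack = 1.0 - sum(probs.values())
    for t in sorted(dist, key=lambda t: value[t]):
        lo, hi = dist[t]
        add = min(hi - lo, slack)
        probs[t] += add
        slack -= add
    return sum(p * value[t] for t, p in probs.items())

def robust_value_iteration(states, actions, intervals, goal, eps=1e-6):
    value = {s: 1.0 if s in goal else 0.0 for s in states}
    while True:
        new = {s: 1.0 if s in goal else
               max(worst_case_expectation(intervals[s][a], value)
                   for a in actions[s])
               for s in states}
        if max(abs(new[s] - value[s]) for s in states) < eps:
            return new  # worst-case satisfaction probability per state
        value = new
```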

    Multi-objective Robust Strategy Synthesis for Interval Markov Decision Processes

    Interval Markov decision processes (IMDPs) generalise classical MDPs by having interval-valued transition probabilities. They provide a powerful modelling tool for probabilistic systems with an additional variation or uncertainty that prevents knowledge of the exact transition probabilities. In this paper, we consider the problem of multi-objective robust strategy synthesis for interval MDPs, where the aim is to find a robust strategy that guarantees the satisfaction of multiple properties at the same time in the face of transition probability uncertainty. We first show that this problem is PSPACE-hard. Then, we provide a value-iteration-based decision algorithm to approximate the Pareto set of achievable points. We finally demonstrate the practical effectiveness of our proposed approaches by applying them to several case studies using a prototypical tool. (This article is a full version of a paper accepted to the Conference on Quantitative Evaluation of SysTems (QEST) 201)
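
    One standard way to approximate such a Pareto set is to sweep weight vectors and run vector-valued value iteration for each weighted-sum objective; the achieved points bound the set from within. The sketch below is only illustrative: it works on a plain MDP with finite-horizon total rewards and ignores both the interval uncertainty and the algorithmic details of the paper; all names and encodings are assumptions.

```python
import numpy as np

def weighted_point(P, rewards, w, horizon=100):
    # P[s] = {action: {successor: prob}}; rewards[s] = payoff vector at s.
    # Vector-valued value iteration, greedily picking the action that
    # maximizes the weighted sum; returns per-state achievable vectors.
    V = np.zeros((len(P), len(w)))
    for _ in range(horizon):
        new = np.zeros_like(V)
        for s in range(len(P)):
            best = None
            for succ in P[s].values():
                cand = rewards[s] + sum(p * V[t] for t, p in succ.items())
                if best is None or w @ cand > w @ best:
                    best = cand
            new[s] = best
        V = new
    return V

def pareto_points(P, rewards, s0, grid=11):
    # Sweep weights over the 2-objective simplex; the Pareto frontier of
    # achievable points lies within the envelope of the returned points.
    return [weighted_point(P, rewards,
                           np.array([i / (grid - 1), 1 - i / (grid - 1)]))[s0]
            for i in range(grid)]
```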

    Efficiency through Uncertainty: Scalable Formal Synthesis for Stochastic Hybrid Systems

    This work targets the development of an efficient abstraction method for formal analysis and control synthesis of discrete-time stochastic hybrid systems (SHS) with linear dynamics. The focus is on temporal logic specifications over both finite and infinite time horizons. The framework constructs a finite abstraction as a class of uncertain Markov models known as interval Markov decision processes (IMDPs). Then, a strategy that maximizes the satisfaction probability of the given specification is synthesized over the IMDP and mapped back to the underlying SHS. In contrast to existing formal approaches, which are by and large limited to finite-time properties and rely on conservative over-approximations, we show that the exact abstraction error can be computed as the solution of convex optimization problems and embedded into the IMDP abstraction. This error is later used in the synthesis step over both finite- and infinite-horizon specifications, mitigating the known state-space explosion problem. Our experimental validation of the new approach against existing abstraction-based approaches shows: (i) a significant (orders-of-magnitude) reduction of the abstraction error; (ii) marked speed-ups; and (iii) boosted scalability, allowing in particular the verification of models with more than 10 continuous variables.
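
    The heart of such an abstraction is computing, for every pair of partition cells, an interval bounding the transition probability between them. A hedged one-dimensional illustration follows, assuming dynamics x' = a*x + w with Gaussian noise w ~ N(0, sigma^2), a != 0, and sigma > 0; the paper's construction for general linear SHS, and its exact error computation via convex optimization, are considerably more involved.

```python
from math import erf, sqrt

def ncdf(x):
    # Standard normal CDF.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def cell_prob(x, a, sigma, lo, hi):
    # P(a*x + w in [lo, hi]) for w ~ N(0, sigma^2).
    return ncdf((hi - a * x) / sigma) - ncdf((lo - a * x) / sigma)

def transition_interval(src, dst, a, sigma):
    # Bounds the probability of jumping from cell src = [l, u] into dst,
    # over all continuous states x in src. The probability is unimodal in x
    # (peaking where a*x hits the centre of dst), so the maximum is at the
    # endpoints or the interior critical point, the minimum at an endpoint.
    l, u = src
    ends = [cell_prob(x, a, sigma, *dst) for x in (l, u)]
    x_star = (dst[0] + dst[1]) / (2.0 * a)
    peak = [cell_prob(x_star, a, sigma, *dst)] if l <= x_star <= u else []
    return min(ends), max(ends + peak)
```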

    Reachability analysis of uncertain systems using bounded-parameter Markov decision processes

    Verification of reachability properties for probabilistic systems is usually based on variants of Markov processes. Current methods assume an exact model of the dynamic behavior and are not suitable for realistic systems that operate in the presence of uncertainty and variability. This research note extends existing methods for bounded-parameter Markov decision processes (BMDPs) to solve the reachability problem. BMDPs are a generalization of MDPs that allows the modeling of uncertainty. Our results show that interval value iteration converges under the undiscounted reward criterion required to formulate the problems of maximizing the probability of reaching a set of desirable states or minimizing the probability of reaching an unsafe set. An analysis of the computational complexity is also presented.
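
    Interval value iteration can be sketched as two coupled fixpoint computations, one pessimistic and one optimistic, whose gap reflects the parameter uncertainty. The sketch below reuses the interval encoding assumed in the robust value-iteration sketch above (again an assumption, not the note's code) for maximal reachability with absorbing goal states.

```python
# Interval value iteration for maximal reachability in a BMDP (sketch).
# Assumed encoding: intervals[s][a] = {t: (lo, hi)}.
def extreme_expectation(dist, value, optimistic):
    probs = {t: lo for t, (lo, hi) in dist.items()}
    slack = 1.0 - sum(probs.values())
    # Optimistic: pour free mass onto high-value successors; else onto low.
    for t in sorted(dist, key=lambda t: value[t], reverse=optimistic):
        add = min(dist[t][1] - dist[t][0], slack)
        probs[t] += add
        slack -= add
    return sum(p * value[t] for t, p in probs.items())

def interval_reachability(states, actions, intervals, goal, eps=1e-6):
    bounds = []
    for optimistic in (False, True):
        V = {s: 1.0 if s in goal else 0.0 for s in states}
        while True:
            W = {s: 1.0 if s in goal else
                 max(extreme_expectation(intervals[s][a], V, optimistic)
                     for a in actions[s])
                 for s in states}
            if max(abs(W[s] - V[s]) for s in states) < eps:
                break
            V = W
        bounds.append(V)
    return bounds  # [lower, upper] reachability probability per state
```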

    Verification and synthesis for stochastic systems with temporal logic specifications

    The objective of this thesis is, first, to provide a formal framework for the verification of discrete-time, continuous-space stochastic systems with complex temporal specifications. Second, the approach developed for verification is extended to the synthesis of controllers that aim to maximize or minimize the probability of occurrence of temporal behaviors in stochastic systems. As these problems are generally undecidable or intractable to solve, approximation methods are employed in the form of finite-state abstractions, arising from a partition of the original system's domain, for which analysis is greatly simplified. The abstractions of choice in this work are interval-valued Markov chains (IMCs) which, unlike conventional discrete-time Markov chains, allow a non-deterministic range of transition probabilities between states instead of a fixed probability.

    Techniques for constructing IMC abstractions for two classes of systems are presented. Due to their inherent structure, which facilitates the estimation of reachable sets, mixed monotone systems with additive disturbances are shown to be efficiently amenable to IMC abstraction. Then, an abstraction procedure for polynomial systems that uses stochastic barrier functions computed via sum-of-squares programming is derived.

    Next, an algorithm for computing satisfaction bounds in IMCs with respect to so-called omega-regular properties is detailed. As probabilistic specifications require finding the set of initial states whose probability of fulfilling some behavior is below or above a certain threshold, this method may yield a set of states whose satisfaction status is undecided. An iterative, specification-guided partition refinement method is proposed to reduce conservatism in the abstraction until a precision threshold is met.

    Finally, similar interval-based finite abstractions are utilized to synthesize control policies for omega-regular objectives in systems with both a finite number of modes and a continuous set of available inputs. A notion of optimality for these policies is introduced, and a partition refinement scheme is presented to reach a desired level of optimality.
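
    The specification-guided refinement loop described above can be summarized in a few lines. The sketch below is schematic: abstract, bounds, and split are hypothetical placeholders for the thesis's IMC construction, satisfaction-bound computation, and cell-splitting rule.

```python
# Schematic refinement loop; abstract(), bounds(), and split() are
# hypothetical placeholders, not the thesis's implementation.
def verify_with_refinement(partition, threshold, tol, abstract, bounds, split):
    while True:
        imc = abstract(partition)   # partition -> IMC abstraction
        sat = bounds(imc)           # cell -> (lower, upper) sat. probability
        yes = {c for c, (lo, hi) in sat.items() if lo >= threshold}
        no = {c for c, (lo, hi) in sat.items() if hi < threshold}
        undecided = [c for c in partition if c not in yes and c not in no]
        tight = all(sat[c][1] - sat[c][0] < tol for c in undecided)
        if not undecided or tight:
            return yes, no, undecided
        # Split only the undecided cells, keeping decided ones intact.
        partition = [piece for c in partition
                     for piece in (split(c) if c in undecided else [c])]
```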