
    Parametric LTL on Markov Chains

    This paper is concerned with the verification of finite Markov chains against parametrized LTL (pLTL) formulas. In pLTL, the until modality is equipped with a bound that contains variables; e.g., $\Diamond_{\le x}\,\varphi$ asserts that $\varphi$ holds within $x$ time steps, where $x$ is a variable over the natural numbers. The central problem studied in this paper is to determine the set of parameter valuations $V_{\prec p}(\varphi)$ for which the probability of satisfying the pLTL formula $\varphi$ in a Markov chain meets a given threshold $\prec p$, where $\prec$ is a comparison on the reals and $p$ a probability. As determining the emptiness of $V_{>0}(\varphi)$ is undecidable for full pLTL, we consider several fragments: parametric reachability properties, a sub-logic of pLTL restricted to next and $\Diamond_{\le x}$, parametric Büchi properties, and finally a maximal subclass of pLTL for which emptiness of $V_{>0}(\varphi)$ is decidable. Comment: TCS Track B 201
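    As a small illustration of the notation above (not taken from the paper; the path notation and valuation domain are assumptions), the bounded eventually modality and the synthesised valuation set can be sketched as:

```latex
% Illustrative sketch only: path semantics of bounded eventually, where
% \pi[i..] denotes the suffix of path \pi starting at position i.
\pi \models \Diamond_{\le x}\,\varphi
  \quad\text{iff}\quad
  \exists\, i \le x \;.\; \pi[i..] \models \varphi

% The synthesised set: all valuations v of the bound variables X for which
% the satisfaction probability in Markov chain M meets the threshold.
V_{\prec p}(\varphi) \;=\;
  \bigl\{\, v : X \to \mathbb{N} \;\big|\; \Pr_{M}\bigl[\varphi[v]\bigr] \prec p \,\bigr\}
```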

    Multi-objective Robust Strategy Synthesis for Interval Markov Decision Processes

    Interval Markov decision processes (IMDPs) generalise classical MDPs by having interval-valued transition probabilities. They provide a powerful modelling tool for probabilistic systems whose transition probabilities are subject to variation or uncertainty and therefore not known exactly. In this paper, we consider the problem of multi-objective robust strategy synthesis for interval MDPs, where the aim is to find a robust strategy that guarantees the satisfaction of multiple properties at the same time in the face of the transition probability uncertainty. We first show that this problem is PSPACE-hard. Then, we provide a value-iteration-based decision algorithm to approximate the Pareto set of achievable points. We finally demonstrate the practical effectiveness of our proposed approaches by applying them to several case studies using a prototype tool. Comment: This article is a full version of a paper accepted to the Conference on Quantitative Evaluation of SysTems (QEST) 201
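    For intuition, a minimal sketch of robust value iteration for a single maximal reachability objective on an interval MDP is given below; the paper's multi-objective Pareto approximation is considerably more involved, and all data structures and names here are illustrative assumptions, not the authors' tool.

```python
# Sketch: max-min (robust) value iteration for reachability on an interval MDP.
# intervals[s][a][t] = (lo, hi) bounds on the probability of moving from s to t
# under action a; the adversary ("nature") picks the worst consistent distribution.

def worst_case_distribution(bounds, values):
    """Give every successor its lower bound, then push the remaining mass
    onto the lowest-valued successors (up to their upper bounds) so that
    the expected value is minimised."""
    dist = {t: lo for t, (lo, hi) in bounds.items()}
    slack = 1.0 - sum(dist.values())
    for t in sorted(bounds, key=lambda t: values[t]):
        lo, hi = bounds[t]
        add = min(hi - lo, slack)
        dist[t] += add
        slack -= add
        if slack <= 1e-12:
            break
    return dist

def robust_value_iteration(states, actions, intervals, targets, eps=1e-6):
    values = {s: (1.0 if s in targets else 0.0) for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s in targets:
                continue
            best = 0.0
            for a in actions(s):
                dist = worst_case_distribution(intervals[s][a], values)
                best = max(best, sum(p * values[t] for t, p in dist.items()))
            delta = max(delta, abs(best - values[s]))
            values[s] = best
        if delta < eps:
            return values
```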

    Probabilistic Reachability for Parametric Markov Models

    Given a parametric Markov model, we consider the problem of computing the formula expressing the probability of reaching a given set of states. To attack this principal problem, Daws has suggested first converting the Markov chain into a finite automaton, from which a regular expression is computed. Afterwards, this expression is evaluated to a closed-form expression representing the reachability probability. This paper investigates how this idea can be turned into an effective procedure. It turns out that the bottleneck lies in the exponential growth of the regular expression relative to the number of states. We therefore proceed differently, by tightly intertwining the regular expression computation with its evaluation. This allows us to arrive at an effective method that avoids the exponential blow-up in most practical cases. We give a detailed account of the approach, also extending it to parametric models with rewards and with non-determinism. Experimental evidence is provided, illustrating that our implementation provides meaningful insights on non-trivial models.
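    In the same spirit, a minimal sketch of state elimination on a parametric Markov chain is shown below, simplifying the rational functions after every elimination step rather than first building a regular expression; the representation and the tiny example chain are assumptions for illustration, not the authors' implementation.

```python
# Sketch: state elimination with on-the-fly simplification of rational functions.
import sympy as sp

def eliminate_state(P, s):
    """Remove state s, redirecting its incoming probability mass around it.
    P maps (source, target) pairs to rational functions over the parameters."""
    self_loop = P.get((s, s), sp.Integer(0))
    norm = 1 / (1 - self_loop)
    preds = [u for (u, v) in P if v == s and u != s]
    succs = [v for (u, v) in P if u == s and v != s]
    for u in preds:
        for v in succs:
            bypass = P[(u, s)] * norm * P[(s, v)]
            P[(u, v)] = sp.simplify(P.get((u, v), sp.Integer(0)) + bypass)
    for key in [k for k in P if s in k]:
        del P[key]

def reachability_function(P, init, target, states):
    for s in states:
        if s not in (init, target):
            eliminate_state(P, s)
    self_loop = P.get((init, init), sp.Integer(0))
    return sp.simplify(P.get((init, target), sp.Integer(0)) / (1 - self_loop))

# Tiny example: from 0 go to 1 with probability p (else fail to state 3),
# from 1 reach the target 2 with probability q (else retry from 0).
p, q = sp.symbols('p q', positive=True)
P = {(0, 1): p, (0, 3): 1 - p, (1, 2): q, (1, 0): 1 - q}
print(reachability_function(P, init=0, target=2, states=[0, 1, 2, 3]))
# -> p*q/(1 - p*(1 - q)), up to sympy's normal form
```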

    Robust Control of Uncertain Markov Decision Processes with Temporal Logic Specifications

    We present a method for designing robust controllers for dynamical systems with linear temporal logic specifications. We abstract the original system by a finite Markov Decision Process (MDP) that has transition probabilities in a specified uncertainty set. A robust control policy for the MDP is generated that maximizes the worst-case probability of satisfying the specification over all transition probabilities in the uncertainty set. To do this, we use a procedure from probabilistic model checking to combine the system model with an automaton representing the specification. This new MDP is then transformed into an equivalent form that satisfies assumptions for stochastic shortest path dynamic programming. A robust version of dynamic programming allows us to solve for an $\epsilon$-suboptimal robust control policy with time complexity $O(\log 1/\epsilon)$ times that for the non-robust case. We then implement this control policy on the original dynamical system.
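    The product step mentioned above can be pictured with the following sketch, which pairs MDP states with automaton states; the deterministic automaton interface and the convention of reading the label of the successor state are assumptions for illustration.

```python
# Sketch: product of an MDP with a deterministic automaton for the specification.
# mdp_trans[s][a] = {successor: probability}; labeling[s] = set of atomic
# propositions true in s; aut_delta(q, label) = next automaton state.

def product_mdp(mdp_states, mdp_actions, mdp_trans, labeling,
                aut_states, aut_delta):
    prod_states = [(s, q) for s in mdp_states for q in aut_states]
    prod_trans = {}
    for (s, q) in prod_states:
        prod_trans[(s, q)] = {}
        for a in mdp_actions(s):
            dist = {}
            for t, prob in mdp_trans[s][a].items():
                # The automaton reads the label of the successor state.
                q_next = aut_delta(q, frozenset(labeling[t]))
                dist[(t, q_next)] = dist.get((t, q_next), 0.0) + prob
            prod_trans[(s, q)][a] = dist
    return prod_states, prod_trans
```

    Robust dynamic programming, as described in the abstract, would then be run on this product after the transformation to a stochastic shortest path form.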

    Qualitative Reachability for Open Interval Markov Chains

    Interval Markov chains extend classical Markov chains by allowing transition probabilities to be described by intervals rather than exact values. While the standard formulation of interval Markov chains features closed intervals, previous work has also considered open interval Markov chains, in which the intervals can also be open or half-open. In this paper we focus on qualitative reachability problems for open interval Markov chains, which ask whether the optimal (maximum or minimum) probability with which a certain set of states can be reached is equal to 0 or 1. We present polynomial-time algorithms for these problems for both of the standard semantics of interval Markov chains. Our methods do not rely on the closure of open intervals, in contrast to previous approaches for open interval Markov chains, and can characterise situations in which probability 0 or 1 can be attained not exactly but arbitrarily closely. Comment: Full version of a paper published at RP 201
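    As a rough building block for such qualitative questions, the sketch below checks graph reachability over transitions that can carry positive probability (upper bound above 0, regardless of whether the interval is open or closed); it ignores the requirement that each row forms a full distribution as well as the two IMC semantics, both of which the paper's actual algorithms handle, so it is an illustrative assumption only.

```python
# Sketch: can the target be reached using only transitions whose interval
# admits a strictly positive probability?  intervals[s][t] = (lo, hi).
from collections import deque

def positively_reachable(intervals, init, targets):
    frontier, seen = deque([init]), {init}
    while frontier:
        s = frontier.popleft()
        if s in targets:
            return True
        for t, (lo, hi) in intervals.get(s, {}).items():
            # Some value strictly above 0 lies in the interval iff hi > 0,
            # for closed, open and half-open intervals alike.
            if hi > 0 and t not in seen:
                seen.add(t)
                frontier.append(t)
    return False
```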

    Parameter Synthesis for Markov Models

    Markov chain analysis is a key technique in reliability engineering. A practical obstacle is that all probabilities in Markov models need to be known. However, system quantities such as failure rates or packet loss ratios are often not known, or only partially known. This motivates considering parametric models, in which transitions are labelled with functions over parameters. Whereas traditional Markov chain analysis evaluates a reliability metric for a single, fixed set of probabilities, analysing parametric Markov models focuses on synthesising parameter values that establish a given reliability or performance specification $\varphi$. Examples are: what component failure rates ensure that the probability of a system breakdown is below 0.00000001?, or which failure rates maximise reliability? This paper presents various analysis algorithms for parametric Markov chains and Markov decision processes. We focus on three problems: (a) do all parameter values within a given region satisfy $\varphi$?, (b) which regions satisfy $\varphi$ and which ones do not?, and (c) an approximate version of (b) focusing on covering a large fraction of all possible parameter values. We give a detailed account of the various algorithms, present a software tool realising these techniques, and report on an extensive experimental evaluation on benchmarks that span a wide range of applications. Comment: 38 pages
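    To picture problems (b) and (c), the sketch below splits the parameter space into rectangular regions and classifies each by grid sampling against a threshold; real tools give sound verdicts for entire regions (e.g. via parameter lifting or SMT solving), so this sampler and its placeholder reachability function are illustrative assumptions only.

```python
# Sketch: unsound grid-based classification of a rectangular parameter region.

def reach_prob(p, q):
    # Placeholder closed-form reachability probability over two parameters.
    return p * q / (1 - p * (1 - q))

def classify_region(region, threshold=0.5, samples=5):
    """Sample the region on a grid and report whether all, none, or only
    some of the samples meet the threshold (the last case calls for
    refining the region further)."""
    (p_lo, p_hi), (q_lo, q_hi) = region
    verdicts = []
    for i in range(samples):
        for j in range(samples):
            pv = p_lo + (p_hi - p_lo) * i / (samples - 1)
            qv = q_lo + (q_hi - q_lo) * j / (samples - 1)
            verdicts.append(reach_prob(pv, qv) >= threshold)
    if all(verdicts):
        return 'all samples satisfy the threshold'
    if not any(verdicts):
        return 'no sample satisfies the threshold'
    return 'mixed: refine this region'

print(classify_region(((0.4, 0.9), (0.4, 0.9))))
```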