76 research outputs found
Probabilistic Bisimulations for PCTL Model Checking of Interval MDPs
Verification of PCTL properties of MDPs with convex uncertainties has been
investigated recently by Puggelli et al. However, model checking algorithms
typically suffer from state space explosion. In this paper, we address
probabilistic bisimulation to reduce the size of such MDPs while preserving
the PCTL properties they satisfy. We discuss the different interpretations of
uncertainty in these models that have been studied in the literature, which
result in two different definitions of bisimulation. We give algorithms to compute
the quotients of these bisimulations in time polynomial in the size of the
model and exponential in the uncertain branching. Finally, we show by a case
study that large models in practice can have small branching and that a
substantial state space reduction can be achieved by our approach.
Comment: In Proceedings SynCoP 2014, arXiv:1403.784
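The quotient construction rests on partition refinement. As a rough illustration, here is a minimal sketch for the simpler point-valued case (an ordinary MDP without interval uncertainty, which the paper's algorithms extend to); the state/action/probability encoding is ours, not the paper's:

```python
def bisimulation_quotient(states, actions, prob, labels):
    """Coarsest probabilistic bisimulation for a point-valued MDP.
    prob[(s, a)] maps successor state -> probability (absent pairs
    have no transition). Two states end up in the same block iff they
    carry the same label and, for every action, assign equal total
    probability to every bisimulation class."""
    # Start from the partition induced by state labels.
    block_of = {s: labels[s] for s in states}
    while True:
        def signature(s):
            # Current block plus, per action, mass into each block.
            sig = [block_of[s]]
            for a in actions:
                agg = {}
                for t, p in prob.get((s, a), {}).items():
                    agg[block_of[t]] = agg.get(block_of[t], 0.0) + p
                sig.append((a, tuple(sorted(agg.items()))))
            return tuple(sig)
        ids, new_block = {}, {}
        for s in states:
            sig = signature(s)
            if sig not in ids:
                ids[sig] = len(ids)
            new_block[s] = ids[sig]
        if new_block == block_of:
            return block_of  # fixed point: the partition is stable
        block_of = new_block
```

Since each round can only split blocks, the loop terminates after at most |states| refinements, giving the polynomial behaviour in the model size that the abstract mentions (the exponential cost of uncertain branching only appears in the interval setting).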
Algorithms for Game Metrics
Simulation and bisimulation metrics for stochastic systems provide a
quantitative generalization of the classical simulation and bisimulation
relations. These metrics capture the similarity of states with respect to
quantitative specifications written in the quantitative {\mu}-calculus and
related probabilistic logics. We first show that the metrics provide a bound
for the difference in long-run average and discounted average behavior across
states, indicating that the metrics can be used both in system verification,
and in performance evaluation. For turn-based games and MDPs, we provide a
polynomial-time algorithm for the computation of the one-step metric distance
between states. The algorithm is based on linear programming; it improves on
the previously known exponential-time algorithm based on a reduction to the
theory of reals. We then present PSPACE algorithms for both the decision
problem and the problem of approximating the metric distance between two
states, matching the best known algorithms for Markov chains. For the
bisimulation kernel of the metric, our algorithm works in time O(n^4) for both
turn-based games and MDPs, improving on the previously best known O(n^9\cdot
log(n)) time algorithm for MDPs. For a concurrent game G, we show that
computing the exact distance between states is at least as hard as computing
the value of concurrent reachability games and the square-root-sum problem in
computational geometry. We show that checking whether the metric distance is
bounded by a rational r can be done via a reduction to the theory of real
closed fields, involving a formula with three quantifier alternations, yielding
O(|G|^O(|G|^5)) time complexity, improving the previously known reduction,
which yielded O(|G|^O(|G|^7)) time complexity. These algorithms can be iterated
to approximate the metrics using binary search.
Comment: 27 pages. Full version of the paper accepted at FSTTCS 200
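The closing remark, that a decision procedure for "is the distance bounded by a rational r" can be iterated to approximate the metric, amounts to a bisection loop. A minimal sketch, assuming only a monotone oracle `within` (our name) that answers the bounded-distance question:

```python
def approximate_distance(within, lo=0.0, hi=1.0, eps=1e-6):
    """Approximate d = inf{r : within(r)} by binary search, assuming
    within(r) is True for every r >= d and False for every r < d
    (exactly the monotonicity the bounded-distance check provides)."""
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if within(mid):
            hi = mid   # distance is at most mid: shrink from above
        else:
            lo = mid   # distance exceeds mid: shrink from below
    return hi          # within eps of the true distance
```

Each oracle call would, in the paper's setting, be one query to the theory of real closed fields; the loop adds only log(1/eps) such calls.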
Game Refinement Relations and Metrics
We consider two-player games played over finite state spaces for an infinite
number of rounds. At each state, the players simultaneously choose moves; the
moves determine a successor state. It is often advantageous for players to
choose probability distributions over moves, rather than single moves. Given a
goal, for example, reach a target state, the question of winning is thus a
probabilistic one: what is the maximal probability of winning from a given
state?
On these game structures, two fundamental notions are those of equivalences
and metrics. Given a set of winning conditions, two states are equivalent if
the players can win the same games with the same probability from both states.
Metrics provide a bound on the difference in the probabilities of winning
across states, capturing a quantitative notion of state similarity.
We introduce equivalences and metrics for two-player game structures, and we
show that they characterize the difference in probability of winning games
whose goals are expressed in the quantitative mu-calculus. The quantitative
mu-calculus can express a large set of goals, including reachability, safety,
and omega-regular properties. Thus, we claim that our relations and metrics
provide the canonical extensions to games, of the classical notion of
bisimulation for transition systems. We develop our results both for
equivalences and metrics, which generalize bisimulation, and for asymmetrical
versions, which generalize simulation.
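The opening observation, that players can gain by choosing probability distributions over moves, is visible already in a one-shot 2x2 matrix game. An illustrative sketch (the standard closed-form solution for 2x2 zero-sum games; not code from the paper):

```python
def value_2x2(m):
    """Value of a 2x2 zero-sum matrix game (row player maximizes).
    m = [[a, b], [c, d]] gives the row player's payoff for each
    pure move pair; mixed strategies are allowed."""
    (a, b), (c, d) = m
    # If maximin == minimax there is a pure saddle point.
    maximin = max(min(a, b), min(c, d))
    minimax = min(max(a, c), max(b, d))
    if maximin == minimax:
        return maximin
    # Otherwise both players must mix; standard closed form.
    return (a * d - b * c) / (a + d - b - c)
```

Matching pennies, [[1, -1], [-1, 1]], has maximin -1 but minimax 1: no pure move guarantees more than -1, yet the uniformly random mixed strategy secures the game value 0, which is why the equivalences and metrics above are defined over randomized moves.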
Bisimulations and Logical Characterizations on Continuous-time Markov Decision Processes
In this paper we study strong and weak bisimulation equivalences for
continuous-time Markov decision processes (CTMDPs) and the logical
characterizations of these relations with respect to the continuous-time
stochastic logic (CSL). For strong bisimulation, it is well known that it is
strictly finer than CSL equivalence. In this paper we propose strong and weak
bisimulations for CTMDPs and show that for a subclass of CTMDPs, strong and
weak bisimulations are both sound and complete with respect to the equivalences
induced by CSL and the sub-logic of CSL without next operator respectively. We
then consider a standard extension of CSL, and show that it and its sub-logic
without X can be fully characterized by strong and weak bisimulations
respectively over arbitrary CTMDPs.
Comment: The conference version of this paper was published at VMCAI 201
Decision algorithms for modelling, optimal control and verification of probabilistic systems
Markov Decision Processes (MDPs) constitute a mathematical framework for modelling systems featuring both probabilistic and nondeterministic behaviour. They are widely used to solve sequential decision making problems, have been applied successfully in operations research, artificial intelligence, and stochastic control theory, and have been extended conservatively to the model of probabilistic automata in the context of concurrent probabilistic systems. However, when modelling a physical system they suffer from several limitations. One of the most important is the inherent loss of precision that is introduced by measurement errors and discretization artifacts, which necessarily happen due to incomplete knowledge about the system behaviour. As a result, the true probability distribution for transitions is in most cases an uncertain value, determined by either external parameters or confidence intervals. Interval Markov decision processes (IMDPs) generalize classical MDPs by having interval-valued transition probabilities. They provide a powerful modelling tool for probabilistic systems with an additional variation or uncertainty that reflects the absence of precise knowledge concerning transition probabilities. In this dissertation, we focus on decision algorithms for modelling and performance evaluation of such probabilistic systems, leveraging techniques from mathematical optimization. From a modelling viewpoint, we address probabilistic bisimulations to reduce the size of the system models while preserving the logical properties they satisfy. We also discuss the key ingredients to construct systems by composing them out of smaller components running in parallel. Furthermore, we introduce a novel stochastic model, Uncertain weighted Markov Decision Processes (UwMDPs), so as to capture quantities like preferences or priorities in a nondeterministic scenario with uncertainties.
This model is close to the model of IMDPs but more convenient to work with in the context of bisimulation minimization. From a performance evaluation perspective, we consider the problem of multi-objective robust strategy synthesis for IMDPs, where the aim is to find a robust strategy that guarantees the satisfaction of multiple properties at the same time in face of the transition probability uncertainty. In this respect, we discuss the computational complexity of the problem and present a value iteration-based decision algorithm to approximate the Pareto set of achievable optimal points. Moreover, we consider the problem of computing maximal/minimal reward-bounded reachability probabilities on UwMDPs, for which we present an efficient algorithm running in pseudo-polynomial time. We demonstrate the practical effectiveness of our proposed approaches by applying them to a collection of real-world case studies using several prototypical tools.
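The robust analyses described here repeatedly let an adversary pick the worst distribution consistent with the transition intervals. A simplified sketch of worst-case reachability value iteration for an interval Markov chain (dropping the action nondeterminism of full IMDPs; the encoding and function names are ours):

```python
def robust_reach_values(states, goal, intervals, n_iter=200):
    """Worst-case (robust) reachability probabilities in an interval
    Markov chain: at each step the adversary picks a distribution
    within the intervals minimizing the chance of reaching `goal`.
    intervals[s] maps successor t -> (lo, hi), with sum(lo) <= 1 <= sum(hi)."""
    v = {s: 1.0 if s in goal else 0.0 for s in states}
    for _ in range(n_iter):
        nv = {}
        for s in states:
            if s in goal:
                nv[s] = 1.0
                continue
            iv = intervals[s]
            # Give every successor its lower bound, then spend the
            # remaining mass on the lowest-valued successors first:
            # this greedy order yields the adversary's optimum.
            mass = {t: lo for t, (lo, hi) in iv.items()}
            left = 1.0 - sum(mass.values())
            for t in sorted(iv, key=lambda t: v[t]):
                lo, hi = iv[t]
                add = min(hi - lo, left)
                mass[t] += add
                left -= add
            nv[s] = sum(p * v[t] for t, p in mass.items())
        v = nv
    return v
```

The greedy inner step is what makes such iterations cheap despite the uncountably many distributions in each interval set; the multi-objective synthesis in the dissertation builds considerably more machinery on top of this basic loop.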
Lumpability for Uncertain Continuous-Time Markov Chains
The assumption of perfect knowledge of rate parameters in continuous-time Markov chains (CTMCs) is undermined when confronted with reality, where they may be uncertain due to lack of information or because of measurement noise. In this paper we consider uncertain CTMCs, where rates are assumed to vary non-deterministically with time from bounded continuous intervals. This leads to a semantics which associates each state with the reachable set of its probability under all possible choices of the uncertain rates. We develop a notion of lumpability which identifies a partition of states where each block preserves the reachable set of the sum of its probabilities, essentially lifting the well-known CTMC ordinary lumpability to the uncertain setting. We proceed with this analogy with two further contributions: a logical characterization of uncertain CTMC lumping in terms of continuous stochastic logic; and a polynomial time and space algorithm for the minimization of uncertain CTMCs by partition refinement, using the CTMC lumping algorithm as an inner step. As a case study, we show that the minimizations in a substantial number of CTMC models reported in the literature are robust with respect to uncertainties around their original, fixed rate values.
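Checking that a candidate partition is an ordinary lumping, the classical notion lifted here to the uncertain setting, reduces to comparing aggregate rates into blocks. A minimal sketch for plain CTMCs with fixed rates (the uncertain version would compare interval bounds instead; the encoding is ours):

```python
def is_ordinary_lumping(partition, rate):
    """Check whether a partition of CTMC states is an ordinary lumping:
    within each block, every state must have the same aggregate rate
    into every block. rate[(s, t)] is the transition rate s -> t
    (absent pairs mean rate 0.0)."""
    for block in partition:
        sigs = set()
        for s in block:
            # Signature of s: total outgoing rate into each block.
            sig = tuple(
                sum(rate.get((s, t), 0.0) for t in other)
                for other in partition
            )
            sigs.add(sig)
        if len(sigs) > 1:
            return False  # two states in the block disagree
    return True
```

The paper's minimization algorithm runs this kind of check inside a partition-refinement loop, splitting any block whose states produce different signatures until the partition stabilizes.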
Strategies for MDP Bisimilarity Equivalence and Inequivalence
A labelled Markov decision process (MDP) is a labelled Markov chain with nondeterminism; i.e., together with a strategy a labelled MDP induces a labelled Markov chain. Motivated by applications to the verification of probabilistic noninterference in security, we study the problems of whether there exist strategies such that the labelled MDPs become bisimilarity equivalent/inequivalent. We show that the equivalence problem is decidable; in fact, it is EXPTIME-complete and becomes NP-complete if one of the MDPs is a Markov chain. Concerning the inequivalence problem, we show that (1) it is decidable in polynomial time; (2) if there are strategies for inequivalence then there are memoryless strategies for inequivalence; (3) such memoryless strategies can be computed in polynomial time.
A tutorial on interactive Markov chains
Interactive Markov chains (IMCs) constitute a powerful stochastic model that extends both continuous-time Markov chains and labelled transition systems. IMCs enable a wide range of modelling and analysis techniques and serve as a semantic model for many industrial and scientific formalisms, such as AADL, GSPNs and many more. Applications cover various engineering contexts ranging from industrial system-on-chip manufacturing to satellite designs. We present a survey of the state-of-the-art in modelling and analysis of IMCs.
We cover a set of techniques that can be utilised for compositional modelling, state space generation and reduction, and model checking. The significance of the presented material and corresponding tools is highlighted through multiple case studies.
Parameter Synthesis for Markov Models
Markov chain analysis is a key technique in reliability engineering. A
practical obstacle is that all probabilities in Markov models need to be known.
However, system quantities such as failure rates or packet loss ratios, etc.
are often not---or only partially---known. This motivates considering
parametric models with transitions labeled with functions over parameters.
Whereas traditional Markov chain analysis evaluates a reliability metric for a
single, fixed set of probabilities, analysing parametric Markov models focuses
on synthesising parameter values that establish a given reliability or
performance specification φ. Examples are: what component failure rates
ensure the probability of a system breakdown to be below 0.00000001?, or which
failure rates maximise reliability? This paper presents various analysis
algorithms for parametric Markov chains and Markov decision processes. We focus
on three problems: (a) do all parameter values within a given region satisfy
φ? (b) which regions satisfy φ and which ones do not? and (c)
an approximate version of (b) focusing on covering a large fraction of all
possible parameter values. We give a detailed account of the various
algorithms, present a software tool realising these techniques, and report on
an extensive experimental evaluation on benchmarks that span a wide range of
applications.
Comment: 38 pages
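For intuition about problem (a), sampling the region can refute "all parameter values satisfy φ" by exhibiting a violating point, though, unlike the exact methods surveyed in the paper, sampling can never prove the property. A naive illustrative sketch (function names and the example specification are ours, not the tool's):

```python
import itertools

def refute_region(spec_holds, region, samples_per_dim=10):
    """Search a grid over `region` (a list of (lo, hi) intervals, one
    per parameter) for a point violating the specification. Returns a
    counterexample point, or None if no sampled point violates it:
    refutation only, never proof."""
    axes = []
    for lo, hi in region:
        step = (hi - lo) / (samples_per_dim - 1)
        axes.append([lo + i * step for i in range(samples_per_dim)])
    for point in itertools.product(*axes):
        if not spec_holds(point):
            return point   # counterexample found
    return None            # all sampled points satisfy the spec
```

For example, with a (hypothetical) parametric chain whose breakdown probability is p*q and the specification p*q <= 0.1, the region [0.1, 0.4] x [0.1, 0.4] is refuted at the corner (0.4, 0.4), while [0.1, 0.3] x [0.1, 0.3] survives sampling; an exact method would still be needed to certify the latter region.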