Distribution-based bisimulation for labelled Markov processes
In this paper we propose a (sub)distribution-based bisimulation for labelled
Markov processes and compare it with earlier definitions of state and event
bisimulation, which both only compare states. In contrast to those state-based
bisimulations, our distribution bisimulation is weaker, but corresponds more
closely to linear properties. We construct a logic and a metric to describe our
distribution bisimulation and discuss linearity, continuity and compositional
properties. Comment: Accepted by FORMATS 201
Game Refinement Relations and Metrics
We consider two-player games played over finite state spaces for an infinite
number of rounds. At each state, the players simultaneously choose moves; the
moves determine a successor state. It is often advantageous for players to
choose probability distributions over moves, rather than single moves. Given a
goal, for example, reach a target state, the question of winning is thus a
probabilistic one: what is the maximal probability of winning from a given
state?
On these game structures, two fundamental notions are those of equivalences
and metrics. Given a set of winning conditions, two states are equivalent if
the players can win the same games with the same probability from both states.
Metrics provide a bound on the difference in the probabilities of winning
across states, capturing a quantitative notion of state similarity.
We introduce equivalences and metrics for two-player game structures, and we
show that they characterize the difference in probability of winning games
whose goals are expressed in the quantitative mu-calculus. The quantitative
mu-calculus can express a large set of goals, including reachability, safety,
and omega-regular properties. Thus, we claim that our relations and metrics
provide the canonical extensions to games, of the classical notion of
bisimulation for transition systems. We develop our results both for
equivalences and metrics, which generalize bisimulation, and for asymmetrical
versions, which generalize simulation.
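At each state of such a game structure, the simultaneous choice of (possibly randomized) moves is a one-shot zero-sum matrix game. As a minimal illustration of why mixed strategies matter, here is a sketch of computing the value of a 2x2 matrix game; the function name and the 2x2 restriction are ours, not the paper's:

```python
def matgame_value_2x2(A):
    """Value of a 2x2 zero-sum matrix game (row player maximises).
    First check for a pure saddle point; otherwise the standard
    mixed-strategy closed form applies.  This is the kind of one-shot
    game solved at each state of a concurrent game structure."""
    (a, b), (c, d) = A
    lower = max(min(a, b), min(c, d))   # maximin: what the row player can guarantee
    upper = min(max(a, c), max(b, d))   # minimax: what the column player can force
    if lower == upper:                  # pure saddle point exists
        return lower
    return (a * d - b * c) / (a - b - c + d)

print(matgame_value_2x2([[1, 0], [0, 1]]))  # matching pennies: value 0.5
```

In matching pennies neither player has a good pure move, so the value 0.5 is achievable only by randomizing, which is exactly why the abstract's games let players choose probability distributions over moves.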
Algorithms for Game Metrics
Simulation and bisimulation metrics for stochastic systems provide a
quantitative generalization of the classical simulation and bisimulation
relations. These metrics capture the similarity of states with respect to
quantitative specifications written in the quantitative {\mu}-calculus and
related probabilistic logics. We first show that the metrics provide a bound
for the difference in long-run average and discounted average behavior across
states, indicating that the metrics can be used both in system verification,
and in performance evaluation. For turn-based games and MDPs, we provide a
polynomial-time algorithm for the computation of the one-step metric distance
between states. The algorithm is based on linear programming; it improves on
the previous known exponential-time algorithm based on a reduction to the
theory of reals. We then present PSPACE algorithms for both the decision
problem and the problem of approximating the metric distance between two
states, matching the best known algorithms for Markov chains. For the
bisimulation kernel of the metric our algorithm works in time O(n^4) for both
turn-based games and MDPs; improving the previously best known O(n^9\cdot
log(n)) time algorithm for MDPs. For a concurrent game G, we show that
computing the exact distance between states is at least as hard as computing
the value of concurrent reachability games and the square-root-sum problem in
computational geometry. We show that checking whether the metric distance is
bounded by a rational r, can be done via a reduction to the theory of real
closed fields, involving a formula with three quantifier alternations, yielding
O(|G|^O(|G|^5)) time complexity, improving the previously known reduction,
which yielded O(|G|^O(|G|^7)) time complexity. These algorithms can be iterated
to approximate the metrics using binary search. Comment: 27 pages. Full version of the paper accepted at FSTTCS 200
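The one-step metric distance mentioned above is a transportation (Kantorovich) linear program between the successor distributions of two states. As a toy sketch only, assuming both distributions are uniform over equal-size supports, Birkhoff's theorem says the optimal coupling is a permutation, so the LP can be replaced by brute-force enumeration; this is not the paper's polynomial-time LP algorithm:

```python
from itertools import permutations

def kantorovich_uniform(xs, ys, cost):
    """Kantorovich (transportation) distance between two uniform
    distributions with equal-size supports xs and ys, under a ground
    metric given as a nested dict `cost`.  For uniform marginals the
    optimal transport plan is a permutation matrix (Birkhoff), so we
    enumerate permutations; real implementations solve the LP instead."""
    assert len(xs) == len(ys)
    n = len(xs)
    return min(
        sum(cost[x][y] for x, y in zip(xs, perm)) / n
        for perm in permutations(ys)
    )

# Toy ground metric on two states {0, 1}: the discrete metric.
cost = {0: {0: 0.0, 1: 1.0}, 1: {0: 1.0, 1: 0.0}}

print(kantorovich_uniform([0, 1], [1, 0], cost))  # perfect matching: 0.0
print(kantorovich_uniform([0, 0], [0, 1], cost))  # half the mass must move: 0.5
```

Iterating this lifted distance (plugging the current metric back in as the ground cost) is the fixed-point computation that the paper's algorithms carry out efficiently.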
Scalable methods for computing state similarity in deterministic Markov Decision Processes
We present new algorithms for computing and approximating bisimulation
metrics in Markov Decision Processes (MDPs). Bisimulation metrics are an
elegant formalism that captures behavioral equivalence between states and
provides strong theoretical guarantees on differences in optimal behaviour.
Unfortunately, their computation is expensive and requires a tabular
representation of the states, which has thus far rendered them impractical for
large problems. In this paper we present a new version of the metric that is
tied to a behavior policy in an MDP, along with an analysis of its theoretical
properties. We then present two new algorithms for approximating bisimulation
metrics in large, deterministic MDPs. The first does so via sampling and is
guaranteed to converge to the true metric. The second is a differentiable loss
which allows us to learn an approximation even for continuous state MDPs, which
prior to this work had not been possible. Comment: To appear in Proceedings of the Thirty-Fourth AAAI Conference on
Artificial Intelligence (AAAI-20
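In the deterministic case, the bisimulation-metric recursion simplifies considerably, since each action leads to a single successor and no transportation problem is needed. The following tabular fixed-point iteration is a sketch of one common formulation of that deterministic special case (all names are ours); the paper's contribution is approximating this recursion by sampling and by a differentiable loss rather than computing the table directly:

```python
def bisim_metric_deterministic(states, actions, R, nxt, gamma=0.9, iters=200):
    """Fixed-point iteration for a bisimulation-style metric on a
    deterministic MDP:
        d(s, t) = max_a ( |R(s,a) - R(t,a)| + gamma * d(nxt(s,a), nxt(t,a)) )
    R[s][a] is the reward, nxt[s][a] the unique successor state."""
    d = {(s, t): 0.0 for s in states for t in states}
    for _ in range(iters):
        d = {
            (s, t): max(
                abs(R[s][a] - R[t][a]) + gamma * d[(nxt[s][a], nxt[t][a])]
                for a in actions
            )
            for s in states for t in states
        }
    return d

# Two self-looping states whose rewards differ by 1 at every step:
# the metric converges to 1 / (1 - gamma) = 10.
R = {"s": {"a": 1.0}, "t": {"a": 0.0}}
nxt = {"s": {"a": "s"}, "t": {"a": "t"}}
d = bisim_metric_deterministic(["s", "t"], ["a"], R, nxt)
print(d[("s", "t")])
```

The tabular loop makes the scalability problem the abstract refers to concrete: it touches every pair of states on every sweep, which is exactly what the sampling and learned approximations avoid.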
Computing Distances between Probabilistic Automata
We present relaxed notions of simulation and bisimulation on Probabilistic
Automata (PA), that allow some error epsilon. When epsilon is zero we retrieve
the usual notions of bisimulation and simulation on PAs. We give logical
characterisations of these notions by choosing suitable logics which differ
from the elementary ones, L with negation and L without negation, by the modal
operator. Using flow networks, we show how to compute the relations in PTIME.
This allows the definition of an efficiently computable non-discounted distance
between the states of a PA. A natural modification of this distance is
introduced, to obtain a discounted distance, which weakens the influence of
long term transitions. We compare our notions of distance to others previously
defined and illustrate our approach on various examples. We also show that our
distance is non-expansive with respect to process algebra operators. Although L
without negation is a suitable logic to characterise epsilon-(bi)simulation on
deterministic PAs, it is not for general PAs; interestingly, we prove that it
does characterise weaker notions, called a priori epsilon-(bi)simulation, which
we prove to be NP-hard to decide. Comment: In Proceedings QAPL 2011, arXiv:1107.074
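The flow-network computation mentioned in the abstract checks whether one distribution can be matched to another through a given relation. In the special case where the relation is an equivalence, presented as a partition of the states, the feasibility check collapses to comparing probability mass per class; the sketch below illustrates that simplified case only (names ours), not the paper's general flow construction:

```python
def matches_upto(p, q, partition, eps=0.0):
    """Can distribution p be matched to q up to error eps, given an
    equivalence relation presented as a partition of the states?
    For an equivalence, the flow check reduces to: the probability mass
    that must cross class boundaries, sum_C max(0, p(C) - q(C)),
    is at most eps.  Distributions are dicts state -> probability."""
    moved = sum(
        max(0.0, sum(p.get(s, 0.0) for s in C) - sum(q.get(s, 0.0) for s in C))
        for C in partition
    )
    return moved <= eps + 1e-12  # small slack for floating-point noise

partition = [{"a", "b"}, {"c"}]          # a and b are equivalent
p = {"a": 0.5, "c": 0.5}
q = {"b": 0.5, "c": 0.5}
print(matches_upto(p, q, partition))      # same mass per class: True
print(matches_upto({"a": 0.6, "c": 0.4}, q, partition))  # 0.1 mass stranded: False
```

Allowing `eps > 0` is what turns the exact lifting into the relaxed epsilon-(bi)simulation of the abstract: the second pair above matches once `eps >= 0.1`.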
Process algebra for performance evaluation
This paper surveys the theoretical developments in the field of stochastic process algebras: process algebras where action occurrences may be subject to a delay that is determined by a random variable. A huge class of resource-sharing systems – such as large-scale computers, client–server architectures, and networks – can be accurately described using such stochastic specification formalisms. The main emphasis of this paper is the treatment of operational semantics, notions of equivalence, and (sound and complete) axiomatisations of these equivalences for different types of Markovian process algebras, where delays are governed by exponential distributions. Starting from a simple actionless algebra for describing time-homogeneous continuous-time Markov chains, we consider the integration of actions and random delays both as a single entity (as in the well-known Markovian process algebras TIPP, PEPA and EMPA) and as separate entities (as in the timed process algebras timed CSP and TCCS). In total we consider four related calculi and investigate their relationship to existing Markovian process algebras. We also briefly indicate how one can profit from the separation of time and actions when incorporating more general, non-Markovian distributions.
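The Markovian semantics the survey builds on rests on the "race" between parallel exponential delays: the minimum of two independent exponentials with rates la and mu is again exponential with rate la + mu, and the first delay wins with probability la / (la + mu). A small seeded Monte-Carlo check of those two facts (the function and its parameters are our illustration, not from the survey):

```python
import random

def race(rate_a, rate_b, n=100_000, seed=0):
    """Simulate n races between two independent exponential delays.
    Returns (fraction of races won by the first delay, mean winning time).
    Theory: min(Exp(la), Exp(mu)) ~ Exp(la + mu), and the first delay
    wins with probability la / (la + mu)."""
    rng = random.Random(seed)
    wins_a, total_min = 0, 0.0
    for _ in range(n):
        ta = rng.expovariate(rate_a)
        tb = rng.expovariate(rate_b)
        wins_a += ta < tb
        total_min += min(ta, tb)
    return wins_a / n, total_min / n

frac, mean_min = race(2.0, 1.0)
# Expect frac close to 2/3 and mean_min close to 1/3 (rates sum to 3).
```

This memoryless race is what makes the combined-entity calculi (TIPP, PEPA, EMPA) tractable: the sojourn time in each state of the underlying continuous-time Markov chain is again exponential.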