
    Computing Probabilistic Bisimilarity Distances via Policy Iteration

    A transformation mapping a labelled Markov chain to a simple stochastic game is presented. In the resulting simple stochastic game, each vertex corresponds to a pair of states of the labelled Markov chain. The value of a vertex of the simple stochastic game is shown to be equal to the probabilistic bisimilarity distance, a notion due to Desharnais, Gupta, Jagadeesan and Panangaden, of the corresponding pair of states of the labelled Markov chain. Bacci, Bacci, Larsen and Mardare introduced an algorithm to compute the probabilistic bisimilarity distances for a labelled Markov chain. A modification of a basic version of their algorithm for a labelled Markov chain is shown to be the policy iteration algorithm applied to the corresponding simple stochastic game. Furthermore, it is shown that this algorithm takes exponential time in the worst case.
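
    For orientation, the probabilistic bisimilarity distances discussed here admit a standard fixed-point characterisation, which also suggests why couplings play the role of policies in the game view above. Writing $\ell$ for the labelling function and $\tau(s)$ for the transition distribution of state $s$ (notation chosen here purely for illustration), the distances form the least fixed point of the operator

        \[
        \Delta(d)(s,t) =
        \begin{cases}
        1 & \text{if } \ell(s) \neq \ell(t),\\
        \min\limits_{\omega \in \Omega(\tau(s),\tau(t))} \sum\limits_{u,v} d(u,v)\,\omega(u,v) & \text{otherwise,}
        \end{cases}
        \]

    where $\Omega(\mu,\nu)$ denotes the set of couplings of $\mu$ and $\nu$. Roughly, a policy of the resulting game fixes one coupling per pair of states with matching labels; policy iteration alternates between evaluating the linear system induced by the current couplings and switching to better ones.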

    On-the-fly computation of bisimilarity distances

    We propose a distance between continuous-time Markov chains (CTMCs) and study the problem of computing it by comparing three different algorithmic methodologies: iterative, linear program, and on-the-fly. In a work presented at FoSSaCS’12, Chen et al. characterized the bisimilarity distance of Desharnais et al. between discrete-time Markov chains as an optimal solution of a linear program that can be solved by using the ellipsoid method. Inspired by their result, we propose a novel linear program characterization to compute the distance in the continuous-time setting. Unlike previous proposals, ours has a number of constraints that is bounded by a polynomial in the size of the CTMC. This, in particular, proves that the distance we propose can be computed in polynomial time. Despite its theoretical importance, the proposed linear program characterization turns out to be inefficient in practice. Nevertheless, driven by the encouraging results of our previous work presented at TACAS’13, we propose an efficient on-the-fly algorithm, which, unlike the other mentioned solutions, computes the distances between two given states avoiding an exhaustive exploration of the state space. This technique works by successively refining over-approximations of the target distances using a greedy strategy, which ensures that the state space is further explored only when the current approximations are improved. Tests performed on a large set of (pseudo)randomly generated CTMCs show that our algorithm improves, on average, the efficiency of the corresponding iterative and linear program methods by orders of magnitude.
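
    As a concrete reference point for the iterative methodology compared above, the sketch below approximates the analogous pseudometric on a finite discrete-time labelled Markov chain by repeatedly applying the Kantorovich (optimal-transport) operator, solving each transport problem as a small linear program. It is a minimal illustration under our own choices of names and tooling (kantorovich, iterate_distances, scipy.optimize.linprog), not the CTMC algorithms of the paper.

        import numpy as np
        from scipy.optimize import linprog

        def kantorovich(mu, nu, cost):
            # Optimal-transport (Kantorovich) distance between two discrete
            # distributions mu and nu over the same state space, with ground
            # costs given by the matrix cost, solved as a transportation LP.
            n = len(mu)
            c = np.asarray(cost, dtype=float).reshape(n * n)
            A_eq, b_eq = [], []
            for u in range(n):  # row marginals: sum_v w[u, v] = mu[u]
                row = np.zeros((n, n))
                row[u, :] = 1.0
                A_eq.append(row.ravel())
                b_eq.append(mu[u])
            for v in range(n):  # column marginals: sum_u w[u, v] = nu[v]
                col = np.zeros((n, n))
                col[:, v] = 1.0
                A_eq.append(col.ravel())
                b_eq.append(nu[v])
            res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
            return res.fun

        def iterate_distances(labels, trans, steps=50):
            # Approximates the bisimilarity pseudometric of a finite labelled
            # Markov chain from below by repeatedly applying the Kantorovich
            # operator. labels: list of state labels; trans: row-stochastic matrix.
            n = len(labels)
            d = np.array([[0.0 if labels[s] == labels[t] else 1.0
                           for t in range(n)] for s in range(n)])
            for _ in range(steps):
                new = d.copy()
                for s in range(n):
                    for t in range(s + 1, n):
                        if labels[s] == labels[t]:
                            new[s, t] = new[t, s] = kantorovich(trans[s], trans[t], d)
                d = new
            return d

    Iterating from below in this way converges to the distances only in the limit and touches every pair of states; the on-the-fly algorithm described above instead refines over-approximations and explores only the pairs relevant to the query.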

    Distribution-based bisimulation for labelled Markov processes

    In this paper we propose a (sub)distribution-based bisimulation for labelled Markov processes and compare it with earlier definitions of state and event bisimulation, which both only compare states. In contrast to those state-based bisimulations, our distribution bisimulation is weaker, but corresponds more closely to linear properties. We construct a logic and a metric to describe our distribution bisimulation and discuss linearity, continuity and compositional properties.

    Coalgebraic Behavioral Metrics

    We study different behavioral metrics, such as those arising from both branching and linear-time semantics, in a coalgebraic setting. Given a coalgebra $\alpha\colon X \to HX$ for a functor $H \colon \mathrm{Set}\to \mathrm{Set}$, we define a framework for deriving pseudometrics on $X$ which measure the behavioral distance of states. A crucial step is the lifting of the functor $H$ on $\mathrm{Set}$ to a functor $\overline{H}$ on the category $\mathrm{PMet}$ of pseudometric spaces. We present two different approaches which can be viewed as generalizations of the Kantorovich and Wasserstein pseudometrics for probability measures. We show that the pseudometrics provided by the two approaches coincide on several natural examples, but in general they differ. If $H$ has a final coalgebra, every lifting $\overline{H}$ yields in a canonical way a behavioral distance which is usually branching-time, i.e., it generalizes bisimilarity. In order to model linear-time metrics (generalizing trace equivalences), we show sufficient conditions for lifting distributive laws and monads. These results enable us to employ the generalized powerset construction.
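
    For readers less familiar with the two classical pseudometrics being generalised here, they take the following form for finitely supported probability distributions $\mu,\nu$ on a pseudometric space $(X,d)$ with distances bounded by 1 (a standard formulation, recalled only for orientation):

        \[
        K(d)(\mu,\nu) \;=\; \sup\Big\{\, \Big|\sum_{x} f(x)\,\mu(x) - \sum_{x} f(x)\,\nu(x)\Big| \;:\; f\colon (X,d)\to([0,1],d_e)\ \text{nonexpansive} \,\Big\},
        \]
        \[
        W(d)(\mu,\nu) \;=\; \inf\Big\{\, \sum_{x,y} d(x,y)\,\omega(x,y) \;:\; \omega\ \text{a coupling of}\ \mu\ \text{and}\ \nu \,\Big\}.
        \]

    By the Kantorovich–Rubinstein duality these two quantities coincide for the probability distribution functor; the point of the abstract is that the corresponding liftings to $\mathrm{PMet}$ still coincide on several natural examples but differ in general.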

    Generalized labelled Markov processes, coalgebraically

    Coalgebras of measurable spaces are of interest in probability theory as a formalization of Labelled Markov Processes (LMPs). We discuss some general facts related to the notions of bisimulation and cocongruence on these systems, providing a faithful characterization of bisimulation on LMPs on generic measurable spaces. This has been used to prove that bisimilarity on single LMPs is an equivalence, without assuming the state space to be analytic. As the second main contribution, we introduce the first specification rule format to define well-behaved composition operators for LMPs. This allows one to define process description languages on LMPs which are always guaranteed to have a fully-abstract semantics.

    Computing Probabilistic Bisimilarity Distances for Probabilistic Automata

    The probabilistic bisimilarity distance of Deng et al. has been proposed as a robust quantitative generalization of Segala and Lynch's probabilistic bisimilarity for probabilistic automata. In this paper, we present a characterization of the bisimilarity distance as the solution of a simple stochastic game. The characterization gives us an algorithm to compute the distances by applying Condon's simple policy iteration on these games. The correctness of Condon's approach, however, relies on the assumption that the games are stopping. Our games may be non-stopping in general, yet we are able to prove termination for this extended class of games. Other algorithms have already been proposed in the literature to compute these distances, with complexity in $\textbf{UP} \cap \textbf{coUP}$ and $\textbf{PPAD}$. Despite their theoretical relevance, these algorithms are inefficient in practice. To the best of our knowledge, our algorithm is the first practical solution. The characterization of the probabilistic bisimilarity distance mentioned above crucially uses a dual presentation of the Hausdorff distance due to Mémoli. As an additional contribution, in this paper we show that Mémoli's result can also be used to prove that the bisimilarity distance bounds the difference in the maximal (or minimal) probability of two states satisfying arbitrary $\omega$-regular properties, expressed, e.g., as LTL formulas.
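
    The dual presentation of the Hausdorff distance referred to above can be stated, for finite nonempty sets $A,B$ equipped with a ground distance $d$, as the equality of the classical two-sided formula with an optimisation over correspondences, i.e. relations $R\subseteq A\times B$ whose projections cover all of $A$ and all of $B$ (a standard statement, recalled here only for context):

        \[
        H(d)(A,B) \;=\; \max\Big(\max_{a\in A}\min_{b\in B} d(a,b),\; \max_{b\in B}\min_{a\in A} d(a,b)\Big) \;=\; \min_{R}\ \max_{(a,b)\in R} d(a,b).
        \]

    In the probabilistic-automata setting, $A$ and $B$ are, roughly, the sets of successor distributions of the two states being compared, with the Kantorovich lifting of the current distance estimate as the ground distance $d$.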

    Algorithms to Compute Probabilistic Bisimilarity Distances for Labelled Markov Chains

    In the late nineties, Desharnais, Gupta, Jagadeesan and Panangaden presented probabilistic bisimilarity distances on the states of a labelled Markov chain. This provided a quantitative generalisation of probabilistic bisimilarity introduced by Larsen and Skou a decade earlier. In the last decade, several algorithms to approximate and compute these probabilistic bisimilarity distances have been put forward. In this paper, we correct, improve and generalise some of these algorithms. Furthermore, we compare their performance experimentally.

    Asymmetric distances for approximate differential privacy

    Differential privacy is a widely studied notion of privacy for various models of computation, based on measuring differences between probability distributions. We consider (epsilon,delta)-differential privacy in the setting of labelled Markov chains. For a given epsilon, the parameter delta can be captured by a variant of the total variation distance, which we call lv_{alpha} (where alpha = e^{epsilon}). First we study lv_{alpha} directly, showing that it cannot be computed exactly. However, the associated approximation problem turns out to be in PSPACE and #P-hard. Next we introduce a new bisimilarity distance for bounding lv_{alpha} from above, which provides a tighter bound than previously known distances while remaining computable with the same complexity (polynomial time with an NP oracle). We also propose an alternative bound that can be computed in polynomial time. Finally, we illustrate the distances on case studies.
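
    For context, (epsilon,delta)-differential privacy requires, for every measurable set $E$ of outputs and every pair of adjacent inputs $x$, $x'$, that

        \[
        \Pr[M(x)\in E] \;\le\; e^{\varepsilon}\,\Pr[M(x')\in E] \;+\; \delta,
        \]

    so for a fixed epsilon the smallest admissible delta is the asymmetric, alpha-skewed variant of the total variation distance

        \[
        \sup_{E}\ \big(\Pr[M(x)\in E] \;-\; \alpha\,\Pr[M(x')\in E]\big), \qquad \alpha = e^{\varepsilon}.
        \]

    On our reading, lv_{alpha} is a quantity of this shape, instantiated on the trace distributions of two states of the labelled Markov chain; the precise definition is given in the paper.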

    Deciding Probabilistic Bisimilarity Distance One for Probabilistic Automata

    Probabilistic bisimilarity, due to Segala and Lynch, is an equivalence relation that captures which states of a probabilistic automaton behave exactly the same. Deng, Chothia, Palamidessi and Pang proposed a robust quantitative generalization of probabilistic bisimilarity. Their probabilistic bisimilarity distances of states of a probabilistic automaton capture the similarity of their behaviour. The smaller the distance, the more alike the states behave. In particular, states are probabilistic bisimilar if and only if their distance is zero. Although the complexity of computing probabilistic bisimilarity distances for probabilistic automata has already been studied and shown to be in NP ∩ coNP and PPAD, we are not aware of any practical algorithm to compute those distances. In this paper we provide several key results towards algorithms to compute probabilistic bisimilarity distances for probabilistic automata. In particular, we present a polynomial time algorithm that decides distance one. Furthermore, we give an alternative characterization of the probabilistic bisimilarity distances as a basis for a policy iteration algorithm.