
    Computing Probabilistic Bisimilarity Distances

    Behavioural equivalences like probabilistic bisimilarity rely on the transition probabilities and, as a result, are sensitive to minuscule changes of those probabilities. Such behavioural equivalences are not robust, as first observed by Giacalone, Jou and Smolka. Probabilistic bisimilarity distances, a robust quantitative generalization of probabilistic bisimilarity, capture the similarity of the behaviour of states of a probabilistic model. The smaller the distance, the more alike the states behave. In particular, states are probabilistically bisimilar if and only if the distance between them is zero. In this dissertation, we focus on algorithms to compute probabilistic bisimilarity distances for two probabilistic models: labelled Markov chains and probabilistic automata. In the late nineties, Desharnais, Gupta, Jagadeesan and Panangaden defined probabilistic bisimilarity distances on the states of a labelled Markov chain. This provided a quantitative generalization of probabilistic bisimilarity, which was introduced by Larsen and Skou a decade earlier. Several algorithms to approximate and compute these probabilistic bisimilarity distances have been put forward. In this dissertation, we correct and generalize some of these policy iteration algorithms. Moreover, we develop several new algorithms which have better performance in practice and can handle much larger systems. Similarly, Deng, Chothia, Palamidessi and Pang presented probabilistic bisimilarity distances on the states of a probabilistic automaton. This provided a robust quantitative generalization of the probabilistic bisimilarity introduced by Segala and Lynch. Although the complexity of computing probabilistic bisimilarity distances for probabilistic automata has already been studied and shown to be in NP ∩ coNP and in PPAD, we are not aware of any practical algorithms to compute those distances. In this dissertation, we provide several key results that may prove to be useful for the development of algorithms to compute probabilistic bisimilarity distances for probabilistic automata. In particular, we present a polynomial time algorithm that decides distance one. Furthermore, we give an alternative characterization of the probabilistic bisimilarity distances as a basis for a policy iteration algorithm.
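    As a purely illustrative aside (not one of the dissertation's algorithms), the Python sketch below computes the distances on a tiny, made-up labelled Markov chain by iterating the standard fixed-point characterization: two states are at distance 1 if their labels differ, and otherwise at the Kantorovich (optimal-transport) distance between their successor distributions, solved here as a small linear program with scipy. The chain, the labels and all names are hypothetical.

        # Sketch: probabilistic bisimilarity distances on a labelled Markov chain,
        # computed by naive fixed-point iteration of the Kantorovich-based operator.
        # Requires numpy and scipy; the tiny chain below is a made-up example.
        import numpy as np
        from scipy.optimize import linprog

        def kantorovich(mu, nu, d):
            """Optimal-transport distance between distributions mu and nu over
            n states, with ground distance matrix d (n x n)."""
            n = len(mu)
            c = d.reshape(-1)                                   # cost of coupling omega[u, v]
            A_eq = np.vstack([np.kron(np.eye(n), np.ones(n)),   # row marginals equal mu
                              np.kron(np.ones(n), np.eye(n))])  # column marginals equal nu
            b_eq = np.concatenate([mu, nu])
            return linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs").fun

        def bisim_distances(P, labels, tol=1e-9, max_iter=1000):
            """Iterate towards the least fixed point of: d(s, t) = 1 if the labels of
            s and t differ, and otherwise the Kantorovich distance, with respect to d,
            between the successor distributions of s and t."""
            n = P.shape[0]
            d = np.zeros((n, n))
            for _ in range(max_iter):
                new = np.zeros((n, n))
                for s in range(n):
                    for t in range(s + 1, n):
                        new[s, t] = new[t, s] = (1.0 if labels[s] != labels[t]
                                                 else kantorovich(P[s], P[t], d))
                if np.max(np.abs(new - d)) < tol:
                    return new
                d = new
            return d

        # Hypothetical 3-state chain: state 2 is absorbing with a distinct label;
        # states 0 and 1 behave similarly but not identically, so their distance
        # lies strictly between 0 and 1 (the iteration converges to 0.2).
        P = np.array([[0.0, 0.6, 0.4],
                      [0.5, 0.0, 0.5],
                      [0.0, 0.0, 1.0]])
        labels = ["a", "a", "b"]
        print(np.round(bisim_distances(P, labels), 4))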

    Computing Probabilistic Bisimilarity Distances for Probabilistic Automata

    The probabilistic bisimilarity distance of Deng et al. has been proposed as a robust quantitative generalization of Segala and Lynch's probabilistic bisimilarity for probabilistic automata. In this paper, we present a characterization of the bisimilarity distance as the solution of a simple stochastic game. The characterization gives us an algorithm to compute the distances by applying Condon's simple policy iteration on these games. The correctness of Condon's approach, however, relies on the assumption that the games are stopping. Our games may be non-stopping in general, yet we are able to prove termination for this extended class of games. Other algorithms have already been proposed in the literature to compute these distances, with complexity in UP ∩ coUP and PPAD. Despite their theoretical relevance, these algorithms are inefficient in practice. To the best of our knowledge, our algorithm is the first practical solution. The characterization of the probabilistic bisimilarity distance mentioned above crucially uses a dual presentation of the Hausdorff distance due to Mémoli. As an additional contribution, in this paper we show that Mémoli's result can also be used to prove that the bisimilarity distance bounds the difference in the maximal (or minimal) probability of two states satisfying arbitrary ω-regular properties, expressed, e.g., as LTL formulas.
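    For readers unfamiliar with the target model, the Python sketch below approximates the value of a small, made-up, stopping simple stochastic game by naive value iteration over its max, min, average and sink vertices. It only illustrates the kind of game the distances are reduced to; the paper's actual construction of such games from a probabilistic automaton, and the simple policy iteration run on them, are not reproduced here.

        # Sketch: value iteration on a simple stochastic game (SSG).
        # Vertex kinds: "max", "min", "avg" (uniform over its successors) and
        # "sink" (fixed payoff 0 or 1).  The game below is a hypothetical,
        # stopping example; it is not derived from a probabilistic automaton.

        def ssg_values(vertices, tol=1e-9, max_iter=100000):
            """vertices: dict name -> (kind, data); data is a successor list for
            max/min/avg vertices and a payoff for sinks.  Returns approximate values."""
            val = {v: 0.0 for v in vertices}
            for _ in range(max_iter):
                new = {}
                for v, (kind, data) in vertices.items():
                    if kind == "sink":
                        new[v] = float(data)
                    elif kind == "max":
                        new[v] = max(val[w] for w in data)
                    elif kind == "min":
                        new[v] = min(val[w] for w in data)
                    else:  # "avg": move to a uniformly chosen successor
                        new[v] = sum(val[w] for w in data) / len(data)
                if max(abs(new[v] - val[v]) for v in vertices) < tol:
                    return new
                val = new
            return val

        game = {
            "one":  ("sink", 1),
            "zero": ("sink", 0),
            "coin": ("avg",  ["one", "zero"]),   # fair coin between the two sinks
            "m":    ("max",  ["coin", "b"]),     # the max player picks the better option
            "b":    ("min",  ["coin", "zero"]),  # the min player picks the worse one
        }
        print({v: round(x, 4) for v, x in ssg_values(game).items()})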

    Game Characterization of Probabilistic Bisimilarity, and Applications to Pushdown Automata

    We study the bisimilarity problem for probabilistic pushdown automata (pPDA) and subclasses thereof. Our definition of pPDA allows both probabilistic and non-deterministic branching, generalising the classical notion of pushdown automata (without epsilon-transitions). We first show a general characterization of probabilistic bisimilarity in terms of two-player games, which naturally reduces checking bisimilarity of probabilistic labelled transition systems to checking bisimilarity of standard (non-deterministic) labelled transition systems. This reduction can be easily implemented in the framework of pPDA, allowing us to use known results for standard (non-probabilistic) PDA and their subclasses. A direct use of the reduction incurs an exponential increase of complexity, which does not matter in deriving decidability of bisimilarity for pPDA due to the non-elementary complexity of the problem. In the cases of probabilistic one-counter automata (pOCA), of probabilistic visibly pushdown automata (pvPDA), and of probabilistic basic process algebras (i.e., single-state pPDA) we show that an implicit use of the reduction can avoid the complexity increase; we thus get PSPACE, EXPTIME, and 2-EXPTIME upper bounds, respectively, like for the respective non-probabilistic versions. The bisimilarity problems for OCA and vPDA are known to have matching lower bounds (thus being PSPACE-complete and EXPTIME-complete, respectively); we show that these lower bounds also hold for fully probabilistic versions that do not use non-determinism.
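    As a finite-state point of reference for the equivalence being decided (and not the paper's game characterization or its pushdown constructions), the Python sketch below computes strong probabilistic bisimilarity on a small, made-up probabilistic labelled transition system with both probabilistic and non-deterministic branching, using naive signature-based partition refinement.

        # Sketch: strong probabilistic bisimilarity on a finite probabilistic labelled
        # transition system, decided by naive signature-based partition refinement.
        # Non-determinism is the choice among the distributions attached to a state
        # and an action.  The system at the bottom is a made-up example.
        from fractions import Fraction as F

        def lift(dist, block_of):
            """Push a distribution over states down to a distribution over blocks."""
            out = {}
            for s, p in dist.items():
                out[block_of[s]] = out.get(block_of[s], F(0)) + p
            return frozenset(out.items())

        def prob_bisimilarity(states, trans):
            """trans: dict (state, action) -> list of distributions (dict state -> prob).
            Returns the coarsest strong probabilistic bisimulation as a partition."""
            block_of = {s: 0 for s in states}               # start from a single block
            while True:
                groups = {}
                for s in states:
                    sig = frozenset((a, lift(d, block_of))
                                    for (t, a), dists in trans.items() if t == s
                                    for d in dists)
                    groups.setdefault(sig, []).append(s)
                new = {s: i for i, g in enumerate(groups.values()) for s in g}
                if len(groups) == len(set(block_of.values())):
                    return list(groups.values())
                block_of = new

        # Hypothetical system: u and v both do an 'a' to a fair coin over the two
        # deadlocked states w and x, so u and v are probabilistically bisimilar.
        states = ["u", "v", "w", "x"]
        trans = {
            ("u", "a"): [{"w": F(1, 2), "x": F(1, 2)}],
            ("v", "a"): [{"x": F(1, 2), "w": F(1, 2)}],
        }
        print([sorted(g) for g in prob_bisimilarity(states, trans)])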

    Modal logics are coalgebraic

    Applications of modal logics are abundant in computer science, and a large number of structurally different modal logics have been successfully employed in a diverse spectrum of application contexts. Coalgebraic semantics, on the other hand, provides a uniform and encompassing view on the large variety of specific logics used in particular domains. The coalgebraic approach is generic and compositional: tools and techniques simultaneously apply to a large class of application areas and can moreover be combined in a modular way. In particular, this facilitates a pick-and-choose approach to domain specific formalisms, applicable across the entire scope of application areas, leading to generic software tools that are easier to design, to implement, and to maintain. This paper substantiates the authors' firm belief that the systematic exploitation of the coalgebraic nature of modal logic will not only have an impact on the field of modal logic itself but also lead to significant progress in a number of areas within computer science, such as knowledge representation and concurrency/mobility.

    Efficient and Modular Coalgebraic Partition Refinement

    We present a generic partition refinement algorithm that quotients coalgebraic systems by behavioural equivalence, an important task in system analysis and verification. Coalgebraic generality allows us to cover not only classical relational systems but also, e.g., various forms of weighted systems, and furthermore to flexibly combine existing system types. Under assumptions on the type functor that allow representing its finite coalgebras in terms of nodes and edges, our algorithm runs in time O(m · log n), where n and m are the numbers of nodes and edges, respectively. The generic complexity result and the possibility of combining system types yield a toolbox for efficient partition refinement algorithms. Instances of our generic algorithm match the run-time of the best known algorithms for unlabelled transition systems, Markov chains, deterministic automata (with fixed alphabets), Segala systems, and for color refinement.
    Comment: Extended journal version of the conference paper arXiv:1705.08362. Besides a reorganization of the material, the introductory Section 3 is entirely new and the other new Section 7 contains new mathematical results.
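    As a toy illustration of the generic flavour only (the paper's algorithm and its O(m · log n) bookkeeping are not reproduced), the Python sketch below refines a partition with respect to a pluggable one-step behaviour signature and instantiates it for an unlabelled transition system and for a labelled Markov chain; the example systems and all names are made up.

        # Sketch: naive partition refinement that is generic in the one-step
        # behaviour of a state.  The signature function plays the role of the type
        # functor: swapping it switches the system type.  This is a quadratic toy,
        # not the O(m * log n) algorithm of the paper; the systems are made up.

        def refine(states, signature):
            """signature(state, block_of) -> hashable one-step behaviour.
            Returns the coarsest partition that is stable under the signature."""
            block_of = {s: 0 for s in states}
            while True:
                groups = {}
                for s in states:
                    groups.setdefault(signature(s, block_of), []).append(s)
                new = {s: i for i, g in enumerate(groups.values()) for s in g}
                if len(groups) == len(set(block_of.values())):
                    return list(groups.values())
                block_of = new

        # Instance 1: unlabelled transition system -- the behaviour of a state is
        # the set of blocks it can reach in one step (a powerset-style signature).
        succ = {"p": {"q", "r"}, "q": {"r"}, "r": {"q"}, "d": set()}
        ts_sig = lambda s, b: frozenset(b[t] for t in succ[s])

        # Instance 2: labelled Markov chain -- the behaviour of a state is its label
        # together with the probability mass it gives to each block
        # (a distribution-style signature).
        P = {"x": {"x": 0.25, "y": 0.25, "z": 0.5},
             "y": {"y": 0.5, "z": 0.5},
             "z": {"z": 1.0}}
        label = {"x": "a", "y": "a", "z": "b"}
        def mc_sig(s, b):
            w = {}
            for t, p in P[s].items():
                w[b[t]] = w.get(b[t], 0.0) + p
            return (label[s], frozenset(w.items()))

        print(refine(list(succ), ts_sig))   # p, q, r are bisimilar; d is deadlocked
        print(refine(list(P), mc_sig))      # x and y are lumped, z is kept apart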