    Approximating Labelled Markov Processes Again!

    Distribution-based bisimulation for labelled Markov processes

    In this paper we propose a (sub)distribution-based bisimulation for labelled Markov processes and compare it with the earlier definitions of state and event bisimulation, both of which compare individual states only. In contrast to those state-based bisimulations, our distribution bisimulation is weaker, but it corresponds more closely to linear properties. We construct a logic and a metric to characterise our distribution bisimulation and discuss linearity, continuity and compositionality properties.
    Comment: Accepted by FORMATS 201
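
    As a rough, self-contained illustration of the linear flavour of such a bisimulation (our own toy construction, not code from the paper), the sketch below checks a necessary condition on a small finite labelled Markov process: subdistributions can only be distribution bisimilar if they assign the same total mass to every finite action word.

```python
import itertools
import numpy as np

# Hypothetical 4-state labelled Markov process: T[a] is the sub-stochastic
# transition matrix for action a (rows may sum to less than 1, modelling
# refusal); mu and nu are the two subdistributions being compared.
T = {
    "a": np.array([[0.0, 0.5, 0.5, 0.0],
                   [0.0, 0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0]]),
    "b": np.array([[0.0, 0.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0, 0.0]]),
}
mu = np.array([1.0, 0.0, 0.0, 0.0])  # point mass on state 0
nu = np.array([0.0, 0.0, 0.0, 1.0])  # point mass on state 3

def mass_after(dist, word):
    """Total probability mass surviving the given action word."""
    for a in word:
        dist = dist @ T[a]
    return dist.sum()

def linearly_indistinguishable(mu, nu, depth=5, tol=1e-9):
    """Necessary condition for distribution bisimilarity in this toy
    setting: equal mass on every action word up to the given depth."""
    for k in range(depth + 1):
        for word in itertools.product(T, repeat=k):
            if abs(mass_after(mu, word) - mass_after(nu, word)) > tol:
                return False
    return True

print(linearly_indistinguishable(mu, nu))  # True for this example
```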

    Quantitative Approximation of the Probability Distribution of a Markov Process by Formal Abstractions

    The goal of this work is to formally abstract a Markov process evolving in discrete time over a general state space as a finite-state Markov chain, with the objective of precisely approximating its state probability distribution in time, which then admits an approximate but faster computation via the Markov chain. The approach is based on formal abstractions: it employs an arbitrary finite partition of the state space of the Markov process and computes average transition probabilities between partition sets. The abstraction technique is formal in that it comes with guarantees on the introduced approximation error that depend on the diameters of the partition sets; as such, the error can be tuned at will. Further, for Markov processes with unbounded state spaces, a procedure is provided for precisely truncating the state space to a compact set, together with an error bound that depends on the asymptotic properties of the transition kernel of the original process. The overall abstraction algorithm, which in practice hinges on piecewise constant approximations of the density functions of the Markov process, is extended to higher-order function approximations: these can lead to improved error bounds and lower computational requirements. The approach is tested in practice on the computation of probabilistic invariance for the Markov process under study, and is compared to a known alternative approach from the literature.
    Comment: 29 pages, Journal of Logical Methods in Computer Science
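
    A minimal sketch of the partition-based abstraction, under assumed toy dynamics (a one-dimensional linear-Gaussian process; every constant below is our own choice, not the paper's): the transition kernel is evaluated at cell centres, mirroring the piecewise constant approximation, and the resulting finite-state chain propagates the state distribution.

```python
import numpy as np
from scipy.stats import norm

# Assumed toy dynamics: x' = 0.8*x + w with w ~ N(0, 0.5^2), truncated
# to [-3, 3] and partitioned into n uniform cells.
n = 60
edges = np.linspace(-3.0, 3.0, n + 1)
centres = 0.5 * (edges[:-1] + edges[1:])

def kernel_cdf(x, y):
    """P(next state <= y | current state x) for the assumed kernel."""
    return norm.cdf(y, loc=0.8 * x, scale=0.5)

# P[i, j]: probability of jumping from cell i to cell j, computed from
# the representative point of cell i (piecewise constant approximation).
P = np.array([[kernel_cdf(c, edges[j + 1]) - kernel_cdf(c, edges[j])
               for j in range(n)] for c in centres])

# Propagate an initial distribution for 10 steps on the abstract chain.
p0 = np.zeros(n)
p0[np.searchsorted(edges, 0.0) - 1] = 1.0  # start in the cell left of 0
pk = p0 @ np.linalg.matrix_power(P, 10)
print(pk.sum())  # < 1: mass lost to the truncation outside [-3, 3]
```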

    Approximating a Behavioural Pseudometric without Discount for Probabilistic Systems

    Desharnais, Gupta, Jagadeesan and Panangaden introduced a family of behavioural pseudometrics for probabilistic transition systems. These pseudometrics are a quantitative analogue of probabilistic bisimilarity: distance zero captures probabilistic bisimilarity. Each pseudometric has a discount factor, a real number in the interval (0, 1]. The smaller the discount factor, the more the future is discounted; if the discount factor is one, the future is not discounted at all. Desharnais et al. showed that the behavioural distances can be calculated up to any desired degree of accuracy if the discount factor is smaller than one. In this paper, we show that the distances can also be approximated if the future is not discounted. A key ingredient of our algorithm is Tarski's decision procedure for the first-order theory of real closed fields. By exploiting the Kantorovich-Rubinstein duality theorem we can restrict to the existential fragment, for which more efficient decision procedures exist.
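
    The fixed-point characterisation behind these pseudometrics can at least be approximated numerically. The sketch below, on a hypothetical labelled Markov chain of our own, performs Kleene iteration of the undiscounted operator, solving each Kantorovich lifting as a linear program; unlike the paper's Tarski-based procedure, it only approaches the least fixed point from below.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 3-state labelled Markov chain: labels[s] is the label of
# state s and P is the row-stochastic transition matrix.
labels = np.array([0, 0, 1])
P = np.array([[0.5, 0.3, 0.2],
              [0.4, 0.4, 0.2],
              [0.0, 0.0, 1.0]])
n = len(labels)

def kantorovich(mu, nu, d):
    """Optimal transport cost between mu and nu with ground distance d,
    solved as a linear program over couplings."""
    A_eq, b_eq = [], []
    for i in range(n):  # row marginals must equal mu
        row = np.zeros((n, n)); row[i, :] = 1
        A_eq.append(row.ravel()); b_eq.append(mu[i])
    for j in range(n):  # column marginals must equal nu
        col = np.zeros((n, n)); col[:, j] = 1
        A_eq.append(col.ravel()); b_eq.append(nu[j])
    return linprog(d.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None)).fun

# Kleene iteration of the undiscounted pseudometric operator; each sweep
# rebuilds the distance matrix from the previous one (Jacobi-style).
d = np.zeros((n, n))
for _ in range(50):
    d = np.array([[1.0 if labels[s] != labels[t]
                   else kantorovich(P[s], P[t], d)
                   for t in range(n)] for s in range(n)])
print(np.round(d, 3))
```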

    Linear Distances between Markov Chains

    We introduce a general class of distances (metrics) between Markov chains, which are based on linear behaviour. This class encompasses distances given topologically (such as the total variation distance or trace distance) as well as distances given by temporal logics or automata. We investigate which of these distances can be approximated by observing the systems, i.e. by black-box testing or simulation, and we provide both negative and positive results.
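
    For the total variation case, black-box estimation is easy to sketch: sample fixed-length traces from both chains and compare the empirical trace distributions. The chains and constants below are hypothetical, and the fixed-length estimate is only a lower bound on the distance over infinite traces (the empirical estimator is also biased upwards at finite sample sizes).

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Two hypothetical two-state labelled Markov chains, each given as a
# (transition matrix, state labels) pair.
A = (np.array([[0.9, 0.1], [0.2, 0.8]]), "ab")
B = (np.array([[0.8, 0.2], [0.2, 0.8]]), "ab")

def sample_trace(chain, length):
    """Simulate one labelled trace of the given length from state 0."""
    P, labels = chain
    s, out = 0, []
    for _ in range(length):
        out.append(labels[s])
        s = rng.choice(len(P), p=P[s])
    return "".join(out)

def empirical_tv(c1, c2, length=6, samples=20000):
    """Monte-Carlo estimate of the total variation distance between the
    chains' distributions over traces of a fixed length."""
    f1 = Counter(sample_trace(c1, length) for _ in range(samples))
    f2 = Counter(sample_trace(c2, length) for _ in range(samples))
    return 0.5 * sum(abs(f1[t] - f2[t]) for t in set(f1) | set(f2)) / samples

print(empirical_tv(A, B))
```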

    Elimination of Intermediate Species in Multiscale Stochastic Reaction Networks

    We study networks of biochemical reactions modelled by continuous-time Markov processes. Such networks typically contain many molecular species and reactions and are hard to study analytically as well as by simulation. In particular, we are interested in reaction networks with intermediate species, such as the substrate-enzyme complex in the Michaelis-Menten mechanism. Such species are present in virtually all real-world networks; they are typically short-lived, degraded at a fast rate and hard to observe experimentally. We provide conditions under which the Markov process of a multiscale reaction network with intermediate species is approximated, in finite-dimensional distribution, by the Markov process of a simpler reduced reaction network without intermediate species. We do so by embedding the Markov processes into a one-parameter family of processes, where reaction rates and species abundances are scaled in the parameter. Further, we show that there are close links between these stochastic models and deterministic ODE models of the same networks.
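
    A hedged toy version of the reduction, built around the Michaelis-Menten mechanism named in the abstract: the full network keeps the complex C, while the reduced network eliminates it by folding the conversion probability into a single S -> P reaction. All rate constants and population sizes are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def gillespie(x0, stoich, rates, t_end):
    """Plain Gillespie SSA: x0 initial counts, stoich a list of update
    vectors, rates(x) a vector of reaction propensities."""
    t, x = 0.0, np.array(x0, dtype=float)
    while True:
        a = rates(x)
        total = a.sum()
        if total == 0:
            break
        dt = rng.exponential(1.0 / total)
        if t + dt > t_end:
            break
        t += dt
        x = x + stoich[rng.choice(len(a), p=a / total)]
    return x

# Full network S + E -> C, C -> S + E, C -> P + E, with the complex C as
# a fast, short-lived intermediate.  Species order: [S, E, C, P]; the
# scaling parameter N makes the two reactions consuming C fast.
N = 100.0
full_stoich = [np.array([-1, -1, 1, 0]),   # binding
               np.array([1, 1, -1, 0]),    # dissociation
               np.array([0, 1, -1, 1])]    # conversion
full_rates = lambda x: np.array([0.01 * x[0] * x[1], N * x[2], N * x[2]])

# Reduced network S -> P with C eliminated: binding propensity (with the
# ~10 free enzyme copies) times the conversion probability N/(N+N) = 1/2.
red_stoich = [np.array([-1, 1])]           # species order: [S, P]
red_rates = lambda x: np.array([0.01 * x[0] * 10 * 0.5])

xf = gillespie([200, 10, 0, 0], full_stoich, full_rates, t_end=5.0)
xr = gillespie([200, 0], red_stoich, red_rates, t_end=5.0)
print("full model:    P =", xf[3])
print("reduced model: P =", xr[1])
```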

    Equilibria, Fixed Points, and Complexity Classes

    Many models from a variety of areas involve the computation of an equilibrium or fixed point of some kind. Examples include Nash equilibria in games; market equilibria; computing optimal strategies and the values of competitive games (stochastic and other games); stable configurations of neural networks; analysing basic stochastic models for evolution, like branching processes, and for language, like stochastic context-free grammars; and models that incorporate the basic primitives of probability and recursion, like recursive Markov chains. It is not known whether these problems can be solved in polynomial time. There are certain common computational principles underlying different types of equilibria, which are captured by the complexity classes PLS, PPAD, and FIXP. Representative complete problems for these classes are, respectively, pure Nash equilibria in games where they are guaranteed to exist, (mixed) Nash equilibria in 2-player normal form games, and (mixed) Nash equilibria in normal form games with 3 (or more) players. This paper reviews the underlying computational principles and the corresponding classes.
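
    As a tiny concrete instance of such fixed-point problems, the extinction probability of a branching process is the least nonnegative solution of a polynomial equation x = f(x) and can be approached from below by Kleene iteration; the offspring distribution below is our own toy example, not one from the paper.

```python
# Toy branching process: an individual dies with probability 0.3,
# survives unchanged with probability 0.3, or splits in two with
# probability 0.4.  Its extinction probability is the least nonnegative
# fixed point of the polynomial below (mean offspring 1.1 > 1, so it is
# strictly less than 1).
f = lambda x: 0.3 + 0.3 * x + 0.4 * x ** 2

x = 0.0
for _ in range(200):  # Kleene iteration: converges monotonically
    x = f(x)          # upwards to the least fixed point
print(x)              # approaches 0.75; the other root of x = f(x) is 1
```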

    Fluid passage-time calculation in large Markov models

    Recent developments in the analysis of large Markov models facilitate the fast approximation of transient characteristics of the underlying stochastic process. So-called fluid analysis makes it possible to consider previously intractable models whose underlying discrete state space grows exponentially as model components are added. In this work, we show how fluid approximation techniques may be used to extract passage-time measures from performance models. We focus on two types of passage measure: passage times involving individual components, and passage times which capture the time taken for a population of components to evolve. Specifically, we show that for models of sufficient scale, passage-time distributions can be well approximated by a deterministic fluid-derived passage-time measure. Where models are not of sufficient scale, we are able to generate approximate bounds for the entire cumulative distribution function of these passage-time random variables, using moment-based techniques. Finally, we show that for some passage-time measures involving individual components, the cumulative distribution function can be directly approximated by fluid techniques.
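
    A minimal fluid sketch on a hypothetical two-mode population model of our own (not one of the paper's case studies): the CTMC is replaced by its mean-field ODE, and the deterministic passage time is read off as the first instant the fluid trajectory crosses a population threshold.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical closed workload model: clients cycle between "thinking"
# and "waiting"; the fluid limit replaces the CTMC over client counts
# with an ODE over population fractions.
req_rate, serve_rate = 2.0, 1.0

def fluid(t, y):
    thinking, waiting = y
    return [serve_rate * waiting - req_rate * thinking,
            req_rate * thinking - serve_rate * waiting]

# Deterministic passage time: the first time at least half of the
# (initially all-thinking) population is waiting for service.
def half_waiting(t, y):
    return y[1] - 0.5
half_waiting.terminal = True

sol = solve_ivp(fluid, [0, 20.0], [1.0, 0.0], events=half_waiting)
print("fluid passage time:", sol.t_events[0][0])  # ~ ln(4)/3 = 0.462
```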