    A sequential update algorithm for computing the stationary distribution vector in upper block-Hessenberg Markov chains

    This paper proposes a new algorithm for computing the stationary distribution vector in continuous-time upper block-Hessenberg Markov chains. To this end, we consider the last-block-column-linearly-augmented (LBCL-augmented) truncation of the (infinitesimal) generator of the upper block-Hessenberg Markov chain. The LBCL-augmented truncation is a linearly-augmented truncation such that the augmentation distribution has its probability mass only on the last block column. We first derive an upper bound for the total variation distance between the respective stationary distribution vectors of the original generator and its LBCL-augmented truncation. Based on the upper bound, we then establish a series of linear fractional programming (LFP) problems to obtain augmentation distribution vectors such that the bound converges to zero. Using the optimal solutions of the LFP problems, we construct a matrix-infinite-product (MIP) form of the original (i.e., not approximate) stationary distribution vector and develop a sequential update algorithm for computing the MIP form. Finally, we demonstrate the applicability of our algorithm to BMAP/M/∞ queues and M/M/s retrial queues.
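    The sketch below is not the paper's LFP-based MIP algorithm; it only illustrates the underlying idea of a last-column linearly-augmented truncation on a scalar birth-death special case of an upper block-Hessenberg generator: the rate mass cut off by the truncation is redirected to the last column, and the stationary vector of the truncated generator is recomputed as the truncation level grows. All rates and function names are assumptions chosen for illustration.

        import numpy as np

        def truncated_generator(N, lam=2.0, mu=1.0):
            # Generator of an M/M/inf-style birth-death chain (a scalar special case of an
            # upper block-Hessenberg generator) truncated at level N; the rate lost by the
            # truncation is redirected to the last column (last-column augmentation).
            Q = np.zeros((N + 1, N + 1))
            for i in range(N + 1):
                if i < N:
                    Q[i, i + 1] = lam              # arrival (up one level)
                if i > 0:
                    Q[i, i - 1] = i * mu           # state-dependent departure (down one level)
                Q[i, i] = -(lam + i * mu)          # diagonal of the *untruncated* generator
            deficit = -Q.sum(axis=1)               # rate mass cut off by the truncation
            Q[:, -1] += deficit                    # put all of it on the last column
            return Q

        def stationary(Q):
            # Solve pi Q = 0 with pi summing to 1 by replacing one balance equation.
            n = Q.shape[0]
            A = np.vstack([Q.T[:-1], np.ones(n)])
            b = np.zeros(n)
            b[-1] = 1.0
            return np.linalg.lstsq(A, b, rcond=None)[0]

        for N in (10, 20, 40):
            pi = stationary(truncated_generator(N))
            print(N, pi[:5])                       # low-level probabilities stabilise as N grows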

    Order Reduction of the Chemical Master Equation via Balanced Realisation

    We consider a Markov process in continuous time with a finite number of discrete states. The time-dependent probabilities of being in any state of the Markov chain are governed by a set of ordinary differential equations, whose dimension might be large even for trivial systems. Here, we derive a reduced ODE set that accurately approximates the probabilities of subspaces of interest with a known error bound. Our methodology is based on model reduction by balanced truncation and can be considerably more computationally efficient than the Finite State Projection algorithm (FSP) when used for obtaining transient responses. We show the applicability of our method by analysing stochastic chemical reactions. First, we obtain a reduced-order model for the infinitesimal generator of a Markov chain that models a reversible, monomolecular reaction. In this example, we approximate the output of a model with 301 states by a reduced model with 10 states. We then obtain a reduced-order model for a catalytic conversion of a substrate into a product and compare its dynamics with a stochastic Michaelis-Menten representation. For this example, we highlight the computational savings obtained with the reduced-order model. Finally, we revisit the catalytic conversion of the substrate, obtaining a lower-order model that approximates the probability of having predefined ranges of product molecules.
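    As a rough illustration of the balanced-truncation step (not the paper's CME-specific construction, which must cope with a generator that is only marginally stable), the sketch below reduces a small stable linear system x' = Ax + Bu, y = Cx with the standard square-root Gramian-based algorithm. The example matrices and the reduced order r are arbitrary choices for demonstration.

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

        def balanced_truncation(A, B, C, r):
            # Reduce a stable LTI system (A, B, C) to order r by square-root balanced
            # truncation: solve both Gramians, balance them, and keep the r dominant
            # Hankel singular values. Generic textbook construction, illustrative only.
            P = solve_continuous_lyapunov(A, -B @ B.T)     # controllability Gramian
            Q = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian
            Lp = cholesky(P, lower=True)
            Lq = cholesky(Q, lower=True)
            U, s, Vt = svd(Lq.T @ Lp)
            S_r = np.diag(s[:r] ** -0.5)
            T = Lp @ Vt[:r].T @ S_r                        # right projection
            Ti = S_r @ U[:, :r].T @ Lq.T                   # left projection (Ti @ T = I_r)
            return Ti @ A @ T, Ti @ B, C @ T, s

        # Tiny stable example: a chain of three leaky compartments.
        A = np.array([[-2.0,  1.0,  0.0],
                      [ 1.0, -2.0,  1.0],
                      [ 0.0,  1.0, -2.0]])
        B = np.array([[1.0], [0.0], [0.0]])
        C = np.array([[0.0, 0.0, 1.0]])
        Ar, Br, Cr, hankel = balanced_truncation(A, B, C, r=2)
        print("Hankel singular values:", hankel)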

    Compositional Approximate Markov Chain Aggregation for PEPA Models

    Approximate performability and dependability analysis using generalized stochastic Petri Nets

    Since current-day fault-tolerant and distributed computer and communication systems tend to be large and complex, their corresponding performability models suffer from the same characteristics. Calculating performability measures from these models is therefore a difficult and time-consuming task. To alleviate the largeness and complexity problem to some extent, we use generalized stochastic Petri nets to describe the models and to automatically generate the underlying Markov reward models. Even so, many models cannot be solved with current numerical techniques, although they are conveniently and often compactly described. In this paper we discuss two heuristic state-space truncation techniques that allow us to obtain very good approximations of the steady-state performability while assessing only a few percent of the states of the untruncated model. For a class of reversible models we derive explicit lower and upper bounds on the exact steady-state performability. For a much wider class of models a truncation theorem exists that allows one to bound the error made in the truncation. We discuss this theorem in the context of approximate performability models and comment on its applicability. For all the proposed truncation techniques we present examples showing their usefulness.
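    A toy illustration of the state-space truncation idea (not the GSPN machinery of the paper): for a reversible birth-death model, the stationary weights follow from detailed balance, so truncating the state space and renormalising over the retained states already reproduces a steady-state measure accurately from a small fraction of the states. The rates, reward vector, and truncation levels below are assumptions for illustration.

        import numpy as np

        # Unnormalised stationary weights of a reversible birth-death chain via detailed
        # balance: w[0] = 1, w[k] = w[k-1] * lam / mu (M/M/1-like rates, illustrative).
        lam, mu = 0.8, 1.0
        n_states = 5000                              # 'exact' model with a large state space
        w_full = (lam / mu) ** np.arange(n_states)
        pi_full = w_full / w_full.sum()
        reward = np.arange(n_states)                 # performability measure: mean queue length

        exact = (pi_full * reward).sum()
        for keep in (20, 50, 100):                   # retain only a few percent of the states
            pi_trunc = w_full[:keep] / w_full[:keep].sum()   # renormalise over retained states
            approx = (pi_trunc * reward[:keep]).sum()
            print(keep, approx, exact)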

    Improving the Convergence Properties of the Data Augmentation Algorithm with an Application to Bayesian Mixture Modeling

    The reversible Markov chains that drive the data augmentation (DA) and sandwich algorithms define self-adjoint operators whose spectra encode the convergence properties of the algorithms. When the target distribution has uncountable support, as is nearly always the case in practice, it is generally quite difficult to get a handle on these spectra. We show that, if the augmentation space is finite, then (under regularity conditions) the operators defined by the DA and sandwich chains are compact, and the spectra are finite subsets of [0,1). Moreover, we prove that the spectrum of the sandwich operator dominates the spectrum of the DA operator in the sense that the ordered elements of the former are all less than or equal to the corresponding elements of the latter. As a concrete example, we study a widely used DA algorithm for the exploration of posterior densities associated with Bayesian mixture models [J. Roy. Statist. Soc. Ser. B 56 (1994) 363-375]. In particular, we compare this mixture DA algorithm with an alternative algorithm proposed by Frühwirth-Schnatter [J. Amer. Statist. Assoc. 96 (2001) 194-209] that is based on random label switching. Published in Statistical Science (http://dx.doi.org/10.1214/11-STS365) by the Institute of Mathematical Statistics (http://www.imstat.org).
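    The following toy computation illustrates the finite-spectrum phenomenon in a fully finite setting (the paper treats an uncountable target space with a finite augmentation space; the joint distribution below is randomly generated purely for illustration). The DA chain on x draws y given x and then x' given y; because the augmentation space here has only three points, the DA transition matrix has at most three nonzero eigenvalues, all real, with one equal to 1 and the rest in [0,1).

        import numpy as np

        rng = np.random.default_rng(0)
        joint = rng.random((6, 3))                   # toy joint pi(x, y): 6 target states, 3 augmentation states
        joint /= joint.sum()

        px = joint.sum(axis=1)                       # marginal of x
        py = joint.sum(axis=0)                       # marginal of y
        p_y_given_x = joint / px[:, None]            # rows: p(y | x)
        p_x_given_y = (joint / py[None, :]).T        # rows: p(x | y)

        # DA chain on x: draw y | x, then x' | y.
        P_da = p_y_given_x @ p_x_given_y
        eigs = np.sort(np.linalg.eigvals(P_da).real)[::-1]
        print("DA spectrum:", np.round(eigs, 4))     # 1, then values in [0, 1), then zeros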
    • ā€¦
    corecore