
    Robustness of large-scale stochastic matrices to localized perturbations

    Upper bounds are derived on the total variation distance between the invariant distributions of two stochastic matrices differing on a subset W of rows. Such bounds depend on three parameters: the mixing time and the minimal expected hitting time on W for the Markov chain associated to one of the matrices; and the escape time from W for the Markov chain associated to the other matrix. These results, obtained through coupling techniques, prove particularly useful in scenarios where W is a small subset of the state space, even if the difference between the two matrices is not small in any norm. Several applications to large-scale network problems are discussed, including robustness of Google's PageRank algorithm, distributed averaging and consensus algorithms, and interacting particle systems. (Comment: 12 pages, 4 figures)
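    To make the claim concrete, here is a small numerical sketch (our illustration, not the paper's coupling argument): perturb a few rows of a dense random stochastic matrix and measure the total variation distance between the two invariant distributions. The chain size, the set W, and the random construction are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
W = [0, 1, 2]                              # small perturbed subset of rows

def random_stochastic(rows, cols, rng):
    M = rng.random((rows, cols))
    return M / M.sum(axis=1, keepdims=True)

P = random_stochastic(n, n, rng)
Q = P.copy()
Q[W] = random_stochastic(len(W), n, rng)   # rows in W differ arbitrarily

def invariant(P, iters=2000):
    # Power iteration on the left: pi P = pi.
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi / pi.sum()

tv = 0.5 * np.abs(invariant(P) - invariant(Q)).sum()
print(f"TV distance between the invariant distributions: {tv:.4f}")
```

    Even though P and Q differ maximally on the rows in W, the resulting total variation distance is small because W is a tiny fraction of a fast-mixing state space, which is the regime the abstract describes.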

    Theoretical analysis of a Stochastic Approximation approach for computing Quasi-Stationary distributions

    This paper studies a method, proposed in the physics literature by [8, 7, 10], for estimating the quasi-stationary distribution. In contrast to existing eigenvector-estimation methods, it eliminates the need for explicit transition matrix manipulation to extract the principal eigenvector. Our paper analyzes the algorithm by casting it as a stochastic approximation (Robbins-Monro) algorithm [23, 16]. In doing so, we prove its convergence and obtain its rate of convergence. Based on this insight, we also give an example where the rate of convergence is very slow; this problem can be alleviated by an improved version of the algorithm, which is given in this paper. Numerical experiments demonstrate the effectiveness of the improved method.
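    The abstract does not reproduce the algorithm itself; the sketch below implements the reinforced-simulation idea it alludes to, assuming the usual formulation: run the absorbed chain, and whenever it is killed, restart from a state drawn from the current estimate of the quasi-stationary distribution, which is itself updated with decreasing Robbins-Monro step sizes. The toy substochastic kernel and step-size schedule are our own choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Substochastic kernel on {0, 1, 2}; the missing row mass is the
# absorption probability.
K = np.array([[0.5, 0.3, 0.1],
              [0.2, 0.5, 0.2],
              [0.1, 0.3, 0.5]])
n = K.shape[0]

mu = np.full(n, 1.0 / n)                   # running estimate of the QSD
x = 0
for t in range(1, 100_000):
    p = K[x]
    j = rng.choice(n + 1, p=np.append(p, 1.0 - p.sum()))
    x = rng.choice(n, p=mu) if j == n else j   # on absorption, resample
    gamma = 1.0 / t                            # Robbins-Monro step size
    e = np.zeros(n)
    e[x] = 1.0
    mu = (1.0 - gamma) * mu + gamma * e        # empirical-measure update

print("estimated quasi-stationary distribution:", np.round(mu, 3))
```

    Note that no eigenvector computation on K is performed anywhere; the estimate emerges purely from simulation, which is the feature the abstract emphasizes.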

    General solution of the Poisson equation for Quasi-Birth-and-Death processes

    We consider the Poisson equation $(I-P)\boldsymbol{u}=\boldsymbol{g}$, where $P$ is the transition matrix of a Quasi-Birth-and-Death (QBD) process with infinitely many levels, $\boldsymbol{g}$ is a given infinite-dimensional vector and $\boldsymbol{u}$ is the unknown. Our main result is to provide the general solution of this equation. To this purpose, we use the block tridiagonal and block Toeplitz structure of the matrix $P$ to obtain a set of matrix difference equations, which are solved by constructing suitable resolvent triples.
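    The paper's resolvent-triple construction works directly on the infinite chain; the sketch below takes the simpler route of truncating the QBD to finitely many levels and solving the resulting finite Poisson equation numerically, purely as an illustration of the setup. The blocks A0, A1, A2, the truncation level, and the boundary folding are illustrative assumptions.

```python
import numpy as np

m, L = 2, 50                              # phases per level, truncation
A0 = np.array([[0.2, 0.1], [0.1, 0.2]])   # one level up
A1 = np.array([[0.2, 0.1], [0.1, 0.2]])   # same level
A2 = np.array([[0.3, 0.1], [0.1, 0.3]])   # one level down
# Rows of A0 + A1 + A2 sum to 1, so the interior of P is stochastic.

N = L * m
P = np.zeros((N, N))
for l in range(L):
    s = slice(l * m, (l + 1) * m)
    P[s, s] = A1
    if l > 0:
        P[s, (l - 1) * m:l * m] = A2
    else:
        P[s, s] += A2                     # fold down-blocks into level 0
    if l < L - 1:
        P[s, (l + 1) * m:(l + 2) * m] = A0
    else:
        P[s, s] += A0                     # fold up-blocks into the top level

# Stationary distribution, used to center g so a solution exists
# (the equation (I - P)u = g is solvable only when pi @ g = 0).
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

h = np.arange(N, dtype=float)             # an arbitrary cost vector
g = h - (pi @ h)                          # centered: pi @ g = 0

u, *_ = np.linalg.lstsq(np.eye(N) - P, g, rcond=None)
print("residual:", np.linalg.norm((np.eye(N) - P) @ u - g))
```

    The least-squares solve picks one solution of the singular system; as in the infinite-level theory, the solution is only determined up to an additive constant on each recurrent class.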

    Bounding inferences for large-scale continuous-time Markov chains : a new approach based on lumping and imprecise Markov chains

    If the state space of a homogeneous continuous-time Markov chain is too large, making inferences becomes computationally infeasible. Fortunately, the state space of such a chain is usually too detailed for the inferences we are interested in, in the sense that a less detailed, smaller state space suffices to unambiguously formalise the inference. However, in general this so-called lumped state space inhibits computing exact inferences because the corresponding dynamics are unknown and/or intractable to obtain. We address this issue by considering an imprecise continuous-time Markov chain. In this way, we are able to provide guaranteed lower and upper bounds for the inferences of interest, without suffering from the curse of dimensionality.
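    A hedged sketch of the bounding idea (not the paper's lumping construction): for a birth-death chain whose birth and death rates are only known to lie in intervals, a guaranteed lower bound on E[f(X_T)] follows by stepping the lower transition operator backwards in time, at each state choosing the rates in the intervals that minimise the drift term. The chain, the intervals, the horizon, and f below are illustrative assumptions.

```python
import numpy as np

n = 20                       # states 0 .. n-1
lam = (0.8, 1.2)             # birth-rate interval
mu = (0.9, 1.1)              # death-rate interval
T, steps = 5.0, 5000
dt = T / steps               # small enough that 1 - dt*(max total rate) > 0

f = np.zeros(n)
f[n // 2:] = 1.0             # f = indicator of the upper half of the chain

def lower_step(g):
    # One backward Euler step of the lower transition operator:
    # minimise each drift term over the rate intervals separately.
    h = g.copy()
    for i in range(n):
        drift = 0.0
        if i < n - 1:
            d = g[i + 1] - g[i]
            drift += (lam[0] if d > 0 else lam[1]) * d
        if i > 0:
            d = g[i - 1] - g[i]
            drift += (mu[0] if d > 0 else mu[1]) * d
        h[i] = g[i] + dt * drift
    return h

g = f.copy()
for _ in range(steps):
    g = lower_step(g)
print(f"lower bound on E[f(X_T) | X_0 = 0]: {g[0]:.4f}")
```

    Running the same recursion with the opposite endpoint choices yields the matching upper bound, so the pair brackets the inference for every chain consistent with the interval rates.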

    Markov-modulated Brownian motion with two reflecting barriers

    We consider a Markov-modulated Brownian motion reflected to stay in a strip [0, B]. The stationary distribution of this process is known to have a simple form under some assumptions. We provide a short probabilistic argument leading to this result and explaining its simplicity. Moreover, this argument allows for generalizations, including the distribution of the reflected process at an independent exponentially distributed epoch. Our second contribution concerns the transient behavior of the reflected system: we identify the joint law of the triple $(t, X(t), J(t))$ at inverse local times. (Comment: 13 pages, 1 figure)
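    For readers who want to see the process, here is a minimal simulation sketch (our illustration, not the paper's analytical argument): an Euler scheme for Markov-modulated Brownian motion kept in the strip [0, B] by clipping, which approximates two-sided Skorokhod reflection on a fine grid. The generator Q, the per-regime drifts and volatilities, B, and the grid are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])       # generator of the modulating chain J
drift = np.array([0.5, -1.0])     # per-regime drift of X
sigma = np.array([1.0, 0.5])      # per-regime volatility of X
B, dt, steps = 2.0, 1e-3, 500_000

x, j = 0.0, 0
samples = np.empty(steps)
for k in range(steps):
    # Regime switch with probability ~ -Q[j, j] * dt on this step.
    if rng.random() < -Q[j, j] * dt:
        j = 1 - j                 # two regimes: jump to the other one
    x += drift[j] * dt + sigma[j] * np.sqrt(dt) * rng.standard_normal()
    x = min(max(x, 0.0), B)       # two-sided reflection by clipping
    samples[k] = x

# Discard the first half as burn-in before estimating stationary quantities.
print(f"stationary mean estimate: {samples[steps // 2:].mean():.3f}")
```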

    Opinion influence and evolution in social networks: a Markovian agents model

    In this paper, the effect of filtering algorithms managed by social network platforms on collective opinions is modeled and investigated. A stochastic multi-agent model for opinion dynamics is proposed that accounts for a centralized tuning of the strength of interaction between individuals. The evolution of each individual opinion is described by a Markov chain whose transition rates are affected by the opinions of the neighbors through influence parameters. The properties of this model are studied in a general setting as well as in interesting special cases. A general result is that the overall model of the social network behaves like a high-dimensional Markov chain, which is amenable to Monte Carlo simulation. Under the assumption of identical agents and unbiased influence, it is shown that the influence intensity affects the variance, but not the expectation, of the number of individuals sharing a certain opinion. Moreover, a detailed analysis is carried out for the so-called Peer Assembly, which describes the evolution of binary opinions in a completely connected graph of identical agents. It is shown that the Peer Assembly can be lumped into a birth-death chain that admits a complete analytical characterization. Both analytical results and simulation experiments are used to highlight the emergence of particular collective behaviours, e.g. consensus and herding, depending on the centralized tuning of the influence parameters. (Comment: Revised version, May 2018)
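    A hedged sketch of the lumped dynamics (the paper's exact rates are not reproduced here): N identical agents with binary opinions on a complete graph collapse to a birth-death chain on k, the number of agents holding opinion 1. Assuming each dissenting agent flips at a base rate alpha plus an influence term beta times the fraction holding the other opinion, a Gillespie simulation exhibits the stated effect: the variance of k grows with the influence intensity beta while its mean stays near N/2.

```python
import numpy as np

rng = np.random.default_rng(3)
N, alpha = 100, 1.0

def simulate(beta, T=200.0):
    # Gillespie simulation of the lumped birth-death chain on k.
    k, t, ks = N // 2, 0.0, []
    while t < T:
        up = (N - k) * (alpha + beta * k / N)     # a 0-agent flips to 1
        down = k * (alpha + beta * (N - k) / N)   # a 1-agent flips to 0
        rate = up + down
        t += rng.exponential(1.0 / rate)          # time to the next flip
        k += 1 if rng.random() < up / rate else -1
        ks.append(k)
    return np.array(ks)

for beta in (0.0, 5.0):
    ks = simulate(beta)
    print(f"beta={beta}: mean={ks.mean():.1f}, std={ks.std():.1f}")
```

    The rates are symmetric under exchanging the two opinions, which is why the mean is insensitive to beta; stronger influence only amplifies the fluctuations, pushing the assembly toward herding around one opinion or the other.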