5 research outputs found

    Analysis of an on-line algorithm for solving large Markov chains

    Algorithms for ranking of web pages, such as Google PageRank, assign importance scores according to the stationary distribution of a Markov random walk on the web graph. Although in the classical search scheme the ranking scores are pre-computed off-line, several challenging problems in contemporary web search, such as personalized search and search in entity graphs, require on-line PageRank computation. In this work we present a probabilistic point of view on an original on-line algorithm proposed by Abiteboul, Preda and Cobena [1]. According to this algorithm, at the beginning each page receives an equal amount of ‘cash’, and every time a page is visited by a random walk, it distributes its cash among its outgoing links. The PageRank score of a page is then proportional to the amount of cash transferred from this page. In this paper, instead of dealing with the variable ‘cash’, which is continuous, we create a two-dimensional discrete ‘cat and mouse’ Markov chain such that the amount of cash on each page can be expressed via probabilities for this new Markov chain. We also indicate further research directions, such as the analysis of the cat-and-mouse chain in the case when the cat’s movements are described by a classical stochastic process such as the M/M/1 random walk.
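    The cash-distribution scheme described above can be sketched in a few lines; this is a minimal illustration of the idea, not the authors' implementation, and the visiting rule (here, a uniform random-walk step) and function names are illustrative assumptions.

```python
import random

def online_pagerank(graph, steps=30000, seed=0):
    """Minimal sketch of the 'cash' idea: every page starts with equal
    cash; when the walk visits a page, the page's cash is split evenly
    among its outgoing links, and the total cash a page has ever
    distributed (its history) is proportional to its PageRank score."""
    rng = random.Random(seed)
    nodes = list(graph)
    cash = {v: 1.0 / len(nodes) for v in nodes}   # current cash per page
    history = {v: 0.0 for v in nodes}             # cash ever distributed
    current = rng.choice(nodes)
    for _ in range(steps):
        amount, out = cash[current], graph[current]
        history[current] += amount                # record distributed cash
        cash[current] = 0.0
        for w in out:                             # split among out-links
            cash[w] += amount / len(out)
        current = rng.choice(out)                 # assumed visiting rule
    total = sum(history.values())
    return {v: history[v] / total for v in nodes}
```

    On a small symmetric graph the scores approach the uniform stationary distribution of the underlying walk.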

    A scaling analysis of a cat and mouse Markov chain

    Motivated by an original on-line page-ranking algorithm, starting from an arbitrary Markov chain $(C_n)$ on a discrete state space $\mathcal{S}$, a Markov chain $(C_n, M_n)$ on the product space $\mathcal{S}^2$, the cat and mouse Markov chain, is constructed. The first coordinate of this Markov chain behaves like the original Markov chain and the second component changes only when both coordinates are equal. The asymptotic properties of this Markov chain are investigated. A representation of its invariant measure is in particular obtained. When the state space is infinite, it is shown that this Markov chain is in fact null recurrent if the initial Markov chain $(C_n)$ is positive recurrent and reversible. In this context, the scaling properties of the location of the second component, the mouse, are investigated in various situations: simple random walks in $\mathbb{Z}$ and $\mathbb{Z}^2$, the reflected simple random walk in $\mathbb{N}$, and also in a continuous-time setting. For several of these processes, a time scaling with rapid growth gives an interesting asymptotic behavior related to limit results for occupation times and rare events of Markov processes.
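    The product-chain construction can be sketched as a single transition rule. The exact timing of the mouse's jump (here, when the cat steps onto the mouse's site) is our reading of the construction and should be checked against the paper; the kernel shown in the usage is the simple random walk, one of the cases the abstract mentions.

```python
import random

def cat_and_mouse_step(state, kernel, rng):
    """One transition of the cat-and-mouse chain built from a base
    Markov kernel: the cat always moves according to the kernel, while
    the mouse moves (with the same kernel) only on the occasions when
    the cat lands on it -- otherwise the mouse stays put."""
    cat, mouse = state
    new_cat = kernel(cat, rng)       # first coordinate: the base chain
    if new_cat == mouse:             # assumed timing of the meeting
        mouse = kernel(mouse, rng)   # the mouse flees, same kernel
    return new_cat, mouse
```

    With the simple random walk on $\mathbb{Z}$ as the base kernel, the mouse's position changes only at meeting times, which is why the rapid time scalings discussed in the abstract are needed to see it move.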

    Red Light Green Light Method for Solving Large Markov Chains

    Discrete-time discrete-state finite Markov chains are versatile mathematical models for a wide range of real-life stochastic processes. One of the most common tasks in studies of Markov chains is computation of the stationary distribution. Without loss of generality, and drawing our motivation from applications to large networks, we interpret this problem as one of computing the stationary distribution of a random walk on a graph. We propose a new controlled, easily distributed algorithm for this task, briefly summarized as follows: at the beginning, each node receives a fixed amount of cash (positive or negative), and at each iteration, some nodes receive `green light' to distribute their wealth or debt proportionally to the transition probabilities of the Markov chain; the stationary probability of a node is computed as the ratio of the cash distributed by this node to the total cash distributed by all nodes together. Our method includes as special cases a wide range of known, very different, and previously disconnected methods, including power iterations, Gauss-Southwell, and online distributed algorithms. We prove exponential convergence of our method, demonstrate its high efficiency, and derive scheduling strategies for the green light that achieve a convergence rate faster than state-of-the-art algorithms.
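    A minimal version of the scheme summarized above can be sketched as follows; the default schedule shown (all nodes green every round, which recovers power iteration) and the variable names are our illustrative assumptions, not the paper's notation.

```python
import numpy as np

def red_light_green_light(P, n_iters=2000, green=None):
    """Sketch of the cash-pushing scheme: green-lit nodes add their
    current cash to their 'distributed' total and push it to their
    neighbors in proportion to the transition probabilities; the
    stationary estimate is each node's share of all distributed cash."""
    n = P.shape[0]
    cash = np.full(n, 1.0 / n)        # initial cash (all positive here)
    distributed = np.zeros(n)         # cash each node has ever pushed
    for t in range(n_iters):
        lit = green(t, cash) if green else range(n)  # default: all green
        pushed = np.zeros(n)
        for i in lit:
            distributed[i] += cash[i]
            pushed += cash[i] * P[i]  # send along transition probabilities
            cash[i] = 0.0
        cash += pushed
    return distributed / distributed.sum()
```

    With all nodes green each round this reduces to a cumulative power iteration; greedily lighting, say, the node with the largest residual cash would give a Gauss-Southwell-style schedule instead.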

    Red Light Green Light Method for Solving Large Markov Chains

    Discrete-time discrete-state finite Markov chains are versatile mathematical models for a wide range of real-life stochastic processes. One of the most common tasks in studies of Markov chains is computation of the stationary distribution. We propose a new, general, controlled, easily distributed algorithm for this task. The algorithm includes as special cases a wide range of known, very different, and previously disconnected methods, including power iterations, versions of Gauss-Southwell formerly restricted to substochastic matrices, and online distributed algorithms. We prove exponential convergence of our method, demonstrate its high efficiency, and derive straightforward control strategies that achieve convergence rates faster than state-of-the-art algorithms.

    Self-Evaluation Applied Mathematics 2003-2008 University of Twente

    This report contains the self-study for the research assessment of the Department of Applied Mathematics (AM) of the Faculty of Electrical Engineering, Mathematics and Computer Science (EEMCS) at the University of Twente (UT). The report provides the information for the Research Assessment Committee for Applied Mathematics, dealing with mathematical sciences at the three universities of technology in the Netherlands. It describes the state of affairs pertaining to the period 1 January 2003 to 31 December 2008.