
    Computationally Efficient Estimation of the Spectral Gap of a Markov Chain

    We consider the problem of estimating from sample paths the absolute spectral gap $\gamma_*$ of a reversible, irreducible and aperiodic Markov chain $(X_t)_{t \in \mathbb{N}}$ over a finite state space $\Omega$. We propose the ${\tt UCPI}$ (Upper Confidence Power Iteration) algorithm for this problem, a low-complexity algorithm which estimates the spectral gap in time $\mathcal{O}(n)$ and memory space $\mathcal{O}((\ln n)^2)$ given $n$ samples. This is in stark contrast with most known methods, which require at least memory space $\mathcal{O}(|\Omega|)$ and therefore cannot be applied to large state spaces. Furthermore, ${\tt UCPI}$ is amenable to parallel implementation.
    Comment: 32 pages
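The abstract contrasts UCPI with methods that hold the whole transition matrix in memory. As a point of reference only, here is a minimal sketch of that dense $\mathcal{O}(|\Omega|)$ baseline, not of the UCPI algorithm itself; the example chain and function names are hypothetical.

```python
import numpy as np

def absolute_spectral_gap(P):
    """Absolute spectral gap gamma_* = 1 - max_{i>=2} |lambda_i| of a
    reversible transition matrix P, via full eigendecomposition. This
    needs the whole matrix, i.e. at least O(|Omega|) memory -- the
    baseline the abstract contrasts UCPI against, not UCPI itself."""
    mods = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return 1.0 - mods[1]

# Hypothetical example: a lazy walk on 3 states. Its eigenvalues are
# 1, 0.25, 0.25, so gamma_* = 0.75.
P = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
print(absolute_spectral_gap(P))
```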

    Faster quantum mixing for slowly evolving sequences of Markov chains

    Markov chain methods are remarkably successful in computational physics, machine learning, and combinatorial optimization. The cost of such methods often reduces to the mixing time, i.e., the time required to reach the steady state of the Markov chain, which scales as $\delta^{-1}$, the inverse of the spectral gap. It has long been conjectured that quantum computers offer nearly generic quadratic improvements for mixing problems. However, except in special cases, quantum algorithms achieve a run-time of $\mathcal{O}(\sqrt{\delta^{-1}} \sqrt{N})$, which introduces a costly dependence on the Markov chain size $N$ not present in the classical case. Here, we re-address the problem of mixing of Markov chains when these form a slowly evolving sequence. This setting is akin to the simulated annealing setting and is commonly encountered in physics, materials science and machine learning. We provide a quantum memory-efficient algorithm with a run-time of $\mathcal{O}(\sqrt{\delta^{-1}} \sqrt[4]{N})$, neglecting logarithmic terms, which is an important improvement for large state spaces. Moreover, our algorithms output quantum encodings of distributions, which has advantages over classical outputs. Finally, we discuss the run-time bounds of mixing algorithms and show that, under certain assumptions, our algorithms are optimal.
    Comment: 20 pages, 2 figures
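The quantum routine itself is beyond a short sketch, but the classical $\delta^{-1}$ scaling it improves on is easy to see empirically. The following hedged illustration (lazy random walks on cycles; the helper and its brute-force definition of mixing time are my assumptions, not the paper's) shows the measured mixing time growing as the gap $\delta$ shrinks.

```python
import numpy as np

def mixing_time(P, pi, eps=0.25):
    """Smallest t such that the worst-start total-variation distance
    between P^t(x, .) and pi drops below eps (classical brute force)."""
    Pt = np.eye(len(pi))
    t = 0
    while True:
        tv = 0.5 * np.abs(Pt - pi).sum(axis=1).max()
        if tv < eps:
            return t
        Pt = Pt @ P
        t += 1

# Lazy random walks on cycles of growing size N: the spectral gap delta
# shrinks like 1/N^2, and the mixing time grows roughly like delta^{-1}.
for N in (8, 16, 32):
    P = 0.5 * np.eye(N)
    for i in range(N):
        P[i, (i - 1) % N] += 0.25
        P[i, (i + 1) % N] += 0.25
    pi = np.full(N, 1.0 / N)
    delta = 1.0 - np.sort(np.abs(np.linalg.eigvals(P)))[::-1][1]
    print(N, delta, mixing_time(P, pi))
```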

    A Markov Chain based method for generating long-range dependence

    This paper describes a model for generating time series which exhibit the statistical phenomenon known as long-range dependence (LRD). A Markov Modulated Process based upon an infinite Markov chain is described. The work is motivated by applications in telecommunications, where LRD is a known property of time series measured on the Internet. The process can generate a time series exhibiting LRD with known parameters and is particularly suitable for modelling internet traffic, since the time series is in terms of ones and zeros, which can be interpreted as data packets and inter-packet gaps. The method is extremely simple computationally and analytically and could prove more tractable than other methods described in the literature.
    Comment: 8 pages, 2 figures
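The abstract does not reproduce the paper's infinite-chain construction, so the sketch below is only a generic Markov-modulated on/off source of the same flavour: heavy-tailed off-periods are the standard mechanism behind LRD in such models. The state structure, the parameters `alpha` and `p_leave`, and the Pareto sojourn distribution are all my assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def mmp_binary_series(n, alpha=1.5, p_leave=0.5):
    """Hypothetical Markov-modulated 0/1 source: in the 'on' state emit
    a 1 (a packet); with probability p_leave, jump to a countdown state
    whose initial value is heavy-tailed (Pareto tail index alpha, with
    1 < alpha < 2 so the off-periods have infinite variance), emitting
    0s (inter-packet gaps) until it reaches zero."""
    out = np.zeros(n, dtype=np.int8)
    i = 0
    while i < n:
        out[i] = 1                      # 'on' state: a packet
        i += 1
        if rng.random() < p_leave:      # enter a heavy-tailed 'off' sojourn
            off = int(rng.pareto(alpha)) + 1
            i += off                    # that many 0s stay in the output
    return out

series = mmp_binary_series(100_000)
```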

    CLTs and asymptotic variance of time-sampled Markov chains

    For a Markov transition kernel $P$ and a probability distribution $\mu$ on the nonnegative integers, a time-sampled Markov chain evolves according to the transition kernel $P_\mu = \sum_k \mu(k) P^k$. In this note we obtain CLT conditions for time-sampled Markov chains and derive a spectral formula for the asymptotic variance. Using these results, we compare the efficiency of Barker's and Metropolis' algorithms in terms of asymptotic variance.
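A minimal sketch of the objects involved, under my own assumptions about setup and naming: for a geometric $\mu$ the kernel $P_\mu$ has a closed form, and for a reversible chain the CLT asymptotic variance of a function $f$ can be read off the spectrum. The toy comparison at the end reproduces the Peskun ordering, under which Metropolis has smaller asymptotic variance than Barker.

```python
import numpy as np

def time_sampled_kernel(P, beta):
    """P_mu = sum_{k>=0} mu(k) P^k for mu = Geometric(1 - beta) on
    {0, 1, ...}; the geometric series sums to (1-beta)(I - beta P)^{-1}."""
    return (1 - beta) * np.linalg.inv(np.eye(P.shape[0]) - beta * P)

def asymptotic_variance(P, pi, f):
    """Spectral formula for the asymptotic variance of a reversible P:
    sigma^2(f) = sum_{i>=2} (1 + lam_i)/(1 - lam_i) <f_centred, e_i>_pi^2,
    where e_i are the pi-orthonormal eigenvectors of P."""
    s = np.sqrt(pi)
    S = P * s[:, None] / s[None, :]       # symmetrization D P D^{-1}
    lam, U = np.linalg.eigh(S)
    coeffs = U.T @ (s * (f - pi @ f))     # <f_centred, e_i>_pi
    keep = lam < 1.0 - 1e-10              # drop the unit eigenvalue
    return float(np.sum((1 + lam[keep]) / (1 - lam[keep]) * coeffs[keep] ** 2))

# Toy comparison (hypothetical 3-state target, uniform proposal): by the
# Peskun ordering, Metropolis should show the smaller asymptotic variance.
pi = np.array([0.2, 0.3, 0.5])
f = np.arange(3.0)
Q = np.full((3, 3), 1.0 / 3.0)

def mh_kernel(accept):
    P = np.zeros((3, 3))
    for x in range(3):
        for y in range(3):
            if x != y:
                P[x, y] = Q[x, y] * accept(pi[y] / pi[x])
        P[x, x] = 1.0 - P[x].sum()
    return P

P_metropolis = mh_kernel(lambda r: min(1.0, r))    # Metropolis acceptance
P_barker = mh_kernel(lambda r: r / (1.0 + r))      # Barker acceptance
print(asymptotic_variance(P_metropolis, pi, f))    # smaller of the two
print(asymptotic_variance(P_barker, pi, f))
print(time_sampled_kernel(P_barker, beta=0.5))     # a time-sampled kernel
```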