Computationally Efficient Estimation of the Spectral Gap of a Markov Chain
We consider the problem of estimating from sample paths the absolute spectral
gap of a reversible, irreducible and aperiodic Markov chain over a finite
state space Ω. We propose the UCPI (Upper Confidence Power Iteration)
algorithm for this problem, a low-complexity algorithm which estimates the
spectral gap in time O(n) and memory space O((ln n)²) given n samples. This
is in stark contrast with most known methods, which require at least memory
space O(|Ω|), so that they cannot be applied to large state spaces.
Furthermore, UCPI is amenable to parallel implementation.
Comment: 32 pages
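The abstract does not spell the method out, but the idea behind reading off a spectral gap by power iteration can be sketched in a few lines. The following is an illustrative sketch only, not the UCPI algorithm: it assumes the transition matrix P and stationary distribution pi are known exactly, whereas UCPI works from sample paths.

```python
# Illustrative sketch (NOT the paper's UCPI algorithm): estimate the absolute
# spectral gap of a small reversible chain by power iteration, after projecting
# out the top eigenvector (the all-ones vector) in the pi-weighted inner
# product. The chain and iteration count are arbitrary choices for the demo.

def mat_vec(P, v):
    return [sum(P[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def spectral_gap_power_iteration(P, pi, iters=2000):
    """Estimate 1 - |lambda_2| for a reversible chain with stationary dist pi."""
    n = len(P)
    v = [1.0] + [0.0] * (n - 1)
    proj = sum(pi[i] * v[i] for i in range(n))
    v = [x - proj for x in v]          # remove the top-eigenvector component
    lam = 0.0
    for _ in range(iters):
        w = mat_vec(P, v)
        proj = sum(pi[i] * w[i] for i in range(n))
        w = [x - proj for x in w]      # re-project to fight round-off drift
        norm = max(abs(x) for x in w) or 1.0
        lam = norm / (max(abs(x) for x in v) or 1.0)  # |lambda_2| estimate
        v = [x / norm for x in w]
    return 1.0 - lam

# Two-state chain [[1-a, a], [b, 1-b]]: eigenvalues are 1 and 1-a-b,
# so the exact absolute spectral gap is 1 - |1-a-b|.
a, b = 0.3, 0.2
P = [[1 - a, a], [b, 1 - b]]
pi = [b / (a + b), a / (a + b)]        # stationary distribution
gap = spectral_gap_power_iteration(P, pi)   # gap ≈ 0.5 = 1 - |1-a-b|
```

The two-state case makes the estimate easy to check against the closed-form eigenvalues; for larger chains the same loop converges at a rate governed by the ratio of the subdominant eigenvalues.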
Faster quantum mixing for slowly evolving sequences of Markov chains
Markov chain methods are remarkably successful in computational physics,
machine learning, and combinatorial optimization. The cost of such methods
often reduces to the mixing time, i.e., the time required to reach the steady
state of the Markov chain, which scales as δ⁻¹, the inverse of the
spectral gap δ. It has long been conjectured that quantum computers offer nearly
generic quadratic improvements for mixing problems. However, except in special
cases, quantum algorithms achieve a run-time of O(√(δ⁻¹)·√N), which introduces
a costly dependence on the Markov chain size N not present in the classical
case. Here, we re-address the problem of mixing of
Markov chains when these form a slowly evolving sequence. This setting is akin
to the simulated annealing setting and is commonly encountered in physics,
material sciences and machine learning. We provide a quantum memory-efficient
algorithm with a run-time of O(√(δ⁻¹)·N^(1/4)),
neglecting logarithmic terms, which is an important improvement for large state
spaces. Moreover, our algorithms output quantum encodings of distributions,
which has advantages over classical outputs. Finally, we discuss the run-time
bounds of mixing algorithms and show that, under certain assumptions, our
algorithms are optimal.
Comment: 20 pages, 2 figures
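The δ⁻¹ scaling quoted above is easy to see numerically in the classical setting. The sketch below (an arbitrary 3-state symmetric chain, not from the paper) iterates the distribution and counts steps until the total-variation distance to stationarity drops below ε; since the distance decays like (1 − δ)ᵗ, the count grows like δ⁻¹ ln(1/ε).

```python
# Classical illustration of the delta^{-1} scaling of mixing time: evolve a
# distribution under an arbitrary 3-state symmetric chain and count steps to
# reach total-variation distance eps from stationarity.

def step(mu, P):
    n = len(mu)
    return [sum(mu[i] * P[i][j] for i in range(n)) for j in range(n)]

def tv_distance(mu, pi):
    return 0.5 * sum(abs(m - p) for m, p in zip(mu, pi))

# Symmetric chain => stationary distribution is uniform; its eigenvalues are
# 1 and 0.25 (twice), so the spectral gap is delta = 0.75 and the distance
# to stationarity shrinks by a factor (1 - delta) = 0.25 per step.
P = [[0.5, 0.25, 0.25],
     [0.25, 0.5, 0.25],
     [0.25, 0.25, 0.5]]
pi = [1 / 3, 1 / 3, 1 / 3]

mu = [1.0, 0.0, 0.0]                   # start from a point mass
t, eps = 0, 1e-3
while tv_distance(mu, pi) > eps:
    mu = step(mu, P)
    t += 1
# t == 5 here: (2/3) * 0.25**5 is the first value below eps
```

Shrinking δ (e.g. by making the chain lazier) makes the loop count grow in proportion to δ⁻¹, which is exactly the classical cost the quantum algorithms above aim to improve to √(δ⁻¹).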
A Markov Chain based method for generating long-range dependence
This paper describes a model for generating time series which exhibit the
statistical phenomenon known as long-range dependence (LRD). A Markov Modulated
Process based upon an infinite Markov chain is described. The work described is
motivated by applications in telecommunications where LRD is a known property
of time-series measured on the internet. The process can generate a time series
exhibiting LRD with known parameters and is particularly suitable for modelling
internet traffic since the time series is in terms of ones and zeros which can
be interpreted as data packets and inter-packet gaps. The method is extremely
simple computationally and analytically and could prove more tractable than
other methods described in the literature.
Comment: 8 pages, 2 figures
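A common way to realize such a 0/1 process is an ON/OFF source with heavy-tailed sojourn times. The sketch below is a simplified stand-in for the paper's infinite-chain construction, not the paper's model: the function names and the Pareto-style duration law are illustrative assumptions. Sojourn tails with index α ∈ (1, 2) give infinite variance and hence long-range dependence.

```python
import random

# Hedged sketch (not the paper's Markov Modulated Process): an ON/OFF source
# alternating between 1-runs (packets) and 0-runs (inter-packet gaps), with
# heavy-tailed run lengths, produces a 0/1 series exhibiting LRD.

def heavy_tailed_duration(alpha, rng):
    """Discrete Pareto-like duration: P(D > d) decays roughly like d^(-alpha)."""
    u = 1.0 - rng.random()             # u in (0, 1], avoids division by zero
    return int(u ** (-1.0 / alpha)) + 1

def on_off_series(n, alpha=1.5, seed=0):
    """Generate n samples of an alternating ON (1) / OFF (0) source."""
    rng = random.Random(seed)
    series, state = [], 1
    while len(series) < n:
        d = heavy_tailed_duration(alpha, rng)
        series.extend([state] * d)
        state = 1 - state              # alternate ON and OFF periods
    return series[:n]

ts = on_off_series(10_000)             # a 0/1 series suitable as toy traffic
```

With α < 2 the run-length variance is infinite, which is the standard mechanism by which ON/OFF sources produce long-range dependent aggregate traffic; the paper's infinite-chain construction additionally gives known LRD parameters in closed form.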
CLTs and asymptotic variance of time-sampled Markov chains
For a Markov transition kernel P and a probability distribution
μ on the nonnegative integers, a time-sampled Markov chain evolves according
to the transition kernel P_μ = Σ_k μ(k) P^k. In this note we obtain CLT
conditions for time-sampled Markov chains and derive a spectral formula
for the asymptotic variance. Using these results we compare the efficiency of
Barker's and Metropolis algorithms in terms of asymptotic variance.
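The time-sampled kernel is straightforward to compute for a finite chain and a finitely supported μ. The sketch below (an arbitrary 2-state chain and a truncated geometric μ, not taken from the note) builds P_μ = Σ_k μ(k) P^k and illustrates that it preserves the stationary distribution of P, since πP^k = π for every k.

```python
# Numeric sketch of the time-sampled kernel P_mu = sum_k mu(k) P^k.
# The 2-state chain and truncated geometric mu are illustrative choices.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def time_sampled_kernel(P, mu):
    """P_mu = sum_k mu[k] * P^k, with mu given as a finite list of weights."""
    n = len(P)
    Pk = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # P^0
    Pmu = [[0.0] * n for _ in range(n)]
    for w in mu:
        for i in range(n):
            for j in range(n):
                Pmu[i][j] += w * Pk[i][j]
        Pk = mat_mul(Pk, P)            # advance to the next power of P
    return Pmu

P = [[0.7, 0.3], [0.2, 0.8]]
# mu(k) proportional to (1/2)^(k+1), renormalized over k = 0..9
raw = [0.5 ** (k + 1) for k in range(10)]
s = sum(raw)
mu = [w / s for w in raw]
Pmu = time_sampled_kernel(P, mu)

# P_mu is again a stochastic matrix with the same stationary distribution
# pi = (0.4, 0.6) as P, because pi P^k = pi for every k and mu sums to 1.
pi = [0.4, 0.6]
```

This is why time-sampling is a valid MCMC device: it changes the spectrum of the kernel (and hence the asymptotic variance the note analyzes) without changing the target distribution.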