Infinite dimensional entangled Markov chains
We continue the analysis of nontrivial examples of quantum Markov processes.
This is done by applying the construction of entangled Markov chains obtained
from classical Markov chains with infinite state--space. The formula giving the
joint correlations arises from the corresponding classical formula by replacing
the usual matrix multiplication by the Schur multiplication. In this way, we
provide nontrivial examples of entangled Markov chains on the infinite
tensor product \bar\otimes_J M, M being any infinite dimensional type I
factor, J a finite interval of Z, and the bar the von Neumann tensor
product between von Neumann algebras. We then have new nontrivial examples of
quantum random walks which could play a r\^ole in quantum information theory.
In view of applications to quantum statistical mechanics too, we see that the
ergodic type of an entangled Markov chain is completely determined by the
corresponding ergodic type of the underlying classical chain, provided that the
latter admits an invariant probability distribution. This result parallels the
corresponding one relative to the finite dimensional case.
Finally, starting from random walks on discrete ICC groups, we exhibit
examples of quantum Markov processes based on type II_1 von Neumann factors.
Comment: 16 pages
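The abstract's key construction replaces ordinary matrix multiplication by the Schur (entrywise) product in the classical joint-correlation formula. A minimal numpy sketch of the distinction between the two products, using a hypothetical 2-state transition matrix (the matrix P is an illustrative assumption, not from the paper):

```python
import numpy as np

# Transition matrix of an illustrative 2-state classical Markov chain.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Ordinary matrix multiplication: two-step transition probabilities,
# (P2)[i, k] = sum_j P[i, j] * P[j, k].
P2 = P @ P

# Schur (entrywise) multiplication, the operation that replaces the
# matrix product in the entangled-chain correlation formula:
# S[i, j] = P[i, j] * P[i, j].
S = P * P

print(P2)   # [[0.85, 0.15], [0.60, 0.40]]
print(S)    # [[0.81, 0.01], [0.16, 0.36]]
```

The two operations coincide only in trivial cases, which is why the substitution produces genuinely different (entangled) correlations.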
Mixing Time of the Rudvalis Shuffle
We extend a technique for lower-bounding the mixing time of card-shuffling
Markov chains, and use it to bound the mixing time of the Rudvalis Markov
chain, as well as two variants considered by Diaconis and Saloff-Coste. We show
that in each case Theta(n^3 log n) shuffles are required for the permutation to
randomize, which matches (up to constants) previously known upper bounds. In
contrast, for the two variants, the mixing time of an individual card is only
Theta(n^2) shuffles.
Comment: 9 pages
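A simulation sketch of the shuffle and the total variation distance used to define mixing. The step rule below assumes the usual description of the Rudvalis shuffle (top card moved to the bottom or second-from-bottom position, each with probability 1/2); the paper's variants differ in this rule:

```python
import random

def rudvalis_step(deck, rng=random):
    """One step of the Rudvalis shuffle (as usually described): move the
    top card to the bottom or to the second-from-bottom position,
    each with probability 1/2."""
    top = deck.pop(0)
    if rng.random() < 0.5:
        deck.append(top)                 # bottom position
    else:
        deck.insert(len(deck) - 1, top)  # second from bottom

def tv_from_uniform(counts, trials, n):
    """Total variation distance between an empirical distribution over
    n positions (counts collected from `trials` samples) and uniform."""
    return 0.5 * sum(abs(c / trials - 1.0 / n) for c in counts)

# Estimate how mixed the position of a single tracked card is after t steps.
n, t, trials = 8, 200, 2000
rng = random.Random(0)
counts = [0] * n
for _ in range(trials):
    deck = list(range(n))
    for _ in range(t):
        rudvalis_step(deck, rng)
    counts[deck.index(0)] += 1
print(tv_from_uniform(counts, trials, n))   # estimated TV distance from uniform
```

Tracking a single card, as here, probes the Theta(n^2) single-card bound; the Theta(n^3 log n) bound concerns the full permutation.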
Quantum speedup of classical mixing processes
Most approximation algorithms for #P-complete problems (e.g., evaluating the
permanent of a matrix or the volume of a polytope) work by reduction to the
problem of approximate sampling from a distribution pi over a large set
S. This problem is solved using the {\em Markov chain Monte Carlo} method: a
sparse, reversible Markov chain P on S with stationary distribution pi
is run to near equilibrium. The running time of this random walk algorithm, the
so-called {\em mixing time} of P, is O((1/delta) log(1/pi_*)) as shown
by Aldous, where delta is the spectral gap of P and pi_* is the minimum
value of pi. A natural question is whether a speedup of this classical
method to O(sqrt(1/delta) log(1/pi_*)), roughly the diameter of the graph
underlying P, is possible using {\em quantum walks}.
We provide evidence for this possibility using quantum walks that {\em
decohere} under repeated randomized measurements. We show: (a) decoherent
quantum walks always mix, just like their classical counterparts, (b) the
mixing time is a robust quantity, essentially invariant under any smooth form
of decoherence, and (c) the mixing time of the decoherent quantum walk on a
periodic lattice Z_n^d is O(n d log d), which is indeed
O(sqrt(1/delta) log(1/pi_*)) and is asymptotically no worse than the
diameter of Z_n^d (the obvious lower bound) up to at most a logarithmic
factor.
Comment: 13 pages; v2 revised several parts
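The classical quantities the abstract invokes (the spectral gap delta, the minimum stationary probability pi_*, and the Aldous-style mixing bound (1/delta) log(1/pi_*)) can be computed directly for a small example. A sketch for the lazy simple random walk on a cycle, where the gap is Theta(1/n^2) and the bound is therefore roughly n^2 log n, against a diameter of only n/2:

```python
import numpy as np

def lazy_cycle_walk(n):
    """Transition matrix of the lazy simple random walk on the n-cycle:
    stay with probability 1/2, move to each neighbor with probability 1/4."""
    P = np.zeros((n, n))
    for i in range(n):
        P[i, i] = 0.5
        P[i, (i + 1) % n] += 0.25
        P[i, (i - 1) % n] += 0.25
    return P

n = 64
P = lazy_cycle_walk(n)
eigs = np.sort(np.linalg.eigvalsh(P))  # P is symmetric, so eigvalsh applies
delta = 1.0 - eigs[-2]                 # spectral gap: 1 minus second-largest eigenvalue
pi_star = 1.0 / n                      # stationary distribution is uniform on the cycle
bound = (1.0 / delta) * np.log(1.0 / pi_star)  # Aldous-style mixing-time bound
print(delta, bound)
```

For the lazy cycle walk the gap is exactly (1 - cos(2*pi/n))/2, so the computed `delta` can be checked against the closed form; the quadratic-in-n mixing bound versus the linear diameter is precisely the gap a quantum speedup would close.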
Analysis of top to bottom-k shuffles
A deck of n cards is shuffled by repeatedly moving the top card to one of
the bottom k positions uniformly at random. We give upper and lower bounds
on the total variation mixing time for this shuffle as k ranges from a
constant to n. We also consider a symmetric variant of this shuffle in which
at each step either the top card is randomly inserted into the bottom k
positions or a random card from the bottom k positions is moved to the top.
For this reversible shuffle we derive bounds on the mixing time. Finally,
we transfer mixing time estimates for the above shuffles to the lazy top to
bottom-k walks that move with probability 1/2 at each step.
Comment: Published at http://dx.doi.org/10.1214/10505160500000062 in the
Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute
of Mathematical Statistics (http://www.imstat.org)
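The shuffle itself is a one-line update. A minimal sketch, assuming a deck indexed top-to-bottom so that the bottom k positions are the last k indices:

```python
import random

def top_to_bottom_k(deck, k, rng=random):
    """One step of the top to bottom-k shuffle: remove the top card and
    reinsert it at a uniformly random one of the bottom k positions."""
    n = len(deck)
    top = deck.pop(0)
    # Insertion index n-k..n-1 places the card k-th from bottom..bottom.
    deck.insert(rng.randrange(n - k, n), top)
    return deck
```

With k = 1 this is the deterministic cyclic top-to-bottom move, and with k = n the card goes to a uniformly random position; the lazy variants in the paper apply such a step only with probability 1/2.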
Distributed Averaging via Lifted Markov Chains
Motivated by applications of distributed linear estimation, distributed
control and distributed optimization, we consider the question of designing
linear iterative algorithms for computing the average of numbers in a network.
Specifically, our interest is in designing such an algorithm with the fastest
rate of convergence given the topological constraints of the network. As the
main result of this paper, we design an algorithm with the fastest possible
rate of convergence using a non-reversible Markov chain on the given network
graph. We construct such a Markov chain by transforming the standard Markov
chain, which is obtained using the Metropolis-Hastings method. We call this
novel transformation pseudo-lifting. We apply our method to graphs with
geometry, or graphs with doubling dimension. Specifically, the convergence time
of our algorithm (equivalently, the mixing time of our Markov chain) is
proportional to the diameter of the network graph and hence optimal. As a
byproduct, our result provides the fastest mixing Markov chain given the
network topological constraints, and should naturally find applications
in the context of distributed optimization, estimation, and control.
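The "standard Markov chain obtained using the Metropolis-Hastings method" that the paper starts from can be realized with the usual Metropolis weights, which give a symmetric doubly stochastic matrix respecting the graph; iterating x <- W x then drives every node to the average. A minimal sketch of that baseline (the pseudo-lifting construction itself is more involved and not reproduced here):

```python
import numpy as np

def metropolis_weights(adj):
    """Symmetric doubly stochastic Metropolis weight matrix for an
    undirected graph given by a 0/1 adjacency matrix: the edge (i, j)
    gets weight 1/(1 + max(deg_i, deg_j)), the remainder stays on i."""
    n = len(adj)
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

# Path graph on 5 nodes; every node only averages with its neighbors,
# yet all values converge to the global average (here 2.0).
adj = np.zeros((5, 5), dtype=int)
for i in range(4):
    adj[i, i + 1] = adj[i + 1, i] = 1
W = metropolis_weights(adj)
x = np.array([10.0, 0.0, 0.0, 0.0, 0.0])
for _ in range(500):
    x = W @ x
print(x)
```

The slow convergence of this reversible baseline on graphs with geometry (its mixing time scales like the square of the diameter on a path) is exactly what the paper's non-reversible pseudo-lifted chain improves to diameter time.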