
    Infinite dimensional entangled Markov chains

    We continue the analysis of nontrivial examples of quantum Markov processes. This is done by applying the construction of entangled Markov chains obtained from classical Markov chains with infinite state space. The formula giving the joint correlations arises from the corresponding classical formula by replacing the usual matrix multiplication by the Schur multiplication. In this way, we provide nontrivial examples of entangled Markov chains on $\overline{\bigcup_{J\subset\mathbb{Z}}\bar{\otimes}_{J}F}^{\,C^{*}}$, $F$ being any infinite dimensional type $I$ factor, $J$ a finite interval of $\mathbb{Z}$, and the bar the von Neumann tensor product between von Neumann algebras. We then have new nontrivial examples of quantum random walks which could play a rôle in quantum information theory. In view of applications to quantum statistical mechanics too, we see that the ergodic type of an entangled Markov chain is completely determined by the corresponding ergodic type of the underlying classical chain, provided that the latter admits an invariant probability distribution. This result parallels the corresponding one for the finite dimensional case. Finally, starting from random walks on discrete ICC groups, we exhibit examples of quantum Markov processes based on type $II_1$ von Neumann factors. Comment: 16 pages.
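    The step the abstract singles out, replacing ordinary matrix multiplication by Schur (entrywise) multiplication, is easy to illustrate numerically. Below is a minimal sketch with a toy 3-state classical transition matrix; the matrix and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy 3-state classical transition matrix (rows sum to 1); purely illustrative.
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])

# Ordinary matrix multiplication: two-step transition probabilities of the
# classical chain, (P @ P)[i, j] = sum_k P[i, k] * P[k, j].
two_step_classical = P @ P

# Schur (entrywise) multiplication of the same data: the operation that, in
# the entangled-chain construction, replaces the matrix product when forming
# joint correlations from the classical formula.
schur = P * P   # (P ∘ P)[i, j] = P[i, j] ** 2

print("matrix product:\n", two_step_classical)
print("Schur product:\n", schur)
```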

    Mixing Time of the Rudvalis Shuffle

    We extend a technique for lower-bounding the mixing time of card-shuffling Markov chains, and use it to bound the mixing time of the Rudvalis Markov chain, as well as two variants considered by Diaconis and Saloff-Coste. We show that in each case $\Theta(n^3 \log n)$ shuffles are required for the permutation to randomize, which matches (up to constants) previously known upper bounds. In contrast, for the two variants, the mixing time of an individual card is only $\Theta(n^2)$ shuffles. Comment: 9 pages.
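    For readers who want to experiment, here is a small simulation sketch. It assumes the commonly cited description of the Rudvalis shuffle, in which the top card is moved either to the bottom or to the position just above the bottom, each with probability 1/2; treat that rule, the deck size, and the step count as assumptions rather than quotes from the paper.

```python
import random

def rudvalis_step(deck):
    # One shuffle step under the assumed rule: move the top card either to the
    # bottom of the deck or to the position just above the bottom, each with
    # probability 1/2.  deck[0] is the top card, deck[-1] the bottom card.
    top = deck.pop(0)
    if random.random() < 0.5:
        deck.append(top)                 # new bottom card
    else:
        deck.insert(len(deck) - 1, top)  # second position from the bottom
    return deck

n = 52
deck = list(range(n))
for _ in range(n ** 2):                  # an n^2-step experiment
    rudvalis_step(deck)
print("position of the original top card:", deck.index(0))
```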

    Quantum speedup of classical mixing processes

    Most approximation algorithms for #P-complete problems (e.g., evaluating the permanent of a matrix or the volume of a polytope) work by reduction to the problem of approximate sampling from a distribution $\pi$ over a large set $S$. This problem is solved using the Markov chain Monte Carlo method: a sparse, reversible Markov chain $P$ on $S$ with stationary distribution $\pi$ is run to near equilibrium. The running time of this random walk algorithm, the so-called mixing time of $P$, is $O(\delta^{-1} \log 1/\pi_*)$ as shown by Aldous, where $\delta$ is the spectral gap of $P$ and $\pi_*$ is the minimum value of $\pi$. A natural question is whether a speedup of this classical method to $O(\sqrt{\delta^{-1}} \log 1/\pi_*)$, the diameter of the graph underlying $P$, is possible using quantum walks. We provide evidence for this possibility using quantum walks that decohere under repeated randomized measurements. We show: (a) decoherent quantum walks always mix, just like their classical counterparts; (b) the mixing time is a robust quantity, essentially invariant under any smooth form of decoherence; and (c) the mixing time of the decoherent quantum walk on a periodic lattice $\mathbb{Z}_n^d$ is $O(nd \log d)$, which is indeed $O(\sqrt{\delta^{-1}} \log 1/\pi_*)$ and is asymptotically no worse than the diameter of $\mathbb{Z}_n^d$ (the obvious lower bound) up to at most a logarithmic factor. Comment: 13 pages; v2 revised several parts.
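    The classical quantities named in the abstract can be computed directly for a small example. The sketch below uses a lazy random walk on an n-cycle (a chain chosen purely for illustration, not one analyzed in the paper) and evaluates the gap-based mixing scale $\delta^{-1}\log 1/\pi_*$ alongside the square-root scale a quantum speedup would target.

```python
import numpy as np

# Lazy simple random walk on the n-cycle: a sparse, reversible chain with
# uniform stationary distribution.  Illustrative only.
n = 16
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i + 1) % n] += 0.25
    P[i, (i - 1) % n] += 0.25

eigs = np.sort(np.linalg.eigvalsh(P))[::-1]  # P is symmetric here
delta = eigs[0] - eigs[1]                    # spectral gap
pi_min = 1.0 / n                             # minimum stationary probability

classical_scale = (1.0 / delta) * np.log(1.0 / pi_min)            # ~ classical mixing time scale
quantum_target = np.sqrt(1.0 / delta) * np.log(1.0 / pi_min)      # hoped-for quantum scale
print(f"gap = {delta:.4f}, classical scale ~ {classical_scale:.1f}, "
      f"square-root scale ~ {quantum_target:.1f}")
```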

    Analysis of top to bottom-k shuffles

    A deck of $n$ cards is shuffled by repeatedly moving the top card to one of the bottom $k_n$ positions uniformly at random. We give upper and lower bounds on the total variation mixing time for this shuffle as $k_n$ ranges from a constant to $n$. We also consider a symmetric variant of this shuffle in which at each step either the top card is randomly inserted into the bottom $k_n$ positions or a random card from the bottom $k_n$ positions is moved to the top. For this reversible shuffle we derive bounds on the $L^2$ mixing time. Finally, we transfer mixing time estimates for the above shuffles to the lazy top to bottom-$k$ walks that move with probability 1/2 at each step. Comment: Published at http://dx.doi.org/10.1214/10505160500000062 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
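    The move described in the first sentence is straightforward to simulate. A short sketch follows; the deck size, the value of $k$, and the number of steps are illustrative choices, not parameters from the paper.

```python
import random

def top_to_bottom_k_step(deck, k):
    # One step of the shuffle from the abstract: remove the top card and
    # reinsert it uniformly at random into one of the bottom k positions.
    # deck[0] is the top of the deck, deck[-1] the bottom card.
    n = len(deck)
    top = deck.pop(0)
    j = random.randint(n - k, n - 1)  # target slot among the bottom k positions
    deck.insert(j, top)
    return deck

n, k = 52, 4                          # illustrative deck size and k
deck = list(range(n))
for _ in range(5000):                 # arbitrary number of shuffles
    top_to_bottom_k_step(deck, k)
print(deck[:10])
```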

    Distributed Averaging via Lifted Markov Chains

    Motivated by applications in distributed linear estimation, distributed control and distributed optimization, we consider the question of designing linear iterative algorithms for computing the average of numbers in a network. Specifically, our interest is in designing such an algorithm with the fastest rate of convergence given the topological constraints of the network. As the main result of this paper, we design an algorithm with the fastest possible rate of convergence using a non-reversible Markov chain on the given network graph. We construct such a Markov chain by transforming the standard Markov chain, which is obtained using the Metropolis-Hastings method. We call this novel transformation pseudo-lifting. We apply our method to graphs with geometry, or graphs with doubling dimension. Specifically, the convergence time of our algorithm (equivalently, the mixing time of our Markov chain) is proportional to the diameter of the network graph and hence optimal. As a byproduct, our result provides the fastest mixing Markov chain given the network topological constraints, and should naturally find applications in the context of distributed optimization, estimation and control.
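    For context, the reversible baseline that the paper's pseudo-lifting transformation starts from, a linear averaging iteration built with Metropolis-Hastings style weights, can be sketched in a few lines. The graph, weights, and node values below are illustrative assumptions, and the pseudo-lifting construction itself is not reproduced here.

```python
import numpy as np

# Reversible baseline for distributed averaging: each node repeatedly replaces
# its value by a weighted mix of its own and its neighbours' values, using
# symmetric Metropolis-Hastings style weights.  Illustrative graph and numbers.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]   # a small example graph
n = 4
deg = np.zeros(n, dtype=int)
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

W = np.zeros((n, n))
for u, v in edges:
    w = 1.0 / (1 + max(deg[u], deg[v]))   # symmetric Metropolis-style weight
    W[u, v] = W[v, u] = w
np.fill_diagonal(W, 1.0 - W.sum(axis=1))  # self-loops so each row sums to 1

x = np.array([3.0, 7.0, 1.0, 5.0])        # initial node values; true average is 4.0
for _ in range(200):
    x = W @ x                             # one synchronous averaging round
print(x)                                  # every entry approaches 4.0
```

    Because W here is symmetric and doubly stochastic, the iteration converges to the exact average; the paper's point is that such reversible chains can be slow, and its pseudo-lifted non-reversible chain achieves diameter-scale convergence instead.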