
    Analysis of top to bottom-$k$ shuffles

    A deck of $n$ cards is shuffled by repeatedly moving the top card to one of the bottom $k_n$ positions uniformly at random. We give upper and lower bounds on the total variation mixing time for this shuffle as $k_n$ ranges from a constant to $n$. We also consider a symmetric variant of this shuffle in which at each step either the top card is randomly inserted into the bottom $k_n$ positions or a random card from the bottom $k_n$ positions is moved to the top. For this reversible shuffle we derive bounds on the $L^2$ mixing time. Finally, we transfer mixing time estimates for the above shuffles to the lazy top to bottom-$k$ walks that move with probability 1/2 at each step.
    Comment: Published at http://dx.doi.org/10.1214/10505160500000062 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org)
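
    As a concrete illustration of the dynamics (not taken from the paper), the sketch below simulates one step of the top to bottom-$k$ shuffle and of its symmetric variant; the deck size, $k$, and number of steps are arbitrary choices.

```python
import random

def top_to_bottom_k(deck, k):
    """One step: move the top card into one of the bottom k positions,
    chosen uniformly at random."""
    n = len(deck)
    top = deck.pop(0)
    # after reinsertion the card should sit at one of positions n-k, ..., n-1
    pos = random.randrange(n - k, n)
    deck.insert(pos, top)
    return deck

def symmetric_step(deck, k):
    """Symmetric variant: with probability 1/2 do the move above, otherwise
    move a uniformly chosen card from the bottom k positions to the top."""
    n = len(deck)
    if random.random() < 0.5:
        return top_to_bottom_k(deck, k)
    pos = random.randrange(n - k, n)
    card = deck.pop(pos)
    deck.insert(0, card)
    return deck

if __name__ == "__main__":
    n, k, steps = 52, 5, 1000   # arbitrary illustration parameters
    deck = list(range(n))
    for _ in range(steps):
        deck = symmetric_step(deck, k)
    print(deck[:10])
```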

    Location-Aided Fast Distributed Consensus in Wireless Networks

    Existing works on distributed consensus explore linear iterations based on reversible Markov chains, which leads to slow convergence of the algorithms. It has been observed that by overcoming the diffusive behavior of reversible chains, certain nonreversible chains lifted from reversible ones mix substantially faster than the original chains. In this paper, we investigate the idea of accelerating distributed consensus via lifting Markov chains, and propose a class of Location-Aided Distributed Averaging (LADA) algorithms for wireless networks, where nodes' coarse location information is used to construct nonreversible chains that facilitate distributed computing and cooperative processing. First, two general pseudo-algorithms are presented to illustrate the notion of distributed averaging through chain-lifting. These pseudo-algorithms are then instantiated through one LADA algorithm on grid networks and one on general wireless networks. For a $k\times k$ grid network, the proposed LADA algorithm achieves an $\epsilon$-averaging time of $O(k\log(\epsilon^{-1}))$. Based on this algorithm, in a wireless network with transmission range $r$, an $\epsilon$-averaging time of $O(r^{-1}\log(\epsilon^{-1}))$ can be attained through a centralized algorithm. Subsequently, we present a fully distributed LADA algorithm for wireless networks, which utilizes only the direction information of neighbors to construct nonreversible chains. It is shown that this distributed LADA algorithm achieves the same scaling law in averaging time as the centralized scheme. Finally, we propose a cluster-based LADA (C-LADA) algorithm, which, requiring no central coordination, provides the additional benefit of reduced message complexity compared with the distributed LADA algorithm.
    Comment: 44 pages, 14 figures. Submitted to IEEE Transactions on Information Theory
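
    The speed-up available from lifting can be seen on a toy example: the sketch below compares a lazy reversible random walk on a cycle with a nonreversible lifted walk that keeps a direction of motion and reverses it only rarely. This is only meant to illustrate the chain-lifting idea behind LADA, not the LADA algorithms themselves; the cycle size, reversal probability, and step count are arbitrary.

```python
import numpy as np

def lazy_walk_tv(m, steps):
    """Lazy reversible walk on a cycle of m nodes: stay w.p. 1/2,
    otherwise step to a uniformly chosen neighbour.  Returns the total
    variation distance to the uniform distribution after each step."""
    P = np.zeros((m, m))
    for i in range(m):
        P[i, i] = 0.5
        P[i, (i + 1) % m] += 0.25
        P[i, (i - 1) % m] += 0.25
    mu = np.zeros(m); mu[0] = 1.0
    out = []
    for _ in range(steps):
        mu = mu @ P
        out.append(0.5 * np.abs(mu - 1.0 / m).sum())
    return out

def lifted_walk_tv(m, steps):
    """Nonreversible lifted walk: state = (position, direction); keep moving
    in the current direction, reversing it with probability 1/m.  The uniform
    distribution on positions is still stationary for the position marginal."""
    P = np.zeros((2 * m, 2 * m))
    for i in range(m):
        # direction +1 occupies states 0..m-1, direction -1 occupies m..2m-1
        P[i, (i + 1) % m] = 1 - 1.0 / m
        P[i, m + (i - 1) % m] = 1.0 / m
        P[m + i, m + (i - 1) % m] = 1 - 1.0 / m
        P[m + i, (i + 1) % m] = 1.0 / m
    mu = np.zeros(2 * m); mu[0] = 0.5; mu[m] = 0.5
    out = []
    for _ in range(steps):
        mu = mu @ P
        marginal = mu[:m] + mu[m:]
        out.append(0.5 * np.abs(marginal - 1.0 / m).sum())
    return out

if __name__ == "__main__":
    m, steps = 50, 400
    print("lazy TV after %d steps:   %.3f" % (steps, lazy_walk_tv(m, steps)[-1]))
    print("lifted TV after %d steps: %.3f" % (steps, lifted_walk_tv(m, steps)[-1]))
```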

    Spectral gap of nonreversible Markov chains

    We define the spectral gap of a Markov chain on a finite state space as the second-smallest singular value of the generator of the chain, generalizing the usual definition of spectral gap for reversible chains. We then define the relaxation time of the chain as the inverse of this spectral gap, and show that this relaxation time can be characterized, for any Markov chain, as the time required for convergence of empirical averages. This relaxation time is related to the Cheeger constant and the mixing time of the chain through inequalities similar to those in the reversible case, and a path argument can be used to obtain upper bounds. Several examples are worked out. An interesting finding from the examples is that the time for convergence of empirical averages in nonreversible chains can often be substantially smaller than the mixing time.
    Comment: 40 pages. Minor corrections and simplifications in this revision
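
    A minimal numerical sketch of the definition as stated in the abstract: below, the generator of a discrete-time chain with transition matrix $P$ is taken to be $P - I$ (our convention), the spectral gap is the second-smallest singular value of that matrix, and the relaxation time is its inverse. The example chain is doubly stochastic, so the stationary distribution is uniform and a plain Euclidean SVD suffices; for a general stationary distribution the paper's definition may involve a reweighting that is omitted here.

```python
import numpy as np

def relaxation_time(P):
    """Second-smallest singular value of the generator L = P - I, and its
    inverse (the relaxation time in the sense sketched above)."""
    L = P - np.eye(P.shape[0])
    s = np.linalg.svd(L, compute_uv=False)  # singular values, descending
    gap = np.sort(s)[1]                     # second smallest (smallest is ~0)
    return gap, 1.0 / gap

def cycle_walk(m, p=0.8):
    """Nonreversible walk on a cycle: step +1 w.p. p, -1 w.p. 1-p.
    Doubly stochastic, so the uniform distribution is stationary."""
    P = np.zeros((m, m))
    for i in range(m):
        P[i, (i + 1) % m] = p
        P[i, (i - 1) % m] = 1 - p
    return P

if __name__ == "__main__":
    P = cycle_walk(20)
    gap, trel = relaxation_time(P)
    print("spectral gap: %.4f   relaxation time: %.1f" % (gap, trel))
```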

    Uniform Chernoff and Dvoretzky-Kiefer-Wolfowitz-type inequalities for Markov chains and related processes

    We observe that the technique of Markov contraction can be used to establish measure concentration for a broad class of non-contracting chains. In particular, geometric ergodicity provides a simple and versatile framework. This leads to a short, elementary proof of a general concentration inequality for Markov and hidden Markov chains (HMM), which supersedes some of the known results and easily extends to other processes such as Markov trees. As applications, we give a Dvoretzky-Kiefer-Wolfowitz-type inequality and a uniform Chernoff bound. All of our bounds are dimension-free and hold for countably infinite state spaces.
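
    To make the quantity controlled by the Dvoretzky-Kiefer-Wolfowitz-type bound concrete, the sketch below runs a simple two-state Markov chain and records the uniform deviation between the empirical distribution function of the trajectory and the stationary one; the chain, its parameters, and the sample sizes are arbitrary, and no constant from the paper is used.

```python
import numpy as np

def sample_two_state_chain(n, a=0.3, b=0.1, rng=None):
    """Two-state chain on {0, 1}: P(0->1) = a, P(1->0) = b.
    Stationary distribution: pi = (b/(a+b), a/(a+b))."""
    rng = rng or np.random.default_rng(0)
    x, path = 0, np.empty(n, dtype=int)
    for t in range(n):
        path[t] = x
        flip = a if x == 0 else b
        if rng.random() < flip:
            x = 1 - x
    return path

def uniform_cdf_deviation(path, a=0.3, b=0.1):
    """sup_t |F_n(t) - F(t)| for the two-point state space; on {0, 1}
    the supremum is attained at t = 0."""
    pi0 = b / (a + b)
    emp0 = np.mean(path == 0)
    return abs(emp0 - pi0)

if __name__ == "__main__":
    for n in (100, 1000, 10000):
        dev = uniform_cdf_deviation(sample_two_state_chain(n))
        print("n = %6d   sup |F_n - F| = %.4f" % (n, dev))
```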

    Non-reversible Metropolis-Hastings

    The classical Metropolis-Hastings (MH) algorithm can be extended to generate non-reversible Markov chains. This is achieved by modifying the acceptance probability using the notion of a vorticity matrix. Results from the literature on asymptotic variance, large deviations theory, and mixing time are reviewed, and a large deviations result is adapted, to explain how non-reversible Markov chains have favorable properties in these respects. We provide an application of non-reversible Metropolis-Hastings (NRMH) in a continuous setting by developing the necessary theory and applying it, as first examples, to Gaussian distributions in three and nine dimensions. The empirical autocorrelation and estimated asymptotic variance for NRMH applied to these examples show significant improvement compared to MH with identical stepsize.
    Comment: in Statistics and Computing, 201
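
    A toy discrete illustration of the vorticity idea (the paper's application is in a continuous setting): on a cycle with a uniform target, a skew-symmetric vorticity matrix with zero row sums modifies the Metropolis-Hastings acceptance probability, and the resulting non-reversible chain still leaves the target invariant. The specific acceptance rule, target, proposal, and vorticity strength below are our own illustrative choices and not necessarily the paper's exact construction.

```python
import numpy as np

def nrmh_transition(pi, Q, Gamma):
    """Non-reversible MH-type transition matrix: the acceptance probability
    is modified by a vorticity term Gamma (skew-symmetric, zero row sums).
    This is one concrete realisation of the idea, assumed for illustration."""
    n = len(pi)
    T = np.zeros((n, n))
    for x in range(n):
        for y in range(n):
            if x == y or Q[x, y] == 0:
                continue
            num = Gamma[x, y] + pi[y] * Q[y, x]
            assert num >= 0, "vorticity too large for this proposal/target"
            A = min(1.0, num / (pi[x] * Q[x, y]))  # modified acceptance
            T[x, y] = Q[x, y] * A
        T[x, x] = 1.0 - T[x].sum()                 # rejection mass stays put
    return T

if __name__ == "__main__":
    m = 6
    pi = np.full(m, 1.0 / m)                  # uniform target on a cycle
    Q = np.zeros((m, m))
    for i in range(m):                        # nearest-neighbour proposal
        Q[i, (i + 1) % m] = Q[i, (i - 1) % m] = 0.5
    eps = 0.05                                # vorticity strength (arbitrary)
    Gamma = np.zeros((m, m))
    for i in range(m):                        # circulation around the cycle
        Gamma[i, (i + 1) % m] = eps
        Gamma[i, (i - 1) % m] = -eps
    T = nrmh_transition(pi, Q, Gamma)
    F = pi[:, None] * T                       # probability flow pi(x)T(x,y)
    print("rows sum to 1:       ", np.allclose(T.sum(axis=1), 1.0))
    print("pi is stationary:    ", np.allclose(pi @ T, pi))
    print("chain non-reversible:", not np.allclose(F, F.T))
```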

    Near Optimal Bounds for Collision in Pollard Rho for Discrete Log

    We analyze a fairly standard idealization of Pollard's Rho algorithm for finding the discrete logarithm in a cyclic group G. It is found that, with high probability, a collision occurs in $O(\sqrt{|G|\log|G|\log\log|G|})$ steps, not far from the widely conjectured value of $\Theta(\sqrt{|G|})$. This improves upon a recent result of Miller-Venkatesan which showed an upper bound of $O(\sqrt{|G|}\log^3|G|)$. Our proof is based on analyzing an appropriate nonreversible, non-lazy random walk on a discrete cycle of (odd) length $|G|$, and showing that the mixing time of the corresponding walk is $O(\log|G|\log\log|G|)$.
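
    The idealized walk can be simulated directly in the exponent: the sketch below runs a three-branch rho-type walk on $\mathbb{Z}_N$ (add 1, add a fixed random shift, or double, each branch chosen uniformly at random) and records the first time a value repeats, for comparison with $\sqrt{N}$. The modulus, the branch distribution, and the number of trials are illustrative assumptions, not details taken from the paper.

```python
import math
import random

def rho_collision_time(N, rng):
    """Idealized Pollard rho walk on Z_N: at each step add 1, add a fixed
    random shift s, or double (mod N), each branch chosen uniformly at
    random.  Returns the number of steps until some value is revisited."""
    s = rng.randrange(N)
    x = rng.randrange(N)
    seen = {x}
    steps = 0
    while True:
        move = rng.randrange(3)
        if move == 0:
            x = (x + 1) % N
        elif move == 1:
            x = (x + s) % N
        else:
            x = (2 * x) % N
        steps += 1
        if x in seen:
            return steps
        seen.add(x)

if __name__ == "__main__":
    rng = random.Random(1)
    N = 100003                     # odd group order, arbitrary for illustration
    trials = 20
    times = [rho_collision_time(N, rng) for _ in range(trials)]
    print("sqrt(N)              = %.0f" % math.sqrt(N))
    print("mean collision time  = %.0f" % (sum(times) / trials))
```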