
    Location-Aided Fast Distributed Consensus in Wireless Networks

    Existing works on distributed consensus explore linear iterations based on reversible Markov chains, which contribute to the slow convergence of the algorithms. It has been observed that by overcoming the diffusive behavior of reversible chains, certain nonreversible chains lifted from reversible ones mix substantially faster than the original chains. In this paper, we investigate the idea of accelerating distributed consensus via lifting Markov chains, and propose a class of Location-Aided Distributed Averaging (LADA) algorithms for wireless networks, where nodes' coarse location information is used to construct nonreversible chains that facilitate distributed computing and cooperative processing. First, two general pseudo-algorithms are presented to illustrate the notion of distributed averaging through chain-lifting. These pseudo-algorithms are then respectively instantiated through one LADA algorithm on grid networks, and one on general wireless networks. For a $k \times k$ grid network, the proposed LADA algorithm achieves an $\epsilon$-averaging time of $O(k \log(\epsilon^{-1}))$. Based on this algorithm, in a wireless network with transmission range $r$, an $\epsilon$-averaging time of $O(r^{-1} \log(\epsilon^{-1}))$ can be attained through a centralized algorithm. Subsequently, we present a fully-distributed LADA algorithm for wireless networks, which utilizes only the direction information of neighbors to construct nonreversible chains. It is shown that this distributed LADA algorithm achieves the same scaling law in averaging time as the centralized scheme. Finally, we propose a cluster-based LADA (C-LADA) algorithm, which, requiring no central coordination, provides the additional benefit of reduced message complexity compared with the distributed LADA algorithm. Comment: 44 pages, 14 figures. Submitted to IEEE Transactions on Information Theory.
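
    A minimal numerical sketch of the chain-lifting idea follows (it is not the paper's LADA algorithm): a reversible lazy random walk on a k-cycle is compared with a Diaconis-Holmes-Neal-style lifted walk that carries a direction bit and reverses it only with probability about 1/k, which removes the diffusive behavior and brings the mixing time from O(k^2) down to O(k). The cycle size and step count below are arbitrary demo values.

import numpy as np

# Reversible lazy random walk on a k-cycle vs. a nonreversible "lifted" walk
# that keeps a direction bit and flips it only with probability ~1/k.
# (Illustrative sketch only; not the paper's LADA algorithm.)

def tv_to_uniform(p):
    """Total-variation distance of a distribution p from the uniform one."""
    return 0.5 * np.abs(p - 1.0 / len(p)).sum()

def reversible_walk(k):
    """Lazy simple random walk on the k-cycle (reversible, diffusive)."""
    P = np.zeros((k, k))
    for i in range(k):
        P[i, i] = 0.5
        P[i, (i + 1) % k] = 0.25
        P[i, (i - 1) % k] = 0.25
    return P

def lifted_walk(k):
    """Nonreversible lifted chain on 2k states (position, direction)."""
    P = np.zeros((2 * k, 2 * k))
    flip = 1.0 / k
    for i in range(k):
        P[i, (i + 1) % k] = 1 - flip            # keep going clockwise
        P[i, k + i] = flip                      # rarely switch direction
        P[k + i, k + (i - 1) % k] = 1 - flip    # keep going counter-clockwise
        P[k + i, i] = flip
    return P

k, steps = 101, 1000        # odd k keeps the lifted chain aperiodic
p_rev = np.zeros(k); p_rev[0] = 1.0
p_lift = np.zeros(2 * k); p_lift[0] = 1.0
P_rev, P_lift = reversible_walk(k), lifted_walk(k)
for _ in range(steps):
    p_rev = p_rev @ P_rev
    p_lift = p_lift @ P_lift
# Project the lifted chain back onto positions before comparing; after the
# same number of steps the lifted walk is typically far closer to uniform.
print("reversible walk, TV to uniform:", tv_to_uniform(p_rev))
print("lifted walk,     TV to uniform:", tv_to_uniform(p_lift.reshape(2, k).sum(axis=0)))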

    Approximate Boltzmann Distributions for Nonreversible Markov Chains

    While powerful theories for the analysis of reversible Markov chains have enabled significant mathematical advances, nonequilibrium phenomena dominate the sciences and nonequilibrium chains do not enjoy the same formal foundations. For instance, the stationary distributions of reversible chains are fundamentally simpler than those of nonreversible chains because they are Boltzmann distributions -- they can be expressed in terms of a purely local "free energy" landscape, in analogy with equilibrium statistical physics. In general, it is impossible to similarly represent the steady states of nonequilibrium physical systems in a purely local way. However, a series of recent works on rattling theory (e.g., Chvykov et al., Science (2021)) provides strong evidence that a broad class of such systems nevertheless exhibit "approximate Boltzmann distributions," which allow some aspects of the global distributions to be inferred, at least approximately, from local information. We formalize the main claims of this physical theory to identify its hidden assumptions and demonstrate its basis in the theory of continuous-time Markov chains. To do so, we decompose an arbitrary stationary distribution $\pi$ into its "local" part -- the exit rates $q$ out of each state -- and its "global" part -- the stationary distribution $\psi$ of the embedded "jump" chain. We explain a variety of experimental results by showing that, for a random state, $\log \pi$ and $-\log q$ are correlated to the extent that $\log \psi$ and $-\log q$ are correlated or the ratio of their variances is small. In particular, the predictions of rattling theory apply when the global part of $\pi$ varies over fewer scales than its local part. We use this fact to demonstrate classes of nonreversible chains with stationary distributions that are exactly of Boltzmann type. Comment: 14 pages, 1 figure.
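
    The local/global decomposition described above is straightforward to reproduce numerically. The sketch below (an illustration, not the paper's analysis) builds a random continuous-time generator, recovers its stationary distribution from the exit rates $q$ and the jump-chain distribution $\psi$ via $\pi_i \propto \psi_i / q_i$, and reports the correlation between $\log \pi$ and $-\log q$; the generator and its size are arbitrary choices.

import numpy as np

# Decomposition pi_i ∝ psi_i / q_i for a random continuous-time Markov chain:
# q_i is the exit rate of state i (the "local" part) and psi is the stationary
# distribution of the embedded jump chain (the "global" part).

rng = np.random.default_rng(0)
n = 50
Q = rng.exponential(size=(n, n))        # off-diagonal transition rates
np.fill_diagonal(Q, 0.0)
np.fill_diagonal(Q, -Q.sum(axis=1))     # generator: rows sum to zero

q = -np.diag(Q)                         # exit rates (local part)
P = Q / q[:, None]
np.fill_diagonal(P, 0.0)                # embedded jump chain (row-stochastic)

def stationary(M):
    """Left Perron eigenvector of a row-stochastic matrix."""
    vals, vecs = np.linalg.eig(M.T)
    v = np.abs(np.real(vecs[:, np.argmin(np.abs(vals - 1.0))]))
    return v / v.sum()

psi = stationary(P)                     # global part
pi = psi / q
pi /= pi.sum()

print("max |pi Q| (should be ~0):", np.abs(pi @ Q).max())
# Rattling-style prediction: log(pi) and -log(q) are strongly correlated when
# the global part psi varies over fewer scales than the local part q.
print("corr(log pi, -log q):", np.corrcoef(np.log(pi), -np.log(q))[0, 1])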

    Role of current fluctuations in nonreversible samplers

    It is known that the distribution of nonreversible Markov processes breaking the detailed balance condition converges faster to the stationary distribution compared to reversible processes having the same stationary distribution. This is used in practice to accelerate Markov chain Monte Carlo algorithms that sample the Gibbs distribution by adding nonreversible transitions or non-gradient drift terms. The breaking of detailed balance also accelerates the convergence of empirical estimators to their ergodic expectation in the long-time limit. Here, we give a physical interpretation of this second form of acceleration in terms of currents associated with the fluctuations of empirical estimators using the level 2.5 of large deviations, which characterises the likelihood of density and current fluctuations in Markov processes. Focusing on diffusion processes, we show that there is accelerated convergence because estimator fluctuations arise in general with current fluctuations, leading to an added large deviation cost compared to the reversible case, which shows no current. We study the current fluctuation most likely to arise in conjunction with a given estimator fluctuation and provide bounds on the acceleration, based on approximations of this current. We illustrate these results for the Ornstein-Uhlenbeck process in two dimensions and the Brownian motion on the circle. Comment: v1: 14 pages, 2 figures. v2: minor corrections, close to published version.
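
    As a hedged, concrete illustration of this acceleration (not the paper's computation), the sketch below simulates a two-dimensional Ornstein-Uhlenbeck process with and without an added divergence-free, non-gradient drift; both variants have the standard Gaussian as stationary distribution, and the variance of the ergodic estimator of E[x_1^2] is typically smaller in the nonreversible case. The step size, horizon, and circulation strength are arbitrary demo values.

import numpy as np

# Ornstein-Uhlenbeck in 2d: dX = -(I + c*S) X dt + sqrt(2) dW, with S
# antisymmetric.  For any c the stationary law is the standard Gaussian, but
# c != 0 breaks detailed balance and typically reduces the variance of
# time-averaged estimators.

rng = np.random.default_rng(1)
dt, n_steps, n_rep = 1e-2, 20_000, 200
S = np.array([[0.0, 1.0], [-1.0, 0.0]])        # divergence-free circulation

def ergodic_estimates(c):
    """Time average of x_1^2 (true value 1) for n_rep independent replicas."""
    drift = -(np.eye(2) + c * S)
    x = rng.standard_normal((n_rep, 2))        # start in stationarity
    acc = np.zeros(n_rep)
    for _ in range(n_steps):
        x = x + dt * (x @ drift.T) + np.sqrt(2.0 * dt) * rng.standard_normal((n_rep, 2))
        acc += x[:, 0] ** 2
    return acc / n_steps

for c in (0.0, 5.0):
    est = ergodic_estimates(c)
    print(f"c = {c}: mean = {est.mean():.3f}, estimator variance = {est.var():.5f}")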

    Near Optimal Bounds for Collision in Pollard Rho for Discrete Log

    We analyze a fairly standard idealization of Pollard's Rho algorithm for finding the discrete logarithm in a cyclic group G. It is found that, with high probability, a collision occurs in $O(\sqrt{|G| \log |G| \log \log |G|})$ steps, not far from the widely conjectured value of $\Theta(\sqrt{|G|})$. This improves upon a recent result of Miller--Venkatesan which showed an upper bound of $O(\sqrt{|G|} \log^3 |G|)$. Our proof is based on analyzing an appropriate nonreversible, non-lazy random walk on a discrete cycle of (odd) length $|G|$, and showing that the mixing time of the corresponding walk is $O(\log |G| \log \log |G|)$.
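
    For context, the sketch below is the textbook Pollard Rho for discrete logarithms with Floyd cycle detection, i.e., the kind of walk whose idealization the result above analyzes; the prime, generator, and secret exponent are made-up demo values, and the usual three-way partition stands in for the idealized random walk.

from math import gcd

# Textbook Pollard rho for the discrete logarithm h = g^x in Z_p^* with
# Floyd cycle detection.  Demo-sized, arbitrary parameters.

p, g = 7919, 7
n, y = 1, g                                   # order of g by brute force (fine at demo size)
while y != 1:
    y, n = y * g % p, n + 1
secret = 1234 % n
h = pow(g, secret, p)                         # the logarithm we pretend not to know

def step(y, a, b):
    """One pseudo-random step, maintaining y = g^a * h^b (mod p)."""
    if y % 3 == 0:
        return y * y % p, 2 * a % n, 2 * b % n
    if y % 3 == 1:
        return y * g % p, (a + 1) % n, b
    return y * h % p, a, (b + 1) % n

slow, fast = (1, 0, 0), (1, 0, 0)             # tortoise and hare
while True:
    slow = step(*slow)
    fast = step(*step(*fast))
    if slow[0] == fast[0]:                    # collision in the group
        break

# g^a1 h^b1 = g^a2 h^b2  =>  (b1 - b2) x = a2 - a1 (mod n); solve the congruence.
(_, a1, b1), (_, a2, b2) = slow, fast
db, da = (b1 - b2) % n, (a2 - a1) % n
d = gcd(db, n)
if da % d == 0:
    x0 = (da // d) * pow(db // d, -1, n // d) % (n // d)
    for k in range(d):                        # at most d candidate solutions to check
        x = x0 + k * (n // d)
        if pow(g, x, p) == h:
            print("recovered x =", x, " (true secret:", secret, ")")
            break
else:
    print("degenerate collision; rerun with a different starting point")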

    Estimating the Sampling Error: Distribution of Transition Matrices and Functions of Transition Matrices for Given Trajectory Data

    The problem of estimating a Markov transition matrix to statistically describe the dynamics underlying an observed process is frequently found in the physical and economic sciences. However, little attention has been paid to the fact that such an estimation is associated with statistical uncertainty, which depends on the number of observed transitions between metastable states. In turn, this induces uncertainties in any property computed from the transition matrix, such as stationary probabilities, committor probabilities, or eigenvalues. Assessing these uncertainties is essential for testing the reliability of a given observation and also, if possible, for planning further simulations or measurements in such a way that the most serious uncertainties will be reduced with minimal effort. Here, a rigorous statistical method is proposed to approximate the complete statistical distribution of functions of the transition matrix, provided that one can identify discrete states such that the transition process between them may be modeled with a memoryless jump process, i.e., Markov dynamics. The method is based on sampling the statistical distribution of Markov transition matrices that is induced by the observed transition events. It allows the constraint of reversibility to be included, which is physically meaningful in many applications. The method is illustrated on molecular dynamics simulations of a hexapeptide that are modeled by a Markov transition process between the metastable states. For this model the distributions and uncertainties of the stationary probabilities of metastable states, the transition matrix elements, the committor probabilities, and the transition matrix eigenvalues are estimated. It is found that the detailed balance constraint can significantly alter the distribution of some observables.
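
    A minimal sketch of the sampling idea, assuming the common conjugate setup in which each transition-matrix row receives an independent Dirichlet posterior given the observed transition counts; the counts below are made up, and the reversibility constraint discussed in the abstract is omitted.

import numpy as np

# Sample transition matrices from a row-wise Dirichlet posterior given
# hypothetical transition counts, and propagate the uncertainty to the
# stationary distribution.  The detailed-balance constraint supported by the
# paper's method is not implemented here.

rng = np.random.default_rng(0)
counts = np.array([[90,  8,  2],        # made-up observed transition counts
                   [10, 70, 20],
                   [ 5, 15, 80]])
prior = 1.0                             # flat Dirichlet prior per row

def stationary(P):
    """Left Perron eigenvector of a row-stochastic matrix."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.abs(np.real(vecs[:, np.argmin(np.abs(vals - 1.0))]))
    return v / v.sum()

draws = np.array([stationary(np.vstack([rng.dirichlet(row + prior) for row in counts]))
                  for _ in range(5000)])

print("posterior mean of stationary probabilities:", draws.mean(axis=0))
print("posterior std  of stationary probabilities:", draws.std(axis=0))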

    Compositional Approximate Markov Chain Aggregation for PEPA Models


    Information-Preserving Markov Aggregation

    We present a sufficient condition for a non-injective function of a Markov chain to be a second-order Markov chain with the same entropy rate as the original chain. This permits an information-preserving state space reduction by merging states or, equivalently, lossless compression of a Markov source on a sample-by-sample basis. The cardinality of the reduced state space is bounded from below by the node degrees of the transition graph associated with the original Markov chain. We also present an algorithm listing all possible information-preserving state space reductions for a given transition graph. We illustrate our results by applying the algorithm to a bi-gram letter model of an English text. Comment: 7 pages, 3 figures, 2 tables.
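
    The sketch below illustrates the flavor of the result with a hand-built example rather than the paper's algorithm or its sufficient condition: in a three-state chain where B can only be entered from A and C only from B, merging {B, C} into a single symbol X is lossless, the lumped process is second-order Markov, and its empirical entropy rate matches that of the original chain.

import numpy as np
from collections import Counter

# Three-state chain: A -> A or B, B -> C, C -> A.  Since B is only entered
# from A and C only from B, lumping {B, C} into one symbol X is lossless:
# an X following A must be B, an X following X must be C.

q = 0.3
P = np.array([[1 - q, q, 0.0],      # A
              [0.0, 0.0, 1.0],      # B
              [1.0, 0.0, 0.0]])     # C

pi_A = 1.0 / (1.0 + 2.0 * q)        # stationary probability of A (by hand)
h_bin = -(q * np.log2(q) + (1 - q) * np.log2(1 - q))
print("entropy rate of the original chain:", pi_A * h_bin, "bits/symbol")

# Estimate the entropy rate of the lumped sequence with a second-order model.
rng = np.random.default_rng(0)
T = 200_000
state, traj = 0, []
for _ in range(T):
    traj.append("A" if state == 0 else "X")      # lumping {B, C} -> X
    state = rng.choice(3, p=P[state])

pairs = Counter(zip(traj, traj[1:]))
triples = Counter(zip(traj, traj[1:], traj[2:]))
H2 = 0.0
for (u, v, w), c in triples.items():
    H2 -= (c / (T - 2)) * np.log2(c / pairs[(u, v)])   # up to edge effects
print("second-order entropy rate of the lumped sequence:", H2, "bits/symbol")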

    Ergodicity of the zigzag process

    The zigzag process is a Piecewise Deterministic Markov Process which can be used in an MCMC framework to sample from a given target distribution. We prove the convergence of this process to its target under very weak assumptions, and establish a central limit theorem for empirical averages under stronger assumptions on the decay of the target measure. We use the classical "Meyn-Tweedie" approach. The main difficulty turns out to be the proof that the process can indeed reach all the points in the space, even if we consider the minimal switching rates.
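
    For illustration, here is a minimal one-dimensional zigzag sampler for a standard Gaussian target (a hedged sketch, not the paper's general setting): the velocity is ±1 and flips at rate max(0, θ U'(x)) = max(0, θ x), which for this target can be simulated exactly by inverting the integrated rate; the time average of x^2 should approach Var(X) = 1.

import numpy as np

# One-dimensional zigzag process targeting the standard Gaussian U(x) = x^2/2,
# using the minimal switching rate max(0, theta * x).

rng = np.random.default_rng(0)
x, theta = 0.0, 1.0
total_time, integral_x2 = 0.0, 0.0
for _ in range(100_000):
    a = theta * x
    tau = -a + np.sqrt(max(a, 0.0) ** 2 + 2.0 * rng.exponential())   # next switching time
    x_new = x + theta * tau
    integral_x2 += (x_new ** 3 - x ** 3) / (3.0 * theta)             # integral of x(t)^2 over the segment
    total_time += tau
    x, theta = x_new, -theta                                         # deterministic velocity flip at the event
print("time average of x^2:", integral_x2 / total_time)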