The Best Mixing Time for Random Walks on Trees
We characterize the extremal structures for mixing walks on trees that start
from the most advantageous vertex. Let be a tree with stationary
distribution . For a vertex , let denote the expected
length of an optimal stopping rule from to . The \emph{best mixing
time} for is . We show that among all trees with
, the best mixing time is minimized uniquely by the star. For even ,
the best mixing time is maximized uniquely by the path. Surprisingly, for odd
, the best mixing time is maximized uniquely by a path of length with
a single leaf adjacent to one central vertex.
Comment: 25 pages, 7 figures, 3 tables
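Stopping-rule mixing times on small trees can be computed exactly. The sketch below is not from the paper; it assumes the Lovász–Winkler access-time formula, by which the mean length of an optimal stopping rule from a start vertex v to the stationary distribution pi equals max over halting states j of H(v, j) minus the pi-average of H(i, j), where H is the hitting-time matrix of the simple random walk. The helpers `hitting_times`, `best_mixing_time`, `star`, and `path` are illustrative names, not from the paper.

```python
import numpy as np

def hitting_times(adj):
    """H[u, v] = expected time for the simple random walk from u to first hit v."""
    n = len(adj)
    P = adj / adj.sum(axis=1)[:, None]
    H = np.zeros((n, n))
    for v in range(n):
        idx = [u for u in range(n) if u != v]
        # Hitting times to v solve (I - P restricted to states != v) h = 1.
        A = np.eye(n - 1) - P[np.ix_(idx, idx)]
        H[idx, v] = np.linalg.solve(A, np.ones(n - 1))
    return H

def best_mixing_time(adj):
    """min over start vertices v of the mean optimal stopping-rule length from v
    to pi, via the formula T(v, pi) = max_j (H[v, j] - sum_i pi[i] * H[i, j])."""
    deg = adj.sum(axis=1)
    pi = deg / deg.sum()              # stationary distribution ~ degree
    H = hitting_times(adj)
    Hpi = pi @ H                      # pi-averaged hitting time to each j
    return (H - Hpi).max(axis=1).min()

def star(n):                          # one center adjacent to n - 1 leaves
    A = np.zeros((n, n))
    A[0, 1:] = A[1:, 0] = 1
    return A

def path(n):                          # path on n vertices
    A = np.zeros((n, n))
    i = np.arange(n - 1)
    A[i, i + 1] = A[i + 1, i] = 1
    return A

print(best_mixing_time(star(8)), best_mixing_time(path(8)))
```

For n = 8 the star's value comes out to 1/2 (attained by starting at the center) and is smaller than the path's, consistent with the star being the unique minimizer.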
Location-Aided Fast Distributed Consensus in Wireless Networks
Existing works on distributed consensus explore linear iterations based on
reversible Markov chains, which contribute to the slow convergence of the
algorithms. It has been observed that by overcoming the diffusive behavior of
reversible chains, certain nonreversible chains lifted from reversible ones mix
substantially faster than the original chains. In this paper, we investigate
the idea of accelerating distributed consensus via lifting Markov chains, and
propose a class of Location-Aided Distributed Averaging (LADA) algorithms for
wireless networks, where nodes' coarse location information is used to
construct nonreversible chains that facilitate distributed computing and
cooperative processing. First, two general pseudo-algorithms are presented to
illustrate the notion of distributed averaging through chain-lifting. These
pseudo-algorithms are then respectively instantiated through one LADA algorithm
on grid networks, and one on general wireless networks. For a grid
network, the proposed LADA algorithm achieves an -averaging time of
. Based on this algorithm, in a wireless network with
transmission range , an -averaging time of
can be attained through a centralized algorithm.
Subsequently, we present a fully-distributed LADA algorithm for wireless
networks, which utilizes only the direction information of neighbors to
construct nonreversible chains. It is shown that this distributed LADA
algorithm achieves the same scaling law in averaging time as the centralized
scheme. Finally, we propose a cluster-based LADA (C-LADA) algorithm, which,
requiring no central coordination, provides the additional benefit of reduced
message complexity compared with the distributed LADA algorithm.
Comment: 44 pages, 14 figures. Submitted to IEEE Transactions on Information Theory
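The speedup from lifting can already be seen on a cycle. The sketch below is not the LADA algorithm; it is a Diaconis–Holmes–Neal-style lift of the walk on an n-cycle (position plus a direction bit, flipped with probability 1/n), compared against the reversible lazy walk by counting steps until the total-variation distance to the uniform distribution drops below 1/4.

```python
import numpy as np

def tv(p, q):
    """Total-variation distance between two distributions."""
    return 0.5 * np.abs(p - q).sum()

def steps_to_mix(P, mu, target, marginal=lambda m: m, eps=0.25, cap=100000):
    """Number of steps until the (marginal) distribution is eps-close to target."""
    for t in range(1, cap + 1):
        mu = mu @ P
        if tv(marginal(mu), target) < eps:
            return t
    return cap

n = 101                     # odd, so both walks below are aperiodic
uniform = np.full(n, 1.0 / n)

# Reversible chain: lazy simple random walk on the n-cycle (mixes in ~n^2).
P_rev = np.zeros((n, n))
for i in range(n):
    P_rev[i, i] = 0.5
    P_rev[i, (i + 1) % n] = 0.25
    P_rev[i, (i - 1) % n] = 0.25

# Nonreversible lift: states (position, direction); keep the direction with
# probability 1 - 1/n, flip it with probability 1/n, then take one step.
P_lift = np.zeros((2 * n, 2 * n))
for i in range(n):
    P_lift[i, (i + 1) % n] = 1 - 1 / n          # (i, +) -> (i+1, +)
    P_lift[i, n + (i - 1) % n] = 1 / n          # (i, +) -> (i-1, -)
    P_lift[n + i, n + (i - 1) % n] = 1 - 1 / n  # (i, -) -> (i-1, -)
    P_lift[n + i, (i + 1) % n] = 1 / n          # (i, -) -> (i+1, +)

mu_rev = np.zeros(n); mu_rev[0] = 1.0
mu_lift = np.zeros(2 * n); mu_lift[0] = 1.0
t_rev = steps_to_mix(P_rev, mu_rev, uniform)
t_lift = steps_to_mix(P_lift, mu_lift, uniform,
                      marginal=lambda m: m[:n] + m[n:])
print(t_rev, t_lift)        # the lifted chain mixes far faster
```

The lifted chain is doubly stochastic, so its position marginal still converges to uniform; overcoming the diffusive back-and-forth of the reversible walk is exactly what cuts the averaging time in the wireless setting.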
Dynamic Size Counting in Population Protocols
The population protocol model describes a network of anonymous agents that
interact asynchronously in pairs chosen at random. Each agent starts in the
same initial state . We introduce the *dynamic size counting* problem:
approximately counting the number of agents in the presence of an adversary who
at any time can remove any number of agents or add any number of new agents in
state . A valid solution requires that after each addition/removal event,
resulting in population size , with high probability each agent "quickly"
computes the same constant-factor estimate of the value (how quickly
is called the *convergence* time), which remains the output of every agent for
as long as possible (the *holding* time). Since the adversary can remove
agents, the holding time is necessarily finite: even after the adversary stops
altering the population, it is impossible to *stabilize* to an output that
never again changes.
We first show that a protocol solves the dynamic size counting problem if and
only if it solves the *loosely-stabilizing counting* problem: that of
estimating in a *fixed-size* population, but where the adversary can
initialize each agent in an arbitrary state, with the same convergence time and
holding time. We then show a protocol solving the loosely-stabilizing counting
problem with the following guarantees: if the population size is , is
the largest initial estimate of , and s is the maximum integer
initially stored in any field of the agents' memory, we have expected
convergence time , expected polynomial holding time, and
expected memory usage of bits. Interpreted as
a dynamic size counting protocol, when changing from population size
to , the convergence time is
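As a toy illustration of anonymous size estimation (this is not the paper's protocol), a standard trick is to give each agent a Geometric(1/2) sample and spread the maximum by epidemic: the maximum of n such samples concentrates near log2 n, so 2 to that maximum estimates n within a polylogarithmic factor with high probability. A minimal Python sketch with uniformly random pairwise interactions; the function name and interaction budget are illustrative assumptions:

```python
import random

def size_estimates(n, seed=1):
    """Each agent stores G ~ Geometric(1/2) (fair-coin flips before heads);
    random pairwise interactions propagate the maximum by epidemic; every
    agent then outputs 2**max, a rough estimate of n (w.h.p.)."""
    rng = random.Random(seed)
    def geometric():
        k = 0
        while rng.random() < 0.5:
            k += 1
        return k
    state = [geometric() for _ in range(n)]
    # ~O(n log n) random interactions suffice for the epidemic to saturate;
    # we use a generous constant-factor budget.
    for _ in range(20 * n * max(1, n.bit_length())):
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j:
            state[i] = state[j] = max(state[i], state[j])
    return [2 ** g for g in state]

print(size_estimates(1000)[0])
```

In the dynamic setting of the paper, stale maxima left behind by removed agents are exactly what forces the loosely-stabilizing treatment: the adversary can plant arbitrarily large values in memory, so estimates must eventually be re-derived rather than held forever.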
Reversal of Markov Chains and the Forget Time
We study three quantities that can each be viewed as the time needed for a finite irreducible Markov chain to "forget" where it started. One of these is the mixing time, the minimum mean length of a stopping rule that yields the stationary distribution from the worst starting state. A second is the forget time, the minimum mean length of any stopping rule that yields the same distribution from any starting state. The third is the reset time, the minimum expected time between independent samples from the stationary distribution. Our main results state that the mixing time of a chain is equal to the mixing time of the time-reversed chain, while the forget time of a chain is equal to the reset time of the reverse chain. In particular, the forget time and the reset time of a time-reversible chain are equal. Moreover, the mixing time lies between absolute constant multiples of the sum of the forget time and the reset time. We also derive an explicit formula for the forget time, in terms of ...
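For concreteness, the time reversal of a chain with transition matrix P and stationary distribution pi is the chain with Phat[i, j] = pi[j] * P[j, i] / pi[i]; it shares the stationary distribution pi, and the chain is reversible exactly when Phat = P. A small NumPy sketch constructing the reversal and checking these facts (the example chain is illustrative, not from the paper):

```python
import numpy as np

def stationary(P):
    """Stationary distribution: the left eigenvector of P for eigenvalue 1."""
    w, V = np.linalg.eig(P.T)
    v = np.real(V[:, np.argmin(np.abs(w - 1))])
    return v / v.sum()

def reverse(P):
    """Time reversal: Phat[i, j] = pi[j] * P[j, i] / pi[i]."""
    pi = stationary(P)
    return (P.T * pi) / pi[:, None]

# Illustrative nonreversible chain; it is doubly stochastic, so pi is
# uniform and the reversal reduces to the transpose.
P = np.array([[0.0, 0.7, 0.3],
              [0.3, 0.0, 0.7],
              [0.7, 0.3, 0.0]])
Phat = reverse(P)
```

Because this example chain is nonreversible, Phat differs from P even though both have the same (uniform) stationary distribution, which is exactly the situation where the paper's forget time and reset time can differ.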