Approximate Consensus in Highly Dynamic Networks: The Role of Averaging Algorithms
In this paper, we investigate the approximate consensus problem in highly
dynamic networks in which topology may change continually and unpredictably. We
prove that in both synchronous and partially synchronous systems, approximate
consensus is solvable if and only if the communication graph in each round has
a rooted spanning tree, i.e., there is a coordinator at each time. The striking
point in this result is that the coordinator is not required to be unique and
can change arbitrarily from round to round. Interestingly, the class of
averaging algorithms, which are memoryless and require no process identifiers,
entirely captures the solvability issue of approximate consensus in that the
problem is solvable if and only if it can be solved using any averaging
algorithm. Concerning the time complexity of averaging algorithms, we show that
approximate consensus to within any given precision can be achieved in a
coordinated network model in a bounded number of synchronous rounds, and in a
proportionally larger number of rounds when message delivery may be delayed by
a bounded number of rounds. While in general, an upper bound on the time
complexity of averaging algorithms has to be exponential in the number of
nodes, we investigate various network models in which this exponential bound
reduces to a polynomial bound. We apply our
results to networked systems with a fixed topology and classical benign fault
models, and deduce both known and new results for approximate consensus in
these systems. In particular, we show that for solving approximate consensus, a
complete network can tolerate up to 2n-3 arbitrarily located link faults at
every round, in contrast with the impossibility result established by Santoro
and Widmayer (STACS '89) showing that exact consensus is not solvable with n-1
link faults per round originating from the same node.
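A minimal sketch of the kind of memoryless averaging algorithm the abstract describes, under the illustrative assumption that each round's communication graph is a star rooted at that round's coordinator (which trivially has a rooted spanning tree); the function names, initial values, and precision here are hypothetical:

```python
import random

def averaging_round(values, root):
    """One round of a memoryless averaging algorithm: every node other
    than the coordinator moves to the midpoint of its own value and the
    coordinator's value. Nodes use no identifiers and keep no history."""
    return [v if i == root else (v + values[root]) / 2.0
            for i, v in enumerate(values)]

def approximate_consensus(values, precision, rng=random):
    """Iterate rounds, with a coordinator that may change arbitrarily
    from round to round, until the spread max - min drops below the
    required precision."""
    rounds = 0
    while max(values) - min(values) >= precision:
        root = rng.randrange(len(values))  # coordinator changes arbitrarily
        values = averaging_round(values, root)
        rounds += 1
    return values, rounds

vals, used = approximate_consensus([0.0, 1.0, 4.0, 9.0], 1e-6)
spread = max(vals) - min(vals)
```

Whichever node is chosen as coordinator, the spread halves each round (the new maximum is (max + x_root)/2 and the new minimum is (min + x_root)/2), which is why the coordinator need not be unique or stable.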
Location-Aided Fast Distributed Consensus in Wireless Networks
Existing works on distributed consensus explore linear iterations based on
reversible Markov chains, which contribute to the slow convergence of the
algorithms. It has been observed that by overcoming the diffusive behavior of
reversible chains, certain nonreversible chains lifted from reversible ones mix
substantially faster than the original chains. In this paper, we investigate
the idea of accelerating distributed consensus via lifting Markov chains, and
propose a class of Location-Aided Distributed Averaging (LADA) algorithms for
wireless networks, where nodes' coarse location information is used to
construct nonreversible chains that facilitate distributed computing and
cooperative processing. First, two general pseudo-algorithms are presented to
illustrate the notion of distributed averaging through chain-lifting. These
pseudo-algorithms are then respectively instantiated through one LADA algorithm
on grid networks, and one on general wireless networks. For a grid
network, the proposed LADA algorithm achieves a provable bound on the
ε-averaging time. Based on this algorithm, in a wireless network with a
given transmission range, a corresponding ε-averaging time bound can be
attained through a centralized algorithm.
Subsequently, we present a fully-distributed LADA algorithm for wireless
networks, which utilizes only the direction information of neighbors to
construct nonreversible chains. It is shown that this distributed LADA
algorithm achieves the same scaling law in averaging time as the centralized
scheme. Finally, we propose a cluster-based LADA (C-LADA) algorithm, which,
requiring no central coordination, provides the additional benefit of reduced
message complexity compared with the distributed LADA algorithm.

Comment: 44 pages, 14 figures. Submitted to IEEE Transactions on Information
Theory.
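A small sketch of the chain-lifting idea the LADA algorithms build on, with assumed parameters (cycle length, step count, and flip probability are illustrative): a lazy reversible random walk on an n-cycle is compared with a nonreversible walk lifted onto two directed copies of the cycle, which keeps moving in one direction and only occasionally reverses:

```python
n = 51          # odd cycle length keeps the lifted chain aperiodic
T = 300         # number of steps to run both chains

def step_reversible(p):
    # Lazy reversible walk: stay w.p. 1/2, move to each neighbour w.p. 1/4.
    return [p[i] / 2 + p[(i - 1) % n] / 4 + p[(i + 1) % n] / 4
            for i in range(n)]

def step_lifted(q_cw, q_ccw):
    # Lifted walk: in each directed copy, keep moving in that copy's
    # direction w.p. 1 - 1/n, otherwise switch copies without moving.
    flip = 1.0 / n
    new_cw = [q_cw[(i - 1) % n] * (1 - flip) + q_ccw[i] * flip
              for i in range(n)]
    new_ccw = [q_ccw[(i + 1) % n] * (1 - flip) + q_cw[i] * flip
               for i in range(n)]
    return new_cw, new_ccw

def tv_to_uniform(p):
    # Total-variation distance to the uniform distribution on the cycle.
    return sum(abs(x - 1.0 / n) for x in p) / 2

p = [0.0] * n; p[0] = 1.0        # reversible walk starts at node 0
cw = [0.0] * n; cw[0] = 1.0      # lifted walk starts at (node 0, clockwise)
ccw = [0.0] * n
for _ in range(T):
    p = step_reversible(p)
    cw, ccw = step_lifted(cw, ccw)
marginal = [a + b for a, b in zip(cw, ccw)]  # project out the direction

tv_rev = tv_to_uniform(p)
tv_lift = tv_to_uniform(marginal)
```

The lifted chain suppresses the diffusive back-and-forth of the reversible walk, which is the behavior the abstract identifies as the source of slow convergence; direction information plays the same role for LADA that the two directed copies play here.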
Consensus and Products of Random Stochastic Matrices: Exact Rate for Convergence in Probability
Distributed consensus and other linear systems with stochastic system
matrices W_k emerge in various settings, such as opinion formation in social
networks, rendezvous of robots, and distributed inference in sensor networks.
The matrices W_k are often random, due to, e.g., random packet dropouts in
wireless sensor networks. Key to analyzing the performance of such systems is
studying the convergence of the matrix products W_k W_{k-1} ... W_1. In this
paper, we find the exact exponential rate for the convergence in probability
of this product as the time k grows large, under the assumption that the
W_k's are symmetric and independent identically distributed in time. Further,
for commonly used random models like gossip and link failure, we show that
the rate is found by solving a min-cut problem and is, hence, easily
computable. Finally, we apply our results to optimally allocate the sensors'
transmission power in consensus+innovations distributed detection.
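A sketch of the matrix-product view under an assumed concrete model (pairwise gossip on a complete graph; the values, seed, and round count are illustrative): each round applies a random symmetric stochastic matrix W_k that averages one randomly chosen pair, and the product W_k ... W_1 drives every entry toward the initial average at an exponential rate:

```python
import random

def gossip_round(x, rng):
    # One random symmetric stochastic matrix W_k: pick a pair (i, j)
    # uniformly at random and replace both values by their average.
    i, j = rng.sample(range(len(x)), 2)
    avg = (x[i] + x[j]) / 2.0
    x = list(x)
    x[i] = x[j] = avg
    return x

rng = random.Random(0)
x = [3.0, -1.0, 4.0, 1.0, -5.0, 9.0]
target = sum(x) / len(x)          # averaging preserves the mean
for _ in range(300):
    x = gossip_round(x, rng)
spread = max(abs(v - target) for v in x)
```

Each W_k is symmetric and doubly stochastic, so the mean is invariant while the disagreement contracts; the abstract's result characterizes the exact exponential rate of that contraction in probability.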
Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
The goal of decentralized optimization over a network is to optimize a global
objective formed by a sum of local (possibly nonsmooth) convex functions using
only local computation and communication. It arises in various application
domains, including distributed tracking and localization, multi-agent
coordination, estimation in sensor networks, and large-scale optimization in
machine learning. We develop and analyze distributed algorithms based on dual
averaging of subgradients, and we provide sharp bounds on their convergence
rates as a function of the network size and topology. Our method of analysis
allows for a clear separation between the convergence of the optimization
algorithm itself and the effects of communication constraints arising from the
network structure. In particular, we show that the number of iterations
required by our algorithm scales inversely in the spectral gap of the network.
The sharpness of this prediction is confirmed both by theoretical lower bounds
and simulations for various networks. Our approach includes both the cases of
deterministic optimization and communication, as well as problems with
stochastic optimization and/or communication.

Comment: 40 pages, 4 figures.
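A minimal sketch of distributed dual averaging on an assumed toy instance (a ring of n agents minimizing the sum of quadratics f_i(x) = (x - b_i)^2 / 2, whose global minimizer is the mean of the b_i; the network, step-size schedule, and data are all illustrative): each agent mixes its dual variable with its neighbours' and adds its local subgradient, then takes a proximal step:

```python
import math

n = 4
b = [1.0, 2.0, 3.0, 4.0]     # local data; global optimum is mean(b) = 2.5
T = 5000
z = [0.0] * n                # dual variables (accumulated subgradients)
x = [0.0] * n                # primal iterates

def mix(z):
    # Doubly stochastic mixing on a ring: weight 1/2 on self,
    # 1/4 on each of the two neighbours.
    return [z[i] / 2 + z[(i - 1) % n] / 4 + z[(i + 1) % n] / 4
            for i in range(n)]

for t in range(1, T + 1):
    g = [x[i] - b[i] for i in range(n)]         # local subgradients of f_i
    z = [zi + gi for zi, gi in zip(mix(z), g)]  # consensus + dual update
    step = 1.0 / math.sqrt(t)                   # diminishing step size
    x = [-step * zi for zi in z]                # prox step for psi = ||x||^2/2
```

The mixing step is where the network's spectral gap enters: the slower disagreements in z decay under `mix`, the more iterations are needed, which is the scaling the abstract makes precise.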