Fast Discrete Consensus Based on Gossip for Makespan Minimization in Networked Systems
In this paper we propose a novel algorithm for the discrete consensus problem, i.e., the problem of evenly distributing a set of tokens of arbitrary weight among the nodes of a networked system. Tokens represent tasks to be executed by the nodes, and the proposed distributed algorithm monotonically decreases the makespan of the assigned tasks. The algorithm is based on gossip-like asynchronous local interactions between the nodes. Its convergence time improves on the state of the art in discrete and quantized consensus by at least a factor of O(n), in both theoretical and empirical comparisons.
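As a rough illustration of this style of gossip interaction (a hedged sketch, not the authors' algorithm; all names, the greedy split, and the topology are illustrative assumptions), two endpoints of a firing edge can pool their tokens and accept a greedy repartition only when it does not increase their local makespan, which keeps the global makespan monotonically non-increasing:

```python
import random

def lpt_split(tokens):
    """Greedy largest-first repartition of pooled tokens over two nodes."""
    a, b = [], []
    for w in sorted(tokens, reverse=True):
        (a if sum(a) <= sum(b) else b).append(w)
    return a, b

def local_rebalance(ti, tj):
    """Accept the repartition only if the pair's local makespan does not
    increase; the global makespan then never increases."""
    a, b = lpt_split(ti + tj)
    if max(sum(a), sum(b)) <= max(sum(ti), sum(tj)):
        return a, b
    return ti, tj

def makespan(loads):
    return max(sum(tokens) for tokens in loads)

rng = random.Random(0)
# ring of 4 nodes; weighted tokens start concentrated on node 0
loads = [[5, 3, 2, 2, 1, 1], [], [], []]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
before = makespan(loads)
for _ in range(50):
    i, j = rng.choice(edges)   # gossip-like asynchronous local interaction
    loads[i], loads[j] = local_rebalance(loads[i], loads[j])
after = makespan(loads)
```

The accept-only-if-not-worse rule is what makes the decrease monotonic; a plain greedy split alone can transiently worsen a balanced pair.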
On Endogenous Random Consensus and Averaging Dynamics
Motivated by various random variations of the Hegselmann-Krause model for opinion dynamics and by gossip algorithms in endogenously changing environments, we propose a general framework for the study of endogenously varying random averaging dynamics, i.e., averaging dynamics whose evolution is subject to history-dependent sources of randomness. We show that under general assumptions on the averaging dynamics, such dynamics are almost surely convergent. We also characterize their limiting behavior and show that such dynamics admit infinitely many time-varying Lyapunov functions.
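A minimal sketch of an endogenously varying averaging dynamics, assuming a randomized Hegselmann-Krause-type update (the confidence radius, activation probability, and initial opinions are illustrative choices, not taken from the paper): the averaging weights depend on the current state, so the randomness is history dependent.

```python
import random

def hk_step(x, eps, rng):
    """One endogenous averaging step: each activated agent averages over
    the agents currently within its confidence radius eps, so the random
    averaging matrix depends on the state history."""
    nxt = []
    for xi in x:
        if rng.random() < 0.5:            # random asynchronous activation
            nxt.append(xi)
        else:
            nbr = [xj for xj in x if abs(xj - xi) <= eps]
            nxt.append(sum(nbr) / len(nbr))
    return nxt

rng = random.Random(1)
x = [0.0, 0.1, 0.2, 0.9, 1.0]
spread0 = max(x) - min(x)
for _ in range(200):
    x = hk_step(x, eps=0.25, rng=rng)
# each new value lies in the convex hull of current values, so the spread
# of opinions never grows; nearby agents merge into clusters
```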
Exponentially Fast Parameter Estimation in Networks Using Distributed Dual Averaging
In this paper we present an optimization-based view of distributed parameter estimation and observational social learning in networks. Agents receive a sequence of random, independent and identically distributed (i.i.d.) signals, each of which individually may not be informative about the underlying true state, but which together are globally informative enough to make the true state identifiable. Using an optimization-based characterization of Bayesian learning as proximal stochastic gradient descent (with the Kullback-Leibler divergence from a prior as the proximal function), we show how to efficiently use a distributed, online variant of Nesterov's dual averaging method to solve the estimation problem with purely local information. When the true state is globally identifiable and the network is connected, we prove that agents eventually learn the true parameter using a randomized gossip scheme. We demonstrate that with high probability the convergence is exponentially fast, with a rate that depends on the KL divergence of observations under the true state from observations under the second likeliest state. Furthermore, our work highlights the possibility of learning under continuous adaptation of the network, a consequence of employing a constant, unit stepsize in the algorithm.
Comment: 6 pages, to appear in Conference on Decision and Control 201
Asynchrony and Acceleration in Gossip Algorithms
This paper considers the minimization of a sum of smooth and strongly convex
functions dispatched over the nodes of a communication network. Previous works
on the subject either focus on synchronous algorithms, which can be heavily
slowed down by a few slow nodes (the straggler problem), or consider a model of
asynchronous operation (Boyd et al., 2006) in which adjacent nodes communicate
at the instants of Poisson point processes. We have two main contributions. 1)
We propose CACDM (a Continuously Accelerated Coordinate Dual Method), and for
the Poisson model of asynchronous operation, we prove CACDM to converge to
optimality at an accelerated convergence rate in the sense of Nesterov and
Stich (2017). In contrast, previously proposed asynchronous algorithms have not
been proven to achieve such an accelerated rate. While CACDM is based on discrete
updates, the proof of its convergence crucially depends on a continuous time
analysis. 2) We introduce a new communication scheme based on Loss-Networks,
that is programmable in a fully asynchronous and decentralized way, unlike the
Poisson model of asynchronous operation that does not capture essential aspects
of asynchrony such as non-instantaneous communications and computations. Under
this Loss-Network model of asynchrony, we establish for CDM (a Coordinate Dual
Method) a rate of convergence in terms of the eigengap of the Laplacian of the
graph weighted by local effective delays. We believe this eigengap to be a
fundamental bottleneck for convergence rates of asynchronous optimization.
Finally, we verify empirically that CACDM enjoys an accelerated convergence
rate in the Loss-Network model of asynchrony.
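The Poisson model of asynchronous operation discussed above (Boyd et al., 2006) can be sketched for plain gossip averaging as follows (the clock rate, horizon, graph, and initial values are illustrative assumptions): each edge carries an independent rate-1 Poisson clock, and when an edge fires its endpoints average their values.

```python
import random

def poisson_gossip(x, edges, horizon, rng):
    """Asynchronous gossip under rate-1 Poisson clocks on every edge: the
    superposed process fires at rate len(edges), and each firing selects a
    uniformly random edge whose endpoints average their values."""
    x = list(x)
    t = 0.0
    while True:
        t += rng.expovariate(len(edges))   # next global firing time
        if t > horizon:
            return x
        i, j = rng.choice(edges)
        x[i] = x[j] = (x[i] + x[j]) / 2    # mean-preserving pairwise average

rng = random.Random(3)
x0 = [0.0, 4.0, 8.0, 12.0]                 # global mean is 6.0
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
x = poisson_gossip(x0, edges, horizon=200.0, rng=rng)
```

Note what this model leaves out, per the critique above: every averaging event is instantaneous, so non-instantaneous communications and computations are not captured.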
Distributed estimation over a low-cost sensor network: a review of state-of-the-art
Proliferation of low-cost, lightweight, and power-efficient sensors, together with advances in networked systems, enables the deployment of multiple sensors. Distributed estimation provides a scalable and fault-robust fusion framework with a peer-to-peer communication architecture. For this reason, there is a real need for a critical review of existing and, more importantly, recent advances in distributed estimation over low-cost sensor networks. This paper presents a comprehensive review of state-of-the-art solutions in this research area, exploring their characteristics, advantages, and challenging issues. Additionally, several open problems and future avenues of research are highlighted.