
    On Distributed Averaging Algorithms and Quantization Effects

    We consider distributed iterative algorithms for the averaging problem over time-varying topologies. Our focus is on the convergence time of such algorithms when complete (unquantized) information is available, and on the degradation of performance when only quantized information is available. We study a large and natural class of averaging algorithms, which includes the vast majority of algorithms proposed to date, and provide tight polynomial bounds on their convergence time. We also describe an algorithm within this class whose convergence time is the best among currently available averaging algorithms for time-varying topologies. We then propose and analyze distributed averaging algorithms under the additional constraint that agents can only store and communicate quantized information, so that they can only converge to the average of the initial values of the agents within some error. We establish bounds on the error and tight bounds on the convergence time, as a function of the number of quantization levels.
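
    As a rough illustration of the two regimes the abstract contrasts, the sketch below (illustrative, not the paper's specific algorithm; the weight matrix W, the grid step delta, and the function names are assumptions) shows one synchronous round of linear averaging with a doubly stochastic weight matrix, plus a naive quantized variant that rounds each state to a grid after the update:

        import numpy as np

        def average_step(x, W):
            # One synchronous averaging round, x <- W x, with W doubly
            # stochastic so the fixed point is the average of the initial values.
            return W @ x

        def quantized_average_step(x, W, delta=1.0):
            # Same update, but agents can only store and communicate multiples
            # of delta, so the result is rounded to the grid delta * Z. Rounding
            # breaks exact sum preservation, which is why quantized schemes
            # converge only to within some error of the true average.
            return np.round((W @ x) / delta) * delta

    Iterating average_step on a connected topology drives all states to the exact average; iterating the quantized variant stalls near agreement, with an error that shrinks as the number of quantization levels grows, mirroring the bounds described above.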

    On Convergence Rate of Scalar Hegselmann-Krause Dynamics

    In this work, we derive a new upper bound on the termination time of the Hegselmann-Krause model for opinion dynamics. Using a novel method, we show that the dynamics terminates in no more than O(n^3) steps, which improves the best known upper bound of O(n^4) by a factor of n.
    Comment: 5 pages, 2 figures, submitted to The American Control Conference, Sep. 201
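
    For concreteness, the scalar Hegselmann-Krause update is easy to state: at each step, every agent moves to the average of all opinions within a confidence bound eps of its own, and the dynamics terminates when no opinion changes. A minimal simulation sketch follows (the confidence bound eps, the step cap, and the function names are illustrative choices):

        import numpy as np

        def hk_step(x, eps=1.0):
            # Each agent averages all opinions within distance eps of its own;
            # its own opinion always qualifies, so the mean is well defined.
            return np.array([x[np.abs(x - xi) <= eps].mean() for xi in x])

        def hk_termination_time(x0, eps=1.0, max_steps=1_000_000):
            # Iterate until a fixed point is reached; the bound above says this
            # happens within O(n^3) steps for n agents. (In floating point one
            # may prefer an equality test with a small tolerance.)
            x = np.asarray(x0, dtype=float)
            for t in range(max_steps):
                x_next = hk_step(x, eps)
                if np.array_equal(x_next, x):
                    return t
                x = x_next
            return max_steps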

    An Upper Bound on the Convergence Time for Quantized Consensus of Arbitrary Static Graphs

    We analyze a class of distributed quantized consensus algorithms for arbitrary static networks. In the initial setting, each node in the network has an integer value. Nodes exchange their current estimates of the mean value in the network and update them by communicating with their neighbors over a limited-capacity channel in an asynchronous clock setting. Eventually, all nodes reach consensus with quantized precision. We analyze the expected convergence time for the general quantized consensus algorithm proposed by Kashyap et al. \cite{Kashyap}. We use the theory of electric networks, random walks, and couplings of Markov chains to derive an O(N^3 log N) upper bound on the expected convergence time for an arbitrary graph of size N, improving on the state-of-the-art bound of O(N^5) for quantized consensus algorithms. Our result does not depend on the graph topology. An example on complete graphs shows how to extend the analysis to graphs of a given topology.
    Comment: to appear in IEEE Trans. on Automatic Control, January 2015. arXiv admin note: substantial text overlap with arXiv:1208.078
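
    As a simplified sketch of this style of algorithm (not a faithful reproduction of the scheme in \cite{Kashyap}; the function name, the random tie-breaking, and the termination test are assumptions), each asynchronous tick activates one random edge, and the two endpoints re-split their integer sum as evenly as possible. This preserves the total, so the leftover units perform a random walk until all values agree to within one quantization level:

        import random

        def quantized_consensus(values, edges, max_steps=1_000_000, seed=0):
            # values: initial integer states; edges: (i, j) pairs of a connected
            # graph. Each update preserves the integer sum, so the states can
            # only converge to within one unit of the true average.
            rng = random.Random(seed)
            x = list(values)
            for t in range(1, max_steps + 1):
                i, j = rng.choice(edges)      # one edge wakes up per tick
                s = x[i] + x[j]
                lo, hi = s // 2, s - s // 2   # split the sum as evenly as possible
                x[i], x[j] = (lo, hi) if rng.random() < 0.5 else (hi, lo)
                if max(x) - min(x) <= 1:      # quantized consensus reached
                    return x, t
            return x, max_steps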

    Exponentially Fast Parameter Estimation in Networks Using Distributed Dual Averaging

    In this paper we present an optimization-based view of distributed parameter estimation and observational social learning in networks. Agents receive a sequence of random, independent and identically distributed (i.i.d.) signals, each of which individually may not be informative about the underlying true state, but the signals together are globally informative enough to make the true state identifiable. Using an optimization-based characterization of Bayesian learning as proximal stochastic gradient descent (with the Kullback-Leibler divergence from a prior as the proximal function), we show how to efficiently use a distributed, online variant of Nesterov's dual averaging method to solve the estimation problem with purely local information. When the true state is globally identifiable and the network is connected, we prove that agents eventually learn the true parameter using a randomized gossip scheme. We demonstrate that, with high probability, the convergence is exponentially fast, with a rate dependent on the KL divergence of observations under the true state from observations under the second likeliest state. Furthermore, our work highlights the possibility of learning under continuous adaptation of the network, which is a consequence of employing a constant, unit step size in the algorithm.
    Comment: 6 pages, to appear in Conference on Decision and Control 201
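
    The mechanics can be sketched in a simplified synchronous form (an assumption for brevity: the paper analyzes a randomized gossip scheme, whereas this sketch mixes with a fixed doubly stochastic matrix on a ring, and the likelihoods and sizes are illustrative). With a unit step size and a KL proximal term, the dual averaging update reduces to accumulating log-likelihoods, gossiping them, and taking a softmax to recover beliefs:

        import numpy as np

        rng = np.random.default_rng(0)
        n, true_state = 8, 1                 # agents on a ring, two candidate states

        # Per-agent probability of observing a 1 under each candidate state.
        # Agents whose two likelihoods nearly coincide are individually
        # uninformative, but together the true state is identifiable.
        lik = np.vstack([np.full(n, 0.5), np.linspace(0.3, 0.7, n)])

        # Doubly stochastic gossip matrix for the ring (lazy averaging).
        W = np.zeros((n, n))
        for i in range(n):
            W[i, i] = 0.5
            W[i, (i - 1) % n] = 0.25
            W[i, (i + 1) % n] = 0.25

        z = np.zeros((n, 2))                 # dual variables: accumulated log-likelihoods
        for t in range(500):
            obs = rng.random(n) < lik[true_state]      # i.i.d. private signals
            ll = np.where(obs[:, None], np.log(lik.T), np.log1p(-lik.T))
            z = W @ (z + ll)                 # gossip plus unit-step dual update
        beliefs = np.exp(z - z.max(axis=1, keepdims=True))
        beliefs /= beliefs.sum(axis=1, keepdims=True)  # KL-prox mirror step = softmax

    After enough rounds, beliefs[:, true_state] approaches 1 for every agent, at a rate governed by the KL divergences between observation distributions, as the result above describes.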