
    An Upper Bound on the Convergence Time for Quantized Consensus of Arbitrary Static Graphs

    We analyze a class of distributed quantized consensus algorithms for arbitrary static networks. Initially, each node in the network holds an integer value. Nodes exchange their current estimates of the network-wide mean and update them by communicating with their neighbors over limited-capacity channels under an asynchronous clock model. Eventually, all nodes reach consensus up to quantized precision. We analyze the expected convergence time of the general quantized consensus algorithm proposed by Kashyap et al. \cite{Kashyap}. Using the theory of electric networks, random walks, and couplings of Markov chains, we derive an $O(N^3 \log N)$ upper bound on the expected convergence time for an arbitrary graph of size $N$, improving on the state-of-the-art bound of $O(N^5)$ for quantized consensus algorithms. Our result does not depend on the graph topology. An example with complete graphs shows how to extend the analysis to graphs of a given topology.
    Comment: to appear in IEEE Trans. on Automatic Control, January 2015. arXiv admin note: substantial text overlap with arXiv:1208.078
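
    To make the local interaction rule concrete, the following minimal Python simulation is a sketch in the spirit of the Kashyap et al. algorithm, not the authors' code: at each asynchronous tick a random edge is activated; the two endpoint values are averaged as evenly as integers allow when they differ by two or more, and swapped otherwise (the swap is what lets excess values random-walk through the graph). The graph, initial values, and tick budget are illustrative assumptions.

        import random

        def quantized_gossip(values, edges, max_ticks=100000, seed=0):
            """Asynchronous quantized gossip on an undirected edge list."""
            rng = random.Random(seed)
            v = list(values)
            for _ in range(max_ticks):
                if max(v) - min(v) <= 1:   # consensus with quantized precision
                    break
                i, j = rng.choice(edges)   # random pairwise meeting (asynchronous clock)
                if abs(v[i] - v[j]) >= 2:
                    s = v[i] + v[j]
                    v[i], v[j] = s // 2, s - s // 2   # average, as evenly as integers allow
                else:
                    v[i], v[j] = v[j], v[i]           # swap, so values random-walk
            return v

        # 4-cycle with integer initial values; the sum (16) is preserved throughout.
        print(quantized_gossip([10, 0, 4, 2], [(0, 1), (1, 2), (2, 3), (3, 0)]))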

    Quantized Consensus by the Alternating Direction Method of Multipliers: Algorithms and Applications

    Collaborative in-network processing is a major tenet in the fields of control, signal processing, information theory, and computer science. Agents operating in a coordinated fashion can achieve greater efficiency and operational capability than those performing solo missions. In many such applications the central task is to compute the global average of the agents' data in a distributed manner. Much recent attention has been devoted to quantized consensus, where, due to practical constraints, only quantized communications are allowed between neighboring nodes in order to achieve average consensus. This dissertation aims to develop efficient quantized consensus algorithms based on the alternating direction method of multipliers (ADMM) for networked applications, and in particular, consensus-based detection in large-scale sensor networks.

    We study the effects of two commonly used uniform quantization schemes, dithered and deterministic quantization, on an ADMM-based distributed averaging algorithm. With dithered quantization, this algorithm yields linear convergence to the desired average in the mean sense with a bounded variance. When deterministic quantization is employed, the distributed ADMM either converges to a consensus or cycles with a finite period after a finite number of iterations. In the cyclic case, the local quantized variables have the same sample mean over one period, so each node can still reach a consensus. We then obtain an upper bound on the consensus error that depends only on the quantization resolution and the average degree of the network; this is desirable in large-scale networks, where the range of the agents' data and the size of the network may be large.

    Noticing that existing quantized consensus algorithms, including the above two, adopt infinite-bit quantizers unless a bound on the agents' data is known a priori, we further develop an ADMM-based quantized consensus algorithm that uses finite-bit bounded quantizers for possibly unbounded agents' data. By picking a small enough ADMM step size, this algorithm obtains the same consensus result as the unbounded deterministic quantizer. We then apply this algorithm to distributed detection in connected sensor networks, where each node can exchange information only with its direct neighbors. We establish that, with each node employing an identical one-bit quantizer for local information exchange, our approach achieves the optimal asymptotic performance of centralized detection. This statement holds under three different detection frameworks: the Bayesian criterion, where the maximum a posteriori detector is optimal; the Neyman-Pearson criterion with a constant type-I error constraint; and the Neyman-Pearson criterion with an exponential type-I error constraint. The key to achieving the optimal asymptotic performance is the use of a one-bit deterministic quantizer with a controllable threshold that yields the desired consensus error bounds.
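
    As a concrete illustration of the first study above, this hedged Python sketch contrasts the two uniform quantizers; the resolution DELTA is an assumed parameter. The point is that a dither u drawn uniformly from [-DELTA/2, DELTA/2) makes the quantizer unbiased, E[Q(x)] = x, which is the property behind convergence to the average in the mean sense, whereas the deterministic quantizer carries a bias of up to DELTA/2.

        import random

        DELTA = 0.5  # assumed quantization resolution

        def deterministic_quantize(x, delta=DELTA):
            # Nearest lattice point; the error is deterministic, up to delta/2.
            return delta * round(x / delta)

        def dithered_quantize(x, delta=DELTA, rng=random):
            # Dither spanning one quantization bin makes the output unbiased.
            u = rng.uniform(-delta / 2, delta / 2)
            return delta * round((x + u) / delta)

        x = 0.3
        samples = [dithered_quantize(x) for _ in range(100000)]
        print(deterministic_quantize(x))    # always 0.5 for x = 0.3
        print(sum(samples) / len(samples))  # close to 0.3: unbiased in the mean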

    Fast Discrete Consensus Based on Gossip for Makespan Minimization in Networked Systems

    In this paper we propose a novel algorithm for the discrete consensus problem, i.e., the problem of distributing a set of tokens of arbitrary weight evenly among the nodes of a networked system. Tokens represent tasks to be executed by the nodes, and the proposed distributed algorithm monotonically minimizes the makespan of the assigned tasks. The algorithm is based on gossip-like asynchronous local interactions between nodes, as sketched below. Its convergence time improves on the state of the art for discrete and quantized consensus by at least a factor of O(n), in both theoretical and empirical comparisons.
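
    The sketch below illustrates one such gossip-style local interaction in Python: two neighboring nodes pool their weighted tokens and repartition them with a greedy largest-first rule, keeping the new partition only if it does not worsen the local makespan. The greedy rule is an illustrative stand-in under these assumptions, not necessarily the paper's exact interaction rule.

        def balance_pair(tokens_i, tokens_j):
            """One local interaction: repartition the pooled tokens of two nodes."""
            pool = sorted(tokens_i + tokens_j, reverse=True)
            a, b = [], []
            for w in pool:
                # Greedy largest-first: give each token to the currently lighter node.
                (a if sum(a) <= sum(b) else b).append(w)
            # Accept only if the local makespan does not increase, so the global
            # makespan is monotonically non-increasing over interactions.
            if max(sum(a), sum(b)) <= max(sum(tokens_i), sum(tokens_j)):
                return a, b
            return tokens_i, tokens_j

        ni, nj = balance_pair([7, 1, 2], [2, 3])  # loads 10 and 5
        print(ni, nj, max(sum(ni), sum(nj)))      # local makespan drops to 8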

    Quantized Consensus ADMM for Multi-Agent Distributed Optimization

    Multi-agent distributed optimization over a network minimizes a global objective formed by a sum of local convex functions, using only local computation and communication. We develop and analyze a quantized distributed algorithm based on the alternating direction method of multipliers (ADMM) for settings where inter-agent communications are subject to finite capacity and other practical constraints. While existing quantized ADMM approaches only work for quadratic local objectives, the proposed algorithm can handle more general (possibly non-smooth) objective functions, including the LASSO. Under certain convexity assumptions, our algorithm converges to a consensus within $\log_{1+\eta}\Omega$ iterations, where $\eta>0$ depends on the local objectives and the network topology, and $\Omega$ is a polynomial determined by the quantization resolution, the distance between the initial and optimal variable values, the local objective functions, and the network topology. A tight upper bound on the consensus error is also obtained, and it does not depend on the size of the network.
    Comment: 30 pages, 4 figures; to be submitted to IEEE Trans. Signal Processing. arXiv admin note: text overlap with arXiv:1307.5561 by other authors
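
    To illustrate how a non-smooth local objective such as the LASSO enters the recursion, here is a hedged sketch of the proximal x-update a node might solve when its local objective is lam*|x| plus a quadratic data term; the minimizer has a closed-form soft-thresholding expression. The objective form and all names (a, alpha, m, lam, rho, d) are illustrative assumptions, not the paper's notation.

        def soft_threshold(v, t):
            # Proximal operator of t*|x|, evaluated at v.
            return (v - t) if v > t else (v + t) if v < -t else 0.0

        def lasso_x_update(a, alpha, m, lam, rho, d):
            # Closed-form minimizer of
            #   lam*|x| + (x - a)**2 / 2 + alpha*x + rho*d*(x - m)**2,
            # where alpha is the local dual variable, m an average of neighbor
            # values, rho the ADMM step size, and d the node degree (assumed form).
            c = 1.0 + 2.0 * rho * d
            return soft_threshold((a - alpha + 2.0 * rho * d * m) / c, lam / c)

        print(lasso_x_update(a=1.0, alpha=0.0, m=0.5, lam=0.3, rho=1.0, d=2))  # about 0.54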

    Design and Analysis of Distributed Averaging with Quantized Communication

    Consider a network whose nodes hold some initial values, and suppose we wish to design an algorithm that builds on neighbor-to-neighbor interactions with the ultimate goal of converging to the average of all initial node values, or to some value close to that average. Such algorithms are generically called "distributed averaging," and our goal in this paper is to study the performance of a subclass of deterministic distributed averaging algorithms in which the information exchange between neighboring nodes (agents) is subject to uniform quantization. Under such quantization, convergence to the exact average cannot be achieved in general; instead, convergence is to some value close to it, called the quantized consensus. Using Lyapunov stability analysis, we characterize the convergence properties of the resulting nonlinear quantized system. We show that in finite time, depending on the initial conditions, the algorithm either causes all agents to reach a quantized consensus, where the consensus value is the largest quantized value not greater than the average of the initial values, or leads all variables to cycle in a small neighborhood around the average. In the latter case, we identify tight bounds on the size of the neighborhood and further show that the error can be made arbitrarily small by adjusting the algorithm's parameters in a distributed manner.
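
    A minimal synchronous Python sketch in the spirit of this scheme is given below; only floor-quantized values cross the links, and the uniform step size eps is an assumed stand-in for the paper's weights. When all quantized values agree, the iteration is stationary (the quantized-consensus outcome); otherwise the values can keep cycling near the average, matching the second case above.

        import math

        def quantized_averaging(x0, neighbors, eps=0.2, iters=100):
            x = list(x0)
            for _ in range(iters):
                q = [math.floor(v) for v in x]   # only quantized values are exchanged
                # Symmetric neighbor updates preserve the sum, hence the average.
                x = [x[i] + eps * sum(q[j] - q[i] for j in neighbors[i])
                     for i in range(len(x))]
            return x

        # Path graph on 4 nodes; the average is 2.75, so the quantized consensus
        # value, if reached, is the largest quantized value below it, namely 2.
        print(quantized_averaging([1.0, 2.0, 3.0, 5.0], [[1], [0, 2], [1, 3], [2]]))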