
    Design and Analysis of Distributed Averaging with Quantized Communication

    Consider a network whose nodes have some initial values, and suppose we wish to design an algorithm that builds on neighbor-to-neighbor interactions with the ultimate goal of convergence to the average of all initial node values, or to some value close to that average. Such an algorithm is generically called "distributed averaging," and our goal in this paper is to study the performance of a subclass of deterministic distributed averaging algorithms in which the information exchange between neighboring nodes (agents) is subject to uniform quantization. With such quantization, convergence to the precise average cannot be achieved in general; instead, convergence is to some value close to it, called quantized consensus. Using Lyapunov stability analysis, we characterize the convergence properties of the resulting nonlinear quantized system. We show that in finite time, and depending on initial conditions, the algorithm will either cause all agents to reach a quantized consensus, where the consensus value is the largest quantized value not greater than the average of their initial values, or will lead all variables to cycle in a small neighborhood around the average. In the latter case, we identify tight bounds for the size of the neighborhood, and we further show that the error can be made arbitrarily small by adjusting the algorithm's parameters in a distributed manner.
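
    The abstract does not spell out the update law, so a minimal Python sketch of the general scheme it describes (neighbors exchange uniformly quantized values and take a linear consensus step) is given below; the specific update rule and the parameters delta and eps are illustrative assumptions, not the paper's exact algorithm.

        import numpy as np

        def quantize(x, delta):
            # Uniform quantizer: largest multiple of delta not greater than x.
            return delta * np.floor(x / delta)

        def quantized_averaging(x0, adjacency, delta=0.1, eps=0.1, steps=500):
            # Illustrative consensus iteration driven only by quantized values:
            #   x_i <- x_i + eps * sum_j A_ij * (q(x_j) - q(x_i))
            x = np.asarray(x0, dtype=float).copy()
            deg = adjacency.sum(axis=1)
            for _ in range(steps):
                q = quantize(x, delta)  # the only information exchanged
                x = x + eps * (adjacency @ q - deg * q)
            return x

        # 4-node ring; the exact average of the initial values is 1.25
        A = np.array([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]], dtype=float)
        print(quantized_averaging([0.3, 1.7, 2.2, 0.8], A))

    Because only the quantized values are exchanged, such an iteration either settles at a quantized level near the average or cycles in a small neighborhood around it, mirroring the two outcomes characterized above.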

    Distributed Average Consensus under Quantized Communication via Event-Triggered Mass Summation

    We study distributed average consensus problems in multi-agent systems with directed communication links that are subject to quantized information flow. The goal of distributed average consensus is for the nodes, each associated with some initial value, to obtain the average (or some value close to the average) of these initial values. In this paper, we present and analyze a distributed averaging algorithm which operates exclusively on quantized values (specifically, the information stored, processed, and exchanged between neighboring agents is subject to deterministic uniform quantization) and relies on event-driven updates (e.g., to reduce energy consumption, communication bandwidth, network congestion, and/or processor usage). We characterize the properties of the proposed distributed averaging protocol and show that its execution, on any time-invariant and strongly connected digraph, allows all agents to reach, in finite time, a common consensus value, represented as the ratio of two integers, that is equal to the exact average. We conclude with examples that illustrate the operation, performance, and potential advantages of the proposed algorithm.
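
    The event-triggered protocol itself is not given in the abstract, but its core "mass summation" invariant can be sketched: each node starts with an integer pair (y_i, z_i) = (x_i, 1), transfers and merges conserve both sums, and a fully merged mass encodes the exact average as a ratio of two integers. A minimal Python sketch of that invariant follows, with the trigger rules and digraph routing omitted since they are not described in the abstract.

        from fractions import Fraction

        # Each node i starts with the integer "mass" pair (y_i, z_i) = (x_i, 1).
        # Transfers and merges conserve sum(y) and sum(z), so a fully merged
        # mass encodes the exact average as a ratio of two integers.

        def merge(a, b):
            # Merging two masses adds numerators and denominators separately.
            return (a[0] + b[0], a[1] + b[1])

        x0 = [3, 8, 5, 6]             # integer initial values
        masses = [(x, 1) for x in x0]

        total = masses[0]
        for m in masses[1:]:
            total = merge(total, m)

        print(total, "=", Fraction(*total))  # (22, 4) = 11/2, the exact average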

    Robust and Communication-Efficient Collaborative Learning

    We consider a decentralized learning problem, where a set of computing nodes aims to solve a non-convex optimization problem collaboratively. It is well known that decentralized optimization schemes face two major system bottlenecks: stragglers' delay and communication overhead. In this paper, we tackle these bottlenecks by proposing a novel decentralized, gradient-based optimization algorithm named QuanTimed-DSGD. Our algorithm stands on two main ideas: (i) we impose a deadline on the local gradient computations of each node at each iteration of the algorithm, and (ii) the nodes exchange quantized versions of their local models. The first idea makes the algorithm robust to straggling nodes, and the second reduces the communication overhead. The key technical contribution of our work is to prove that, despite the non-vanishing noise from quantization and stochastic gradients, the proposed method converges exactly to the global optimum for convex loss functions, and finds a first-order stationary point in non-convex scenarios. Our numerical evaluations of QuanTimed-DSGD on training benchmark datasets, MNIST and CIFAR-10, demonstrate speedups of up to 3x in run-time compared to state-of-the-art decentralized optimization methods.
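
    A minimal sketch of one QuanTimed-DSGD-style iteration, assuming only the two ideas stated above (a wall-clock deadline on local gradient sampling and exchange of uniformly quantized models); the function names, quantizer, and step sizes here are illustrative, not the paper's exact method.

        import time
        import numpy as np

        def quantize(v, delta=0.05):
            # Illustrative deterministic uniform quantizer for exchanged models.
            return delta * np.round(v / delta)

        def quantimed_dsgd_step(w, neighbor_models, grad_fn, deadline=0.005,
                                lr=0.05, mix=0.5):
            # (i) Accumulate stochastic gradients until the deadline expires,
            # so stragglers contribute fewer samples instead of stalling the round.
            g, n, t0 = np.zeros_like(w), 0, time.perf_counter()
            while time.perf_counter() - t0 < deadline:
                g += grad_fn(w)
                n += 1
            g /= max(n, 1)
            # (ii) Mix the local model with the average of quantized neighbor models.
            q_avg = np.mean([quantize(v) for v in neighbor_models], axis=0)
            w_mixed = (1 - mix) * w + mix * q_avg
            # (iii) Take a local gradient step.
            return w_mixed - lr * g

        # Toy usage: noisy gradients of f(w) = 0.5 * ||w - w_star||^2
        rng = np.random.default_rng(0)
        w_star = np.array([1.0, -2.0])
        grad_fn = lambda w: (w - w_star) + 0.1 * rng.standard_normal(w.shape)

        w = np.zeros(2)
        neighbors = [w.copy(), w.copy()]
        for _ in range(100):
            w = quantimed_dsgd_step(w, neighbors, grad_fn)
            neighbors = [w.copy(), w.copy()]  # stand-in for real peers
        print(w)  # approaches w_star up to quantization and noise

    The deadline fixes the wall-clock cost of every iteration regardless of stragglers, while quantization shrinks each exchanged message, addressing the two bottlenecks the abstract targets.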