
    Convergence Speed of the Consensus Algorithm with Interference and Sparse Long-Range Connectivity

    We analyze the effect of interference on the convergence rate of average consensus algorithms, which iteratively compute the measurement average by message passing among nodes. It is usually assumed that these algorithms converge faster with a greater exchange of information (i.e., with increased network connectivity) in every iteration. However, when interference is taken into account, it is no longer clear whether the rate of convergence increases with network connectivity. We study this problem for randomly placed consensus-seeking nodes connected through an interference-limited network. We investigate the following questions: (a) How does the rate of convergence vary with increasing communication range of each node? and (b) How does this result change when each node is allowed to communicate with a few selected far-off nodes? When nodes schedule their transmissions to avoid interference, we show that the convergence speed scales as r^{2-d}, where r is the communication range and d is the number of dimensions. This scaling is the result of two competing effects of increasing r: a longer schedule for interference-free transmission versus the speed gain from improved connectivity. Hence, although one-dimensional networks can converge faster with a greater communication range despite the increased interference, the two effects exactly offset one another in two dimensions. In higher dimensions, increasing the communication range can actually degrade the rate of convergence. Our results thus underline the importance of factoring interference into the design of distributed estimation algorithms.

    Comment: 27 pages, 4 figures
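    As a rough illustration of the average consensus iteration this abstract refers to, here is a minimal Python sketch; the graph, step size, and node values are hypothetical, and the paper's interference-aware scheduling is not modeled:

```python
import numpy as np

def average_consensus(x0, neighbors, num_iters=100, step=0.2):
    """Plain average consensus: each node repeatedly replaces its value
    with a weighted average of its own and its neighbors' values.
    neighbors[i] lists the nodes that node i can hear from.
    (Illustrative only: interference and scheduling are ignored.)"""
    x = np.array(x0, dtype=float)
    for _ in range(num_iters):
        x_new = x.copy()
        for i, nbrs in enumerate(neighbors):
            # Move each node toward the average of its neighborhood.
            x_new[i] += step * sum(x[j] - x[i] for j in nbrs)
        x = x_new
    return x  # entries approach mean(x0) when the graph is connected

# Example: 4 nodes on a ring with measurements [1, 2, 3, 4] -> mean 2.5
ring = [[1, 3], [0, 2], [1, 3], [0, 2]]
print(average_consensus([1.0, 2.0, 3.0, 4.0], ring))
```

    A larger communication range would add entries to each neighbors[i], speeding mixing per iteration; the paper's point is that in an interference-limited network each iteration then also takes longer to schedule.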

    Nested Distributed Gradient Methods with Adaptive Quantized Communication

    In this paper, we consider minimizing a sum of local convex objective functions in a distributed setting where communication can be costly. We propose and analyze a class of nested distributed gradient methods with adaptive quantized communication (NEAR-DGD+Q). We show the effect of performing multiple quantized communication steps on the rate of convergence and on the size of the neighborhood of convergence, and prove R-Linear convergence to the exact solution as the number of consensus steps increases and the quantization adapts. We test the performance of the method, as well as some practical variants, on quadratic functions, and show the effects of multiple quantized communication steps in terms of iterations/gradient evaluations, communication, and cost.

    Comment: 9 pages, 2 figures. arXiv admin note: text overlap with arXiv:1709.0299
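    To make the idea of multiple quantized communication steps concrete, here is a hedged Python sketch of one round of a nested gradient-plus-consensus scheme with uniform quantization. The quantizer, mixing matrix, step sizes, and toy objectives are illustrative assumptions, not the paper's exact NEAR-DGD+Q specification, and the resolution is fixed here rather than adaptive:

```python
import numpy as np

def quantize(x, delta):
    """Uniform quantizer with resolution delta (illustrative choice)."""
    return delta * np.round(x / delta)

def nested_step(x, grads, W, alpha, num_comm, delta):
    """One round: a local gradient step followed by num_comm consensus
    (averaging) steps over quantized exchanges.
    x: (n_nodes, dim) local iterates; W: doubly stochastic mixing matrix."""
    y = x - alpha * grads(x)        # local gradient step
    for _ in range(num_comm):       # nested quantized communication steps
        y = W @ quantize(y, delta)  # mix neighbors' quantized copies
    return y

# Toy problem: node i holds f_i(z) = (z - b_i)^2; the optimum is mean(b).
b = np.array([[1.0], [2.0], [3.0], [4.0]])
grads = lambda x: 2.0 * (x - b)
W = np.full((4, 4), 0.25)           # complete graph, equal weights
x = np.zeros((4, 1))
for _ in range(200):
    x = nested_step(x, grads, W, alpha=0.1, num_comm=3, delta=0.01)
print(x.ravel())                    # entries cluster near 2.5
```

    With a fixed delta the iterates only reach a neighborhood of the solution whose size depends on the quantization error; the adaptive quantization analyzed in the paper refines the resolution over time, which is what allows convergence to the exact solution.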