
    Expander Graph and Communication-Efficient Decentralized Optimization

    In this paper, we discuss how to design the graph topology to reduce the communication complexity of certain algorithms for decentralized optimization. Our goal is to minimize the total communication needed to achieve a prescribed accuracy. We discover that the so-called expander graphs are near-optimal choices. We propose three approaches to construct expander graphs for different numbers of nodes and node degrees. Our numerical results show that the performance of decentralized optimization is significantly better on expander graphs than on other regular graphs. Comment: 2016 IEEE Asilomar Conference on Signals, Systems, and Computers.
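    As a quick illustration of why such graphs are attractive (this is not one of the paper's three constructions), the sketch below generates a random d-regular graph, a standard way to obtain an expander with high probability, and compares its spectral gap to the ideal Ramanujan value 1 − 2√(d−1)/d; all sizes and parameters are illustrative.

```python
# A rough check, not one of the paper's constructions: random d-regular
# graphs are expanders with high probability.
import numpy as np
import networkx as nx

def spectral_gap(G, d):
    """Return 1 - |lambda_2| for the degree-normalized adjacency A/d."""
    A = nx.to_numpy_array(G) / d
    eigs = np.sort(np.abs(np.linalg.eigvalsh(A)))[::-1]
    return 1.0 - eigs[1]

n, d = 100, 6                                  # illustrative node count and degree
G = nx.random_regular_graph(d, n, seed=0)
gap = spectral_gap(G, d)
ramanujan = 1.0 - 2.0 * np.sqrt(d - 1) / d     # gap of an ideal Ramanujan graph
print(f"spectral gap: {gap:.3f} (Ramanujan reference: {ramanujan:.3f})")
```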

    Chebyshev Polynomial Approximation for Distributed Signal Processing

    Unions of graph Fourier multipliers are an important class of linear operators for processing signals defined on graphs. We present a novel method to efficiently distribute the application of these operators to the high-dimensional signals collected by sensor networks. The proposed method features approximations of the graph Fourier multipliers by shifted Chebyshev polynomials, whose recurrence relations make them readily amenable to distributed computation. We demonstrate how the proposed method can be used in a distributed denoising task, and show that the communication requirements of the method scale gracefully with the size of the network. Comment: 8 pages, 5 figures, to appear in the Proceedings of the IEEE International Conference on Distributed Computing in Sensor Systems (DCOSS), June 2011, Barcelona, Spain.
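    The core mechanism can be sketched as follows, under simplifying assumptions: a filter g(λ) is fitted with Chebyshev coefficients on [0, λ_max], and the three-term recurrence T_k(L̃)x = 2 L̃ T_{k−1}(L̃)x − T_{k−2}(L̃)x is applied to the scaled Laplacian L̃ = (2/λ_max)L − I. Each recurrence step is one sparse multiply by L̃, which in a sensor network corresponds to one exchange of values between neighboring nodes. The function names and parameters below are ours, not the paper's.

```python
# A sketch of Chebyshev graph filtering; names and parameters are assumptions.
import numpy as np
import networkx as nx
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh
from numpy.polynomial import chebyshev as C

def chebyshev_filter(L, x, g, K=30):
    """Apply g(L) to the graph signal x via a degree-K Chebyshev expansion."""
    lmax = eigsh(L, k=1, return_eigenvectors=False)[0]
    grid = np.linspace(0, lmax, 1000)
    c = C.chebfit(2 * grid / lmax - 1, g(grid), K)    # fit g on [0, lmax]
    Lt = (2.0 / lmax) * L - sp.identity(L.shape[0])   # scaled Laplacian
    t_prev, t_curr = x, Lt @ x                        # T_0 x and T_1 x
    y = c[0] * t_prev + c[1] * t_curr
    for k in range(2, K + 1):
        # One sparse multiply = one round of neighbor communication.
        t_prev, t_curr = t_curr, 2 * (Lt @ t_curr) - t_prev
        y += c[k] * t_curr
    return y

# Example: heat-kernel smoothing of a random signal on a path graph.
L = sp.csr_matrix(nx.laplacian_matrix(nx.path_graph(50)), dtype=float)
x = np.random.default_rng(0).standard_normal(50)
y = chebyshev_filter(L, x, lambda lam: np.exp(-2.0 * lam))
```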

    On the Linear Convergence of the ADMM in Decentralized Consensus Optimization

    In decentralized consensus optimization, the agents of a connected network collaboratively minimize the sum of their local objective functions over a common decision variable, with information exchange restricted to neighboring agents. To this end, one can first reformulate the problem and then apply the alternating direction method of multipliers (ADMM), which alternates iterative computation at the individual agents with information exchange between neighbors. This approach has been observed to converge quickly and is considered powerful. This paper establishes its linear convergence rate for the decentralized consensus optimization problem with strongly convex local objective functions. The theoretical convergence rate is given explicitly in terms of the network topology, the properties of the local objective functions, and the algorithm parameter. This result is not only a performance guarantee but also a guideline for accelerating the ADMM's convergence. Comment: 11 figures, IEEE Transactions on Signal Processing, 2014.
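    A minimal sketch of one common form of the decentralized consensus ADMM recursion, for strongly convex local least-squares objectives f_i(x) = ½‖A_i x − b_i‖² on a fixed undirected graph; the network, synthetic data, and penalty parameter c are illustrative assumptions, not the paper's experimental setup.

```python
# Decentralized consensus ADMM sketch: each agent i keeps a primal iterate
# x_i and a dual iterate alpha_i, and communicates only with its neighbors.
import numpy as np

rng = np.random.default_rng(0)
n, p, c = 5, 3, 1.0                              # agents, dimension, penalty
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}  # path graph
A = [rng.standard_normal((10, p)) for _ in range(n)]
b = [A[i] @ np.ones(p) + 0.1 * rng.standard_normal(10) for i in range(n)]

x = [np.zeros(p) for _ in range(n)]              # primal iterates
alpha = [np.zeros(p) for _ in range(n)]          # dual iterates

for k in range(200):
    x_old = [xi.copy() for xi in x]
    for i in range(n):                           # primal step (local solve)
        d = len(neighbors[i])
        rhs = (A[i].T @ b[i] - alpha[i]
               + c * sum(x_old[i] + x_old[j] for j in neighbors[i]))
        x[i] = np.linalg.solve(A[i].T @ A[i] + 2 * c * d * np.eye(p), rhs)
    for i in range(n):                           # dual step (local update)
        alpha[i] = alpha[i] + c * sum(x[i] - x[j] for j in neighbors[i])

print("max disagreement:", max(np.linalg.norm(x[i] - x[0]) for i in range(n)))
```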

    The Statistical Performance of Collaborative Inference

    The statistical analysis of massive and complex data sets will require the development of algorithms that depend on distributed computing and collaborative inference. Inspired by this, we propose a collaborative framework that aims to estimate the unknown mean θ of a random variable X. In the model we present, a certain number of calculation units, distributed across a communication network represented by a graph, participate in the estimation of θ by sequentially receiving independent data from X while exchanging messages via a stochastic matrix A defined over the graph. We give precise conditions on the matrix A under which the statistical precision of the individual units is comparable to that of a (gold standard) virtual centralized estimate, even though each unit does not have access to all of the data. We show in particular the fundamental role played by both the non-trivial eigenvalues of A and the Ramanujan class of expander graphs, which provide remarkable performance for moderate algorithmic cost.
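    One simple instance of such a scheme, sketched under assumed choices (a ring network with uniform doubly stochastic weights), has every unit draw a fresh observation of X each round, fold it into a running estimate, and then average estimates with its neighbors through A:

```python
# Gossip-style collaborative mean estimation; the graph and weights are
# illustrative assumptions, not the paper's exact protocol.
import numpy as np

rng = np.random.default_rng(1)
n, T, theta = 8, 500, 2.0
# Doubly stochastic matrix A for a ring: each node mixes equally with
# itself and its two neighbors.
A = np.zeros((n, n))
for i in range(n):
    A[i, (i - 1) % n] = A[i, (i + 1) % n] = 1.0 / 3.0
    A[i, i] = 1.0 / 3.0

z = np.zeros(n)                              # each unit's estimate of theta
for t in range(1, T + 1):
    samples = theta + rng.standard_normal(n) # fresh data, one draw per unit
    z = z + (samples - z) / t                # online averaging step
    z = A @ z                                # one round of message exchange
print("unit estimates:", np.round(z, 3))     # all close to theta = 2.0
```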

    Distributed multi-agent Gaussian regression via finite-dimensional approximations

    We consider the problem of distributedly estimating Gaussian processes in multi-agent frameworks. Each agent collects few measurements and aims to collaboratively reconstruct a common estimate based on all the data. Agents are assumed to have limited computational and communication capabilities, and to gather M noisy measurements in total, on input locations independently drawn from a known common probability density. The optimal solution would require agents to exchange all M input locations and measurements and then invert an M × M matrix, a non-scalable task. Instead, we propose two suboptimal approaches using the first E orthonormal eigenfunctions obtained from the Karhunen–Loève (KL) expansion of the chosen kernel, where typically E ≪ M. The benefits are that the computation and communication complexities scale with E rather than M, and that computing the required statistics can be performed via standard average consensus algorithms. We obtain probabilistic non-asymptotic bounds that allow the desired level of estimation accuracy to be determined a priori, as well as new distributed strategies, relying on Stein's unbiased risk estimate (SURE) paradigms, for tuning the regularization parameters; these strategies are applicable to generic basis functions (thus not necessarily kernel eigenfunctions) and can again be implemented via average consensus. The proposed estimators and bounds are finally tested on both synthetic and real field data.
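    A minimal sketch of the E-dimensional idea, under assumptions that are ours rather than the paper's: on [0, 1] with a uniform input density we take φ_e(x) = √2 sin(πex) as an illustrative orthonormal basis, and use plain ridge regularization in place of the kernel-eigenvalue weighting. Each agent only forms an E × E matrix and an E-vector, and summing these across agents is exactly the operation that average consensus provides.

```python
# Finite-dimensional distributed regression sketch; basis and regularizer
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
E, n_agents, lam = 10, 4, 1e-2
f_true = lambda x: np.sin(2 * np.pi * x)

def phi(x):
    """Evaluate the E basis functions at inputs x: shape (len(x), E)."""
    return np.sqrt(2) * np.sin(np.pi * np.outer(x, np.arange(1, E + 1)))

G = np.zeros((E, E))                # global Phi^T Phi, accumulated agent-wise
r = np.zeros(E)                     # global Phi^T y
for _ in range(n_agents):           # in practice: average consensus, not a loop
    x = rng.uniform(0, 1, 25)       # each agent's few local input locations
    y = f_true(x) + 0.1 * rng.standard_normal(25)
    P = phi(x)
    G += P.T @ P                    # local ExE statistic
    r += P.T @ y                    # local E-vector statistic
a = np.linalg.solve(G + lam * np.eye(E), r)   # ridge estimate of coefficients

x_test = np.linspace(0, 1, 5)
print(np.round(phi(x_test) @ a, 3), np.round(f_true(x_test), 3))
```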

    Multi-hop Diffusion LMS for Energy-constrained Distributed Estimation

    We propose a multi-hop diffusion strategy for a sensor network to perform distributed least mean-squares (LMS) estimation under local and network-wide energy constraints. At each iteration of the strategy, each node can combine intermediate parameter estimates from nodes other than its physical neighbors via a multi-hop relay path. We propose a rule to select combination weights for the multi-hop neighbors that balances the transient and steady-state network mean-square deviations (MSDs). We study two classes of networks: simple networks with a unique transmission path from one node to another, and arbitrary networks utilizing diffusion consultations over at most two hops. We propose a method to optimize each node's information neighborhood subject to local energy budgets and a network-wide energy budget for each diffusion iteration. This optimization requires knowledge of the network topology and of each node's noise and data variance profiles, and is performed offline before the diffusion process. In addition, we develop a fully distributed and adaptive algorithm that approximately optimizes the information neighborhood of each node under only local energy budget constraints, in the case where diffusion consultations are performed over at most a predefined number of hops. Numerical results suggest that our proposed multi-hop diffusion strategy achieves the same steady-state MSD as the existing one-hop adapt-then-combine diffusion algorithm, but with a lower energy budget. Comment: 14 pages, 12 figures. Submitted for publication.
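    For reference, here is a minimal sketch of the one-hop adapt-then-combine (ATC) diffusion LMS baseline that the multi-hop strategy is compared against; the ring network, step size, and uniform combination weights are illustrative assumptions.

```python
# ATC diffusion LMS: each node adapts with a local LMS step, then combines
# its neighbors' intermediate estimates.
import numpy as np

rng = np.random.default_rng(3)
n, p, mu, T = 6, 4, 0.05, 2000
w_true = rng.standard_normal(p)                 # parameter to estimate
neighbors = {i: [(i - 1) % n, i, (i + 1) % n] for i in range(n)}  # ring + self

w = np.zeros((n, p))                            # each node's estimate
for t in range(T):
    psi = np.empty_like(w)
    for k in range(n):                          # adapt: local LMS step
        u = rng.standard_normal(p)              # regressor
        d = u @ w_true + 0.05 * rng.standard_normal()   # noisy measurement
        psi[k] = w[k] + mu * u * (d - u @ w[k])
    for k in range(n):                          # combine: neighbor average
        w[k] = psi[neighbors[k]].mean(axis=0)

msd = np.mean([np.sum((w[k] - w_true) ** 2) for k in range(n)])
print(f"network MSD: {msd:.2e}")
```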