
    Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling

    The goal of decentralized optimization over a network is to optimize a global objective formed by a sum of local (possibly nonsmooth) convex functions using only local computation and communication. It arises in various application domains, including distributed tracking and localization, multi-agent coordination, estimation in sensor networks, and large-scale optimization in machine learning. We develop and analyze distributed algorithms based on dual averaging of subgradients, and we provide sharp bounds on their convergence rates as a function of the network size and topology. Our method of analysis allows for a clear separation between the convergence of the optimization algorithm itself and the effects of communication constraints arising from the network structure. In particular, we show that the number of iterations required by our algorithm scales inversely in the spectral gap of the network. The sharpness of this prediction is confirmed both by theoretical lower bounds and by simulations for various networks. Our approach covers both deterministic optimization and communication, as well as problems with stochastic optimization and/or communication. (Comment: 40 pages, 4 figures)
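
    To make the dual-averaging update concrete, here is a minimal sketch (an assumed toy setup, not the authors' code) of distributed dual averaging on a one-dimensional problem: each node i holds the nonsmooth cost f_i(x) = |x - b_i|, nodes on a ring mix their accumulated subgradients through a doubly stochastic matrix, and the primal step uses the Euclidean prox psi(x) = ||x||^2 / 2 with an assumed 1/sqrt(t) step size, under which the prox step reduces to x = -alpha * z.

```python
# Minimal sketch of distributed dual averaging (assumed toy problem and step sizes).
import numpy as np

def ring_matrix(n):
    """Doubly stochastic mixing matrix for an n-node ring (self + two neighbours)."""
    P = np.zeros((n, n))
    for i in range(n):
        P[i, i] = 1.0 / 3
        P[i, (i - 1) % n] = 1.0 / 3
        P[i, (i + 1) % n] = 1.0 / 3
    return P

def distributed_dual_averaging(b, T=2000):
    n = len(b)
    P = ring_matrix(n)
    z = np.zeros(n)          # dual (accumulated subgradient) variables, one per node
    x = np.zeros(n)          # primal iterates, one per node
    x_avg = np.zeros(n)      # running averages (the quantities the rates refer to)
    for t in range(1, T + 1):
        g = np.sign(x - b)                   # local subgradients of |x_i - b_i|
        z = P @ z + g                        # mix neighbours' duals, add own subgradient
        alpha = 1.0 / np.sqrt(t)             # assumed step-size schedule
        x = -alpha * z                       # Euclidean prox step, unconstrained case
        x_avg += (x - x_avg) / t             # time-averaged iterates
    return x_avg

# All nodes should approach a minimizer of sum_i |x - b_i| (a median of the b_i).
print(distributed_dual_averaging(np.array([0.0, 1.0, 2.0, 5.0, 9.0])))
```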

    Non-Convex Distributed Optimization

    We study distributed non-convex optimization on a time-varying multi-agent network. Each node has access to its own smooth local cost function, and the collective goal is to minimize the sum of these functions. We generalize previous results to the case of non-convex functions. Under some additional technical assumptions on the gradients, we prove convergence of the distributed push-sum algorithm to a critical point of the objective function. By adding perturbations to the update process, we show almost sure convergence of the perturbed dynamics to a local minimum of the global objective function. Our analysis shows that this perturbed procedure converges at a rate of O(1/t).
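
    As a rough illustration of the perturbed push-sum dynamics (a sketch under assumed details such as the graph, cost functions, and step sizes, not the paper's algorithm): nodes on a directed ring exchange push-sum numerators and weights, form de-biased estimates z_i = x_i / w_i, and take a gradient step on an assumed non-convex local cost with a decaying noise term standing in for the perturbations.

```python
# Minimal sketch of perturbed gradient push-sum on a directed ring (assumed setup).
import numpy as np

def grad_fi(z, b_i):
    # Gradient of an assumed non-convex local cost f_i(z) = (z - b_i)^2 + sin(z).
    return 2.0 * (z - b_i) + np.cos(z)

def perturbed_push_sum(b, T=5000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(b)
    x = np.zeros(n)      # push-sum numerators
    w = np.ones(n)       # push-sum weights (denominators)
    z = x / w            # de-biased estimates the nodes actually use
    for t in range(1, T + 1):
        # Directed ring: node i keeps half of (x_i, w_i) and sends half to node i+1.
        x_new = 0.5 * x + 0.5 * np.roll(x, 1)
        w_new = 0.5 * w + 0.5 * np.roll(w, 1)
        z = x_new / w_new
        alpha = 1.0 / t                                # assumed step-size schedule
        noise = rng.normal(scale=1.0 / t, size=n)      # decaying perturbation term
        x = x_new - alpha * (grad_fi(z, b) + noise)    # perturbed local gradient step
        w = w_new
    return z

# All nodes' estimates z_i should agree and settle near a local minimum of the sum.
print(perturbed_push_sum(np.array([-1.0, 0.0, 2.0, 3.0])))
```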

    Distributed Parameter Estimation via Pseudo-likelihood

    Estimating statistical models within sensor networks requires distributed algorithms, in which both data and computation are distributed across the nodes of the network. We propose a general approach for distributed learning based on combining local estimators defined by pseudo-likelihood components, encompassing a number of combination methods, and provide both theoretical and experimental analysis. We show that simple linear-combination or max-voting methods, when combined with second-order information, are statistically competitive with more advanced and costly joint optimization. Our algorithms have many attractive properties, including low communication and computational cost and "any-time" behavior. (Comment: Appears in Proceedings of the 29th International Conference on Machine Learning, ICML 2012)
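
    A minimal sketch (an assumed Gaussian toy setup, not the authors' code) of the "linear combination with second-order information" idea: each node computes a local estimate together with an approximate Fisher information matrix, and the network combines them by precision weighting.

```python
# Minimal sketch of precision-weighted combination of local estimators (assumed toy data).
import numpy as np

def combine_precision_weighted(thetas, infos):
    """thetas: list of d-vectors; infos: list of d x d local information matrices."""
    total_info = sum(infos)
    weighted = sum(I @ th for I, th in zip(infos, thetas))
    return np.linalg.solve(total_info, weighted)

# Toy example: each node observes Gaussian data of a different sample size, so its
# information matrix is n_i * identity (unit noise variance assumed).
rng = np.random.default_rng(0)
true_theta = np.array([1.0, -2.0])
thetas, infos = [], []
for n_i in [20, 50, 200]:
    data = true_theta + rng.normal(size=(n_i, 2))
    thetas.append(data.mean(axis=0))             # local estimator
    infos.append(n_i * np.eye(2))                # local Fisher information
print(combine_precision_weighted(thetas, infos))
```

    In this Gaussian case the precision-weighted average coincides with the pooled-data estimate, which is the sense in which such simple combinations can match joint optimization.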