
    Newton-Raphson Consensus for Distributed Convex Optimization

    We address the problem of distributed unconstrained convex optimization under separability assumptions, i.e., the framework where each agent of a network is endowed with a local private multidimensional convex cost, is subject to communication constraints, and wants to collaborate to compute the minimizer of the sum of the local costs. We propose a design methodology that combines average consensus algorithms and separation of time-scales ideas. This strategy is proved, under suitable hypotheses, to be globally convergent to the true minimizer. Intuitively, the procedure lets the agents distributedly compute and sequentially update an approximated Newton-Raphson direction by means of suitable average consensus ratios. We show with numerical simulations that the speed of convergence of this strategy is comparable with that of alternative optimization strategies such as the Alternating Direction Method of Multipliers. Finally, we propose some alternative strategies which trade off communication and computational requirements against convergence speed. (Comment: 18 pages, preprint with proof)
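
    As an illustration of the consensus-ratio idea described in the abstract, the following is a minimal Python sketch of a scalar, synchronous update in the same spirit: each agent tracks consensus estimates of the network-wide averages of f_i''(x_i) x_i - f_i'(x_i) and f_i''(x_i), and moves its local estimate toward their ratio with a small step (separation of time scales). The quadratic local costs, ring topology, weights, and step size below are illustrative assumptions, not taken from the paper, and the recursion is a sketch rather than the paper's exact algorithm.

```python
import numpy as np

# Illustrative sketch (not the paper's exact recursion) of a scalar,
# synchronous Newton-Raphson-Consensus-style step: each agent i tracks, via
# average consensus, estimates y_i and z_i of the network-wide averages of
# g_i(x_i) = f_i''(x_i) * x_i - f_i'(x_i) and h_i(x_i) = f_i''(x_i); the
# ratio y_i / z_i approximates a Newton-Raphson update point, toward which
# the local estimate x_i moves with a small step eps (slow time scale).

rng = np.random.default_rng(0)
n = 10                                    # number of agents

# Hypothetical local costs f_i(x) = 0.5 * a_i * (x - c_i)^2 (strongly convex).
a = rng.uniform(0.5, 2.0, n)
c = rng.uniform(-5.0, 5.0, n)
fp  = lambda x: a * (x - c)               # local gradients  f_i'(x_i)
fpp = lambda x: a * np.ones_like(x)       # local curvatures f_i''(x_i)

# Doubly stochastic consensus weights on a ring graph (illustrative choice).
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = P[i, (i - 1) % n] = P[i, (i + 1) % n] = 1.0 / 3.0

eps = 0.05                                # small step: separation of time scales
x = np.zeros(n)
g_old = fpp(x) * x - fp(x)
h_old = fpp(x)
y, z = g_old.copy(), h_old.copy()

for _ in range(2000):
    g, h = fpp(x) * x - fp(x), fpp(x)
    y = P @ (y + g - g_old)               # consensus on the "numerator"
    z = P @ (z + h - h_old)               # consensus on the "denominator"
    g_old, h_old = g, h
    x = (1 - eps) * x + eps * y / z       # move toward the consensus ratio

x_star = np.sum(a * c) / np.sum(a)        # centralized minimizer of sum_i f_i
print(np.max(np.abs(x - x_star)))         # every agent ends up close to x_star
```

    For non-quadratic costs the consensus ratio only approximates the Newton-Raphson point, which is why the local estimates are updated on a slower time scale than the consensus iterations.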

    A Consensus Approach to Distributed Convex Optimization in Multi-Agent Systems

    In this thesis we address the problem of distributed unconstrained convex optimization under separability assumptions, i.e., the framework where a network of agents, each endowed with a local private convex cost and subject to communication constraints, wants to collaborate to compute the minimizer of the sum of the local costs. We propose a design methodology that combines average consensus algorithms and separation of time-scales ideas. This strategy is proven, under suitable hypotheses, to be globally convergent to the true minimizer. Intuitively, the procedure lets the agents distributedly compute and sequentially update an approximated Newton-Raphson direction by means of suitable average consensus ratios. We consider both a scalar and a multidimensional scenario of the Synchronous Newton-Raphson Consensus, proposing some alternative strategies which trade off communication and computational requirements against convergence speed. We provide analytical proofs of convergence and show with numerical simulations that the speed of convergence of this strategy is comparable with that of alternative optimization strategies such as the Alternating Direction Method of Multipliers, the Distributed Subgradient Method, and the Distributed Control Method. Moreover, we study the convergence rates of the Synchronous Newton-Raphson Consensus and the Gradient Descent Consensus under the simplifying assumption of quadratic local cost functions. We derive sufficient conditions which guarantee the convergence of the algorithms, and from these conditions we obtain closed-form expressions that can be used to tune the parameters for maximizing the rate of convergence. Although these formulas have been derived under the assumption of quadratic local cost functions, they can be used as rules of thumb for tuning the parameters of the algorithms. Finally, we propose an asynchronous version of the Newton-Raphson Consensus. Besides having low computational complexity and low communication requirements, and being interpretable as a distributed Newton-Raphson algorithm, the technique also has the beneficial properties of requiring very little coordination and naturally supporting time-varying topologies. Again, we analytically prove that, under some assumptions, it exhibits either local or global convergence properties. Through numerical simulations we corroborate these results and compare the performance of the Asynchronous Newton-Raphson Consensus with that of other distributed optimization methods.
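
    A small sketch of the quadratic-cost intuition behind the closed-form analysis mentioned above: when every local cost is quadratic, the network-wide minimizer is exactly a ratio of averages, so plain average consensus on the numerator and denominator coefficients already recovers it at every agent. The coefficients, graph, and weights in the Python sketch below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

# Why the minimizer appears as a consensus *ratio* in the quadratic case:
# with local costs f_i(x) = 0.5 * a_i * x^2 - b_i * x, the minimizer of
# sum_i f_i is x* = (sum_i b_i) / (sum_i a_i) = avg(b) / avg(a), so two
# parallel average-consensus iterations (one on the b_i, one on the a_i)
# let every agent recover x* from a purely local ratio.  The coefficients,
# graph, and weights below are illustrative, not taken from the thesis.

rng = np.random.default_rng(1)
n = 8
a = rng.uniform(1.0, 3.0, n)              # local curvatures a_i > 0
b = rng.uniform(-2.0, 2.0, n)             # local linear terms b_i

# Doubly stochastic weights on a ring graph.
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] = P[i, (i + 1) % n] = 0.25

num, den = b.copy(), a.copy()             # per-agent consensus states
for _ in range(300):
    num, den = P @ num, P @ den           # plain average consensus

x_star = b.sum() / a.sum()                # centralized minimizer
print(np.max(np.abs(num / den - x_star))) # every local ratio matches x*
```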