20 research outputs found

    A Partition-Based Implementation of the Relaxed ADMM for Distributed Convex Optimization over Lossy Networks

    In this paper we propose a distributed implementation of the relaxed Alternating Direction Method of Multipliers (R-ADMM) algorithm for the optimization of a separable convex cost function whose terms are stored by a set of interacting agents, one term per agent. Specifically, the local cost stored by each node is in general a function of both the state of the node and the states of its neighbors, a framework that we refer to as 'partition-based' optimization. This framework offers great flexibility and can be adapted to a large number of different applications. We show that the partition-based R-ADMM algorithm we introduce is linked to the relaxed Peaceman-Rachford splitting (R-PRS) operator which, historically, was introduced in the literature to find the zeros of sums of functions. Interestingly, using nonexpansive operator theory, the proposed algorithm is shown to be provably robust against random packet losses that might occur in the communication between neighboring nodes. Finally, the effectiveness of the proposed algorithm is confirmed by a set of compelling numerical simulations run over random geometric graphs subject to i.i.d. random packet losses.
    Comment: Full version of the paper to be presented at the Conference on Decision and Control (CDC) 201
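
    As a concrete illustration of the R-PRS connection mentioned in the abstract, the following minimal sketch runs the relaxed Peaceman-Rachford splitting iteration on two toy quadratics whose proximal operators are available in closed form. The step size gamma, the relaxation parameter alpha, and the toy costs are illustrative assumptions, not the paper's partition-based algorithm.

    import numpy as np

    # Relaxed Peaceman-Rachford splitting (R-PRS) for minimizing f + g, with
    # toy quadratics f(x) = 0.5*||x - a||^2 and g(x) = 0.5*||x - b||^2 whose
    # proximal operators have closed form. (Illustrative sketch only.)
    a = np.array([1.0, -2.0])
    b = np.array([3.0, 4.0])
    gamma = 1.0   # proximal step size (assumed)
    alpha = 0.5   # relaxation parameter in (0, 1); alpha = 0.5 gives Douglas-Rachford

    def prox_f(x):
        # prox of gamma * 0.5*||x - a||^2
        return (x + gamma * a) / (1.0 + gamma)

    def prox_g(x):
        return (x + gamma * b) / (1.0 + gamma)

    def reflect(prox, x):
        # reflected proximal operator: 2*prox(x) - x
        return 2.0 * prox(x) - x

    z = np.zeros(2)
    for _ in range(200):
        # R-PRS: convex combination of the identity and the composed reflections
        z = (1.0 - alpha) * z + alpha * reflect(prox_g, reflect(prox_f, z))

    print(prox_f(z))      # recovered primal iterate
    print((a + b) / 2.0)  # true minimizer of f + g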

    Compressed Distributed Gradient Descent: Communication-Efficient Consensus over Networks

    Network consensus optimization has received increasing attention in recent years and has found important applications in many scientific and engineering fields. One of the most well-known approaches to solving network consensus optimization problems is the distributed gradient descent method (DGD). However, in networks with slow communication rates, DGD's performance is unsatisfactory for solving high-dimensional network consensus problems due to the communication bottleneck. This motivates us to design a communication-efficient DGD-type algorithm based on compressed information exchanges. Our contributions in this paper are three-fold: i) we develop a communication-efficient algorithm called amplified-differential compression DGD (ADC-DGD) and show that it converges under any unbiased compression operator; ii) we rigorously prove the convergence performance of ADC-DGD and show that it matches that of DGD without compression; iii) we reveal an interesting phase-transition phenomenon in the convergence speed of ADC-DGD. Collectively, our findings advance the state of the art in network consensus optimization theory.
    Comment: 11 pages, 11 figures, IEEE INFOCOM 201
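
    The key ingredient the abstract highlights is an unbiased compression operator. A minimal sketch of one such operator (random sparsification with rescaling) is given below; the specific compressor and the sanity check are assumptions for illustration, not the paper's ADC-DGD construction.

    import numpy as np

    # Unbiased random-sparsification compressor: keep each coordinate with
    # probability p and rescale by 1/p, so that E[compress(x)] = x.
    rng = np.random.default_rng(0)

    def compress(x, p=0.1):
        mask = rng.random(x.shape) < p
        return np.where(mask, x / p, 0.0)

    x = rng.standard_normal(1000)
    avg = np.mean([compress(x) for _ in range(5000)], axis=0)
    print(np.linalg.norm(avg - x) / np.linalg.norm(x))  # small -> empirically unbiased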

    Distributed Learning with Sparse Communications by Identification

    In distributed optimization for large-scale learning, a major performance limitation comes from the communication between the different entities. We consider the setting in which workers perform computations on local data while a coordinator machine coordinates their updates in order to minimize a global loss, and we present an asynchronous optimization algorithm that efficiently reduces the communication between the coordinator and the workers. This reduction comes from a random sparsification of the local updates. We show that this algorithm converges linearly in the strongly convex case and also identifies optimal strongly sparse solutions. We further exploit this identification to propose an automatic dimension reduction, aptly sparsifying all exchanges between the coordinator and the workers.
    Comment: v2 is a significant improvement over v1 (titled "Asynchronous Distributed Learning with Sparse Communications and Identification") with new algorithms, results, and discussion
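
    A hedged sketch of the communication pattern described above: each worker sends a randomly sparsified (and rescaled, hence unbiased) update for a toy least-squares problem, and the coordinator applies their average. The synchronous loop, problem data, and step size are illustrative assumptions; the paper's algorithm is asynchronous and comes with identification guarantees not reproduced here.

    import numpy as np

    rng = np.random.default_rng(1)
    n_workers, dim, lr, p = 4, 20, 0.02, 0.3

    # toy local least-squares data sharing a common solution x_true
    x_true = rng.standard_normal(dim)
    A = [rng.standard_normal((30, dim)) for _ in range(n_workers)]
    b = [Ai @ x_true for Ai in A]

    def local_grad(i, x):
        return A[i].T @ (A[i] @ x - b[i]) / A[i].shape[0]

    x = np.zeros(dim)
    for _ in range(1500):
        update = np.zeros(dim)
        for i in range(n_workers):
            g = local_grad(i, x)
            mask = rng.random(dim) < p              # random coordinate selection
            update += np.where(mask, g / p, 0.0)    # sparsified, unbiased local update
        x -= lr * update / n_workers                # coordinator applies the averaged update
    print(np.linalg.norm(x - x_true))               # decreases toward zero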

    Asynchronous Distributed Optimization over Lossy Networks via Relaxed ADMM: Stability and Linear Convergence

    In this work we focus on the problem of minimizing a sum of convex cost functions in a distributed fashion over a peer-to-peer network. In particular, we are interested in the case in which communications between nodes are prone to failures and the agents are not synchronized with one another. We address the problem by proposing a modified version of the relaxed ADMM, which corresponds to the Peaceman-Rachford splitting method applied to the dual. By exploiting results from operator theory, we are able to prove the almost sure convergence of the proposed algorithm under general assumptions on the distribution of communication-loss and node-activation events. By further assuming the cost functions to be strongly convex, we prove the linear convergence of the algorithm in mean to a neighborhood of the optimal solution, and provide an upper bound on the convergence rate. Finally, we present numerical results testing the proposed method in different scenarios.
    Comment: To appear in IEEE Transactions on Automatic Control
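
    The sketch below illustrates the flavor of the setting in the abstract: a decentralized relaxed-ADMM-style iteration for consensus on the average of local values, where each transmitted message is dropped independently with probability p_loss and the receiver simply keeps its old auxiliary variable. The path graph, the quadratic local costs, and the specific update form are illustrative assumptions rather than the paper's general algorithm.

    import numpy as np

    rng = np.random.default_rng(2)
    n, rho, alpha, p_loss = 5, 1.0, 0.5, 0.3

    # path graph and its adjacency lists
    edges = [(i, i + 1) for i in range(n - 1)]
    neighbors = {i: [] for i in range(n)}
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)

    a = rng.standard_normal(n)                                  # local values; goal: agree on their mean
    z = {(i, j): 0.0 for i in range(n) for j in neighbors[i]}   # auxiliary edge variables

    for _ in range(500):
        # local update (closed form for the quadratic local cost 0.5*(x - a_i)^2)
        x = np.array([(a[i] + sum(z[(i, j)] for j in neighbors[i]))
                      / (1.0 + rho * len(neighbors[i])) for i in range(n)])
        # node i sends -z[(i, j)] + 2*rho*x[i] to neighbor j; a lost packet
        # means z[(j, i)] keeps its previous value
        new_z = dict(z)
        for i in range(n):
            for j in neighbors[i]:
                if rng.random() >= p_loss:
                    new_z[(j, i)] = (1 - alpha) * z[(j, i)] + alpha * (-z[(i, j)] + 2 * rho * x[i])
        z = new_z

    print(x)         # all entries close to a.mean()
    print(a.mean())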