
    A Partition-Based Implementation of the Relaxed ADMM for Distributed Convex Optimization over Lossy Networks

    In this paper we propose a distributed implementation of the relaxed Alternating Direction Method of Multipliers algorithm (R-ADMM) for the optimization of a separable convex cost function whose terms are stored by a set of interacting agents, one term per agent. Specifically, the local cost stored by each node is in general a function of both the state of the node and the states of its neighbors, a framework that we refer to as `partition-based' optimization. This framework offers great flexibility and can be adapted to a large number of different applications. We show that the partition-based R-ADMM algorithm we introduce is linked to the relaxed Peaceman-Rachford splitting (R-PRS) operator which, historically, was introduced in the literature to find the zeros of a sum of functions. Interestingly, making use of nonexpansive operator theory, the proposed algorithm is shown to be provably robust against random packet losses that might occur in the communication between neighboring nodes. Finally, the effectiveness of the proposed algorithm is confirmed by a set of compelling numerical simulations run over random geometric graphs subject to i.i.d. random packet losses.
    Comment: Full version of the paper to be presented at the Conference on Decision and Control (CDC) 201
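
    For reference, a generic form of the relaxed Peaceman-Rachford splitting mentioned in this abstract can be written with proximal and reflected-proximal operators; the sketch below is the standard textbook iteration under assumed notation (step size gamma, relaxation alpha), not the paper's partition-based derivation.

```latex
% Generic relaxed Peaceman-Rachford splitting (R-PRS) for min_x f(x) + g(x).
% Notation assumed here: prox_{\gamma f}(v) = \arg\min_x f(x) + \tfrac{1}{2\gamma}\|x - v\|^2,
% reflected operator R_{\gamma f} = 2\,\mathrm{prox}_{\gamma f} - I, relaxation \alpha \in (0,1].
\begin{align*}
  z^{k+1} &= (1-\alpha)\, z^{k} + \alpha\, R_{\gamma g}\!\big(R_{\gamma f}(z^{k})\big), \\
  x^{k}   &= \mathrm{prox}_{\gamma f}(z^{k}).
\end{align*}
% \alpha = 1/2 recovers Douglas-Rachford splitting (equivalently, ADMM applied to the dual);
% \alpha = 1 is the unrelaxed Peaceman-Rachford iteration.
```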

    Asynchronous Distributed ADMM for Large-Scale Optimization- Part I: Algorithm and Convergence Analysis

    Aiming at solving large-scale learning problems, this paper studies distributed optimization methods based on the alternating direction method of multipliers (ADMM). By formulating the learning problem as a consensus problem, the ADMM can be used to solve the consensus problem in a fully parallel fashion over a computer network with a star topology. However, traditional synchronized computation does not scale well with the problem size, as the speed of the algorithm is limited by the slowest workers. This is particularly true in a heterogeneous network where the computing nodes experience different computation and communication delays. In this paper, we propose an asynchronous distributed ADMM (AD-ADMM) which can effectively improve the time efficiency of distributed optimization. Our main interest lies in analyzing the convergence conditions of the AD-ADMM under the popular partially asynchronous model, which is defined based on a maximum tolerable delay of the network. Specifically, by considering general and possibly non-convex cost functions, we show that the AD-ADMM is guaranteed to converge to the set of Karush-Kuhn-Tucker (KKT) points as long as the algorithm parameters are chosen appropriately according to the network delay. We further illustrate that the asynchrony of the ADMM has to be handled with care, as slightly modifying the implementation of the AD-ADMM can jeopardize the algorithm convergence, even under a standard convex setting.
    Comment: 37 pages
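
    To make the consensus reformulation over a star topology concrete, here is a minimal synchronous consensus-ADMM sketch with quadratic local costs. It is a hypothetical baseline, not the AD-ADMM of the paper: the asynchronous variant described in the abstract would let the master update after hearing from only a subset of workers, subject to a maximum tolerable delay.

```python
import numpy as np

def consensus_admm(A_list, b_list, rho=1.0, iters=200):
    """Synchronous consensus ADMM for min_x sum_i 0.5*||A_i x - b_i||^2 on a star topology."""
    n = A_list[0].shape[1]
    N = len(A_list)
    z = np.zeros(n)                       # master (consensus) variable
    x = [np.zeros(n) for _ in range(N)]   # worker primal variables
    u = [np.zeros(n) for _ in range(N)]   # scaled dual variables, one per worker
    for _ in range(iters):
        # Worker step: x_i = argmin f_i(x) + rho/2 ||x - z + u_i||^2 (a linear solve here)
        for i in range(N):
            H = A_list[i].T @ A_list[i] + rho * np.eye(n)
            g = A_list[i].T @ b_list[i] + rho * (z - u[i])
            x[i] = np.linalg.solve(H, g)
        # Master step: average the messages received from all workers
        z = np.mean([x[i] + u[i] for i in range(N)], axis=0)
        # Dual step at each worker
        for i in range(N):
            u[i] += x[i] - z
    return z

# Example usage on a small synthetic least-squares problem
rng = np.random.default_rng(0)
A_list = [rng.standard_normal((20, 5)) for _ in range(4)]
b_list = [rng.standard_normal(20) for _ in range(4)]
z_star = consensus_admm(A_list, b_list)
```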

    Asynchronous Distributed Optimization over Lossy Networks via Relaxed ADMM: Stability and Linear Convergence

    In this work we focus on the problem of minimizing the sum of convex cost functions in a distributed fashion over a peer-to-peer network. In particular, we are interested in the case in which communications between nodes are prone to failures and the agents are not synchronized among themselves. We address the problem by proposing a modified version of the relaxed ADMM, which corresponds to the Peaceman-Rachford splitting method applied to the dual. By exploiting results from operator theory, we are able to prove the almost sure convergence of the proposed algorithm under general assumptions on the distribution of communication loss and node activation events. By further assuming the cost functions to be strongly convex, we prove the linear convergence of the algorithm in mean to a neighborhood of the optimal solution, and provide an upper bound on the convergence rate. Finally, we present numerical results testing the proposed method in different scenarios.
    Comment: To appear in IEEE Transactions on Automatic Control
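
    The robustness-to-loss idea can be illustrated on a toy problem: apply a relaxed Peaceman-Rachford iteration but let each coordinate's update succeed only with some probability, mimicking i.i.d. packet losses. The problem, proximal maps, and parameters below are assumptions chosen so that correctness is easy to check (the minimizer of f + g is (a + b)/2); this is not the paper's peer-to-peer formulation.

```python
import numpy as np

def prox_quad(v, c, gamma):
    # Proximal map of f(x) = 0.5*||x - c||^2 with step gamma
    return (v + gamma * c) / (1.0 + gamma)

rng = np.random.default_rng(1)
n, gamma, alpha, p = 10, 1.0, 0.5, 0.7    # p = per-coordinate "packet delivery" probability
a, b = rng.standard_normal(n), rng.standard_normal(n)

z = np.zeros(n)
for _ in range(500):
    x = prox_quad(z, a, gamma)            # prox of f
    Rf = 2 * x - z                        # reflected prox of f
    Rg = 2 * prox_quad(Rf, b, gamma) - Rf # reflected prox of g
    update = (1 - alpha) * z + alpha * Rg # relaxed Peaceman-Rachford step
    mask = rng.random(n) < p              # coordinates whose update "arrived"
    z = np.where(mask, update, z)         # lost coordinates keep their old value

x = prox_quad(z, a, gamma)
# x should be close to the minimizer (a + b) / 2 despite the randomly dropped updates
```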

    Asynchronous Distributed ADMM for Large-Scale Optimization- Part II: Linear Convergence Analysis and Numerical Performance

    The alternating direction method of multipliers (ADMM) has been recognized as a versatile approach for solving modern large-scale machine learning and signal processing problems efficiently. When the data size and/or the problem dimension is large, a distributed version of ADMM can be used, which is capable of distributing the computation load and the data set to a network of computing nodes. Unfortunately, a direct synchronous implementation of such an algorithm does not scale well with the problem size, as the algorithm speed is limited by the slowest computing nodes. To address this issue, in a companion paper, we have proposed an asynchronous distributed ADMM (AD-ADMM) and studied its worst-case convergence conditions. In this paper, we further the study by characterizing the conditions under which the AD-ADMM achieves linear convergence. Our conditions, as well as the resulting linear rates, reveal the impact that various algorithm parameters, the network delay and the network size have on the algorithm performance. To demonstrate the superior time efficiency of the proposed AD-ADMM, we test the AD-ADMM on a high-performance computer cluster by solving a large-scale logistic regression problem.
    Comment: submitted for publication, 28 pages
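
    Since the numerical study uses logistic regression, the following sketch shows the kind of local subproblem a worker in a consensus-ADMM scheme would solve each round on its own data shard; the function names and the use of L-BFGS are illustrative assumptions, and the cluster-scale asynchrony is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def worker_update(A, y, z, u, rho):
    """One worker's logistic-regression subproblem:
    argmin_x sum_j log(1 + exp(-y_j * a_j^T x)) + rho/2 * ||x - z + u||^2."""
    def obj(x):
        margins = y * (A @ x)
        return np.sum(np.logaddexp(0.0, -margins)) + 0.5 * rho * np.sum((x - z + u) ** 2)
    def grad(x):
        margins = y * (A @ x)
        s = -y / (1.0 + np.exp(margins))      # derivative of each logistic loss term
        return A.T @ s + rho * (x - z + u)
    res = minimize(obj, x0=z.copy(), jac=grad, method="L-BFGS-B")
    return res.x

# Example: one local update on synthetic data (z, u received from / kept with the master)
rng = np.random.default_rng(2)
A = rng.standard_normal((100, 10))
y = np.sign(rng.standard_normal(100))
x_i = worker_update(A, y, z=np.zeros(10), u=np.zeros(10), rho=1.0)
```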

    A Coordinate Descent Primal-Dual Algorithm and Application to Distributed Asynchronous Optimization

    Based on the idea of randomized coordinate descent for α-averaged operators, a randomized primal-dual optimization algorithm is introduced, in which a random subset of coordinates is updated at each iteration. The algorithm builds upon a variant of a recent (deterministic) algorithm proposed by Vũ and Condat that includes the well-known ADMM as a particular case. The obtained algorithm is used to solve a distributed optimization problem asynchronously. A network of agents, each having a separate cost function containing a differentiable term, seeks to find a consensus on the minimum of the aggregate objective. The method yields an algorithm where, at each iteration, a random subset of agents wake up, update their local estimates, exchange some data with their neighbors, and go idle. Numerical results demonstrate the attractive performance of the method. The general approach can be naturally adapted to other situations where coordinate descent convex optimization algorithms are used with a random choice of the coordinates.
    Comment: 10 pages
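
    The core mechanism in the first sentence, updating only a random block of coordinates of an averaged operator per iteration, can be sketched on a toy fixed-point problem. The operator below (a gradient step on an assumed quadratic) is chosen only because its averagedness and fixed point are easy to verify; it is not the Vũ-Condat primal-dual operator of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
Q = rng.standard_normal((n, n))
Q = Q @ Q.T + np.eye(n)                  # positive definite: f(x) = 0.5*x^T Q x - c^T x
c = rng.standard_normal(n)
L = np.linalg.eigvalsh(Q).max()
theta = 1.0 / L                          # step size making T = I - theta*grad f an averaged map

def T(x):
    return x - theta * (Q @ x - c)       # fixed point of T: the minimizer Q^{-1} c

x = np.zeros(n)
for _ in range(5000):
    coords = rng.random(n) < 0.3         # each coordinate "wakes up" w.p. 0.3 this round
    x = np.where(coords, T(x), x)        # sleeping coordinates keep their old value
# x should approach np.linalg.solve(Q, c) almost surely
```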