
    DC-DistADMM: ADMM Algorithm for Constrained Distributed Optimization over Directed Graphs

    We present a distributed algorithm to solve a multi-agent optimization problem in which the global objective function is the sum of n convex objective functions. Our focus is on constrained problems where each agent's estimate is restricted to a different convex set. The interconnection topology among the n agents has directed links, and each agent i can only communicate with agents in its neighborhood, determined by a directed graph. In this article, we propose an algorithm called Directed Constrained-Distributed Alternating Direction Method of Multipliers (DC-DistADMM) to solve the above multi-agent convex optimization problem. During every iteration of the DC-DistADMM algorithm, each agent solves a local convex optimization problem and utilizes a finite-time "approximate" consensus protocol to update its local estimate of the optimal solution. To the best of our knowledge, the proposed algorithm is the first ADMM-based algorithm with convergence guarantees for distributed multi-agent optimization over directed interconnection topologies. We show that when the individual functions are convex and not necessarily differentiable, the proposed DC-DistADMM algorithm converges at a rate of O(1/k), where k is the iteration counter. We further establish a linear rate of convergence for the DC-DistADMM algorithm when the global objective function is strongly convex and smooth. We numerically evaluate our proposed algorithm by solving a constrained distributed ℓ1-regularized logistic regression problem. Additionally, we provide a numerical comparison of the proposed DC-DistADMM algorithm with other state-of-the-art algorithms on a distributed least-squares problem to show the efficacy of DC-DistADMM over the existing methods in the literature.
    Comment: 17 pages, 8 figures, includes an appendix
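    The per-agent ADMM structure described above can be illustrated on the distributed least-squares problem used in the paper's numerical comparison. The sketch below is not DC-DistADMM itself: it uses exact averaging where DC-DistADMM would run a finite-time approximate consensus protocol over a directed graph, and all problem data (sizes, penalty parameter rho) are illustrative assumptions.

    ```python
    import numpy as np

    # Minimal global-consensus ADMM sketch for minimizing
    #   sum_i ||A_i x - b_i||^2
    # over a shared variable x. Each agent i holds (A_i, b_i) privately.
    # NOTE: the exact mean below stands in for the finite-time approximate
    # consensus step that DC-DistADMM performs over a directed graph.

    rng = np.random.default_rng(0)
    n_agents, d = 4, 3
    x_true = rng.normal(size=d)
    A = [rng.normal(size=(10, d)) for _ in range(n_agents)]
    b = [Ai @ x_true for Ai in A]          # consistent data, no noise

    rho = 1.0
    x = [np.zeros(d) for _ in range(n_agents)]   # local primal estimates
    u = [np.zeros(d) for _ in range(n_agents)]   # scaled dual variables
    z = np.zeros(d)                              # consensus variable

    for _ in range(300):
        # local step: each agent solves its own regularized least-squares problem
        for i in range(n_agents):
            H = A[i].T @ A[i] + rho * np.eye(d)
            g = A[i].T @ b[i] + rho * (z - u[i])
            x[i] = np.linalg.solve(H, g)
        # consensus step: exact averaging (placeholder for the consensus protocol)
        z = np.mean([x[i] + u[i] for i in range(n_agents)], axis=0)
        # dual step: each agent updates its multiplier locally
        for i in range(n_agents):
            u[i] += x[i] - z

    print(np.allclose(z, x_true, atol=1e-4))
    ```

    With consistent data, the consensus variable z converges to the common minimizer, matching the linear-rate behavior the abstract reports for strongly convex, smooth objectives.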

    Forward-backward truncated Newton methods for convex composite optimization

    This paper proposes two proximal Newton-CG methods for convex nonsmooth optimization problems in composite form. The algorithms are based on a reformulation of the original nonsmooth problem as the unconstrained minimization of a continuously differentiable function, namely the forward-backward envelope (FBE). The first algorithm is based on a standard line search strategy, whereas the second one retains the global efficiency estimates of the corresponding first-order methods while achieving fast asymptotic convergence rates. Furthermore, they are computationally attractive since each Newton iteration requires only the approximate solution of a linear system of usually small dimension.
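    As a concrete sketch of the forward-backward envelope, consider the composite objective F(x) = f(x) + g(x) with f a smooth least-squares term and g the ℓ1 norm; the FBE is the minimum of the forward-backward model over the step, and for step sizes gamma ≤ 1/L it is sandwiched between F evaluated at the forward-backward point and F itself. The problem data below are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    # Forward-backward envelope (FBE) sketch for F(x) = f(x) + g(x) with
    #   f(x) = 0.5*||Ax - b||^2,   g(x) = lam*||x||_1.
    # FBE_gamma(x) = min_z f(x) + <grad f(x), z - x> + g(z) + ||z - x||^2/(2*gamma),
    # attained at the forward-backward step T(x) = prox_{gamma*g}(x - gamma*grad f(x)).
    # For gamma <= 1/L (L = Lipschitz constant of grad f):
    #   F(T(x)) <= FBE_gamma(x) <= F(x).

    rng = np.random.default_rng(1)
    A = rng.normal(size=(20, 5))
    b = rng.normal(size=20)
    lam = 0.1

    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
    grad_f = lambda x: A.T @ (A @ x - b)
    g = lambda x: lam * np.sum(np.abs(x))
    F = lambda x: f(x) + g(x)

    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f
    gamma = 1.0 / L

    def soft_threshold(v, t):
        """Proximal operator of t*||.||_1 (soft thresholding)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def fbe(x):
        z = soft_threshold(x - gamma * grad_f(x), gamma * lam)   # T(x)
        return (f(x) + grad_f(x) @ (z - x) + g(z)
                + np.sum((z - x) ** 2) / (2 * gamma))

    x = rng.normal(size=5)
    z = soft_threshold(x - gamma * grad_f(x), gamma * lam)
    print(F(z) <= fbe(x) <= F(x))   # sandwich property for gamma <= 1/L
    ```

    Because the FBE is continuously differentiable (for this smooth quadratic f), it can be minimized with unconstrained Newton-type machinery, which is the reformulation the paper exploits.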