
    Distributed and parallel methods for structural convex optimization

    There has been considerable recent interest in optimization methods associated with a multi-agent network. The goal is to optimize a global objective function that is a sum of local objective functions known only to the agents in the network. The focus of this dissertation is the development of optimization algorithms for the special class in which the optimization problem of interest has an additive or separable structure. Specifically, we are concerned with two classes of convex optimization problems. The first, called multi-agent convex problems, arises in many network applications, including in-network estimation, machine learning, and signal processing. The second, termed separable convex problems, arises in diverse applications, including network resource allocation and distributed model prediction and control. Due to the structure of the problems and the privacy of the local objective functions, special optimization methods are always desirable, especially for large-scale structured problems. For multi-agent convex problems with simple constraints, we develop gradient-free distributed methods based on incremental and consensus strategies. The convergence analysis and convergence rates of the proposed methods are provided. In contrast to existing distributed algorithms, which require first-order information about the objective functions, our methods involve only estimates of objective function values. The proposed methods are therefore suitable for more general problems in which first-order information is unavailable or costly to compute. In practical applications, a wide variety of problems are formulated as multi-agent optimization problems subject to equality and/or inequality constraints. Methods available for solving this type of problem are still limited in the literature: most are based on Lagrangian duality, and no estimates of the convergence rate are available.
In the thesis, we develop a distributed proximal-gradient method to solve multi-agent convex problems under global inequality constraints. Moreover, we provide a convergence analysis of the proposed method and obtain explicit estimates of the convergence rate. Our method relies on the exact penalty function method and multi-consensus averaging, and does not involve Lagrangian multipliers. For separable convex problems with linear constraints, within the framework of Lagrangian dual decomposition, we develop fast gradient-based optimization methods, including a fast dual gradient-projection method and a fast dual gradient method. In addition to parallel implementation, our focus is on achieving a faster convergence rate, since existing dual subgradient-based algorithms converge slowly. The proposed algorithms are based on Nesterov's smoothing technique and several fast gradient schemes. Explicit convergence rates of the proposed algorithms are obtained, which are superior to those of subgradient-based algorithms. The proposed algorithms are applied to a real-time pricing problem in the smart grid and a network utility maximization problem. Dual decomposition methods often involve finding the exact solution of an inner subproblem at each iteration. In practice, however, the subproblem is never solved exactly. Hence, we extend the proposed fast dual gradient-projection method to the inexact setting. Although the inner subproblem is solved only up to a certain precision, we provide a complete analysis of the computational complexity of the generated approximate solutions. Our inexact version thus has the attractive computational advantage that the subproblem need only be solved to a certain accuracy while maintaining the same iteration complexity as its exact counterpart.
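The consensus-based setting described above can be illustrated with a minimal toy sketch. This is not the thesis's gradient-free or proximal-gradient algorithm; it is the plain distributed-gradient baseline with consensus averaging, on made-up quadratic local objectives, just to show the mixing-plus-local-step structure. All numbers (mixing matrix, step sizes, data) are illustrative assumptions.

```python
import numpy as np

# Each of 3 agents privately holds f_i(x) = (x - a_i)^2; the network goal is
# min_x sum_i f_i(x), whose minimizer is mean(a) = 3.
a = np.array([1.0, 2.0, 6.0])
W = 0.25 * np.ones((3, 3)) + 0.25 * np.eye(3)  # doubly stochastic mixing matrix
x = np.zeros(3)                                # local estimates, one per agent

for k in range(2000):
    alpha = 0.5 / (k + 1)                      # diminishing step size
    # Consensus step (average with neighbors), then a local gradient step.
    x = W @ x - alpha * 2.0 * (x - a)

print(x)  # all three local estimates approach the global minimizer 3
```

The consensus step keeps the agents' estimates close to each other, while the local gradient steps steer their common value toward the minimizer of the sum; the diminishing step size drives both errors to zero.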

    A Distributed Asynchronous Method of Multipliers for Constrained Nonconvex Optimization

    This paper presents a fully asynchronous and distributed approach for tackling optimization problems in which both the objective function and the constraints may be nonconvex. In the considered network setting, each node is activated by a local timer and has access only to a portion of the objective function and to a subset of the constraints. In the proposed technique, based on the method of multipliers, each node performs, when it wakes up, either a descent step on a local augmented Lagrangian or an ascent step on the local multiplier vector. Nodes determine when to switch from the descent step to the ascent step through an asynchronous distributed logic-AND, which detects when all nodes have reached a predefined tolerance in the minimization of the augmented Lagrangian. It is shown that the resulting distributed algorithm is equivalent to block coordinate descent for the minimization of the global augmented Lagrangian. This allows the properties of the centralized method of multipliers to be extended to the considered distributed framework. Two application examples are presented to validate the proposed approach: a distributed source localization problem and the parameter estimation of a neural network.
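The descent/ascent alternation at the heart of the method of multipliers can be sketched on a tiny centralized example (the paper's contribution is the asynchronous distributed variant; this toy only shows the two alternating steps, on an assumed convex problem with an assumed penalty parameter):

```python
import numpy as np

# Toy problem: minimize x1^2 + x2^2 subject to x1 + x2 = 1.
# Augmented Lagrangian: f(x) + mu*(1'x - 1) + (rho/2)*(1'x - 1)^2.
rho = 1.0          # penalty parameter (illustrative choice)
mu = 0.0           # multiplier estimate
ones = np.ones(2)

for _ in range(40):
    # Descent step: minimize the augmented Lagrangian in x. It is quadratic,
    # so stationarity gives the linear system (2I + rho*11')x = (rho - mu)*1.
    A = 2.0 * np.eye(2) + rho * np.outer(ones, ones)
    x = np.linalg.solve(A, (rho - mu) * ones)
    # Ascent step: move the multiplier along the constraint violation.
    mu += rho * (ones @ x - 1.0)

print(x, mu)  # x -> [0.5, 0.5], mu -> -1
```

Each multiplier update shrinks the constraint violation by a factor of 1/(1 + rho) on this problem, which is the geometric convergence the method of multipliers is known for on well-behaved convex instances.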

    Quantized Consensus ADMM for Multi-Agent Distributed Optimization

    Multi-agent distributed optimization over a network minimizes a global objective, formed by a sum of local convex functions, using only local computation and communication. We develop and analyze a quantized distributed algorithm based on the alternating direction method of multipliers (ADMM) for settings in which inter-agent communications are subject to finite capacity and other practical constraints. While existing quantized ADMM approaches work only for quadratic local objectives, the proposed algorithm can deal with more general (possibly non-smooth) objective functions, including the LASSO. Under certain convexity assumptions, our algorithm converges to a consensus within $\log_{1+\eta}\Omega$ iterations, where $\eta>0$ depends on the local objectives and the network topology, and $\Omega$ is a polynomial determined by the quantization resolution, the distance between the initial and optimal variable values, the local objective functions, and the network topology. A tight upper bound on the consensus error is also obtained, which does not depend on the size of the network.