3,391 research outputs found

    A Primal-Dual Parallel Method with $O(1/\epsilon)$ Convergence for Constrained Composite Convex Programs

    This paper considers large-scale constrained convex (possibly composite and non-separable) programs, which are usually difficult to solve by interior point methods or other Newton-type methods due to non-smoothness or the prohibitive computation and storage complexity of Hessians and matrix inversions. Instead, they are often solved by first-order gradient-based methods or decomposition-based methods. The conventional primal-dual subgradient method, also known as the Arrow-Hurwicz-Uzawa subgradient method, is a low-complexity algorithm with an $O(1/\epsilon^2)$ convergence time. Recently, a new Lagrangian dual type algorithm with a faster $O(1/\epsilon)$ convergence time was proposed in Yu and Neely (2017). However, if the objective or constraint functions are not separable, each iteration of that method requires solving an unconstrained convex program, which can have huge complexity. This paper proposes a new primal-dual type algorithm with $O(1/\epsilon)$ convergence for general constrained convex programs. Each iteration of the new algorithm can be implemented in parallel with low complexity even when the original problem is composite and non-separable. Comment: 23 pages, 4 figures. arXiv admin note: text overlap with arXiv:1604.0221
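    For orientation, here is a minimal sketch of the classical Arrow-Hurwicz-Uzawa primal-dual subgradient baseline the abstract cites (not the paper's new algorithm) for min f(x) subject to g(x) <= 0; the toy problem data and step size below are illustrative assumptions.

```python
import numpy as np

def primal_dual_subgradient(grad_f, g, grad_g, x0, steps=1000, alpha=1e-2):
    """Toy driver; grad_g(x) returns the Jacobian of g at x."""
    x = x0.copy()
    lam = np.zeros(len(g(x0)))             # one multiplier per constraint
    for _ in range(steps):
        # primal (sub)gradient step on the Lagrangian f(x) + lam . g(x)
        x = x - alpha * (grad_f(x) + grad_g(x).T @ lam)
        # dual ascent step, projected onto the nonnegative orthant
        lam = np.maximum(0.0, lam + alpha * g(x))
    return x, lam

# Example: minimize ||x||^2 subject to 1 - x[0] <= 0 (i.e., x[0] >= 1).
x, lam = primal_dual_subgradient(
    grad_f=lambda x: 2 * x,
    g=lambda x: np.array([1.0 - x[0]]),
    grad_g=lambda x: np.array([[-1.0, 0.0]]),
    x0=np.zeros(2),
)
# x tends toward (1, 0) and lam toward 2, matching the KKT conditions.
```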

    Stochastic Primal-Dual Coordinate Method for Nonlinear Convex Cone Programs

    Block coordinate descent (BCD) methods and their variants have been widely used for large-scale unconstrained optimization problems in fields such as image processing, machine learning, and compressed sensing. Nonlinear convex cone programs (NCCP), which involve coupling constraints, are important problems with many practical applications, but they are hard to solve with existing block coordinate type methods. This paper introduces a stochastic primal-dual coordinate (SPDC) method for solving large-scale NCCP. In this method, we randomly choose a block of variables according to the uniform distribution. Applying linearization and a Bregman-like function (core function) to the randomly selected block yields a simple parallel primal-dual decomposition for NCCP. The sequence generated by our algorithm is proved to converge almost surely to an optimal solution of the primal problem. Two types of convergence rates (almost sure and in expectation) are also obtained, and a probability complexity bound is derived.
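    As a rough illustration, assuming a smooth objective and ignoring the cone constraints, the sketch below shows two ingredients the abstract names, uniform random block selection and a linearized block update with a quadratic Bregman (core) function; it is not the paper's full SPDC method.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_block_step(x, grad_f, blocks, eta=0.1):
    """Update one uniformly chosen block of x via a linearized prox step."""
    i = rng.integers(len(blocks))          # uniform block selection
    idx = blocks[i]
    g = grad_f(x)[idx]                     # gradient restricted to the block
    # argmin_z  <g, z - x_i> + (1/(2*eta)) * ||z - x_i||^2  (quadratic Bregman)
    x = x.copy()
    x[idx] = x[idx] - eta * g
    return x

# Toy run: minimize ||x||^2 over two blocks of three coordinates each.
x = np.ones(6)
blocks = [np.arange(0, 3), np.arange(3, 6)]
for _ in range(200):
    x = random_block_step(x, grad_f=lambda v: 2 * v, blocks=blocks)
```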

    First-order methods for constrained convex programming based on linearized augmented Lagrangian function

    First-order methods have been popular for solving large-scale problems. However, many existing works consider only unconstrained problems or those with simple constraints. In this paper, we develop two first-order methods for constrained convex programs whose constraint set is represented by affine equations and smooth nonlinear inequalities. Both methods are based on the classic augmented Lagrangian function. They update the multipliers in the same way as the augmented Lagrangian method (ALM) but employ different primal variable updates. The first method, at each iteration, performs a single proximal gradient step on the primal variable, and the second method is a block-update version of the first. For the first method, we establish global iterate convergence as well as global sublinear and local linear convergence, and for the second method, we show a global sublinear convergence result in expectation. Numerical experiments are carried out on basis pursuit denoising and a convex quadratically constrained quadratic program to show the empirical performance of the proposed methods; their numerical behavior closely matches the established theoretical results.
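    A minimal sketch of the first method's iteration pattern, assuming for simplicity a smooth objective f and only affine constraints Ax = b (toy data below is illustrative): one gradient step on the augmented Lagrangian in the primal variable, followed by the standard ALM multiplier update.

```python
import numpy as np

def linearized_alm(grad_f, A, b, x0, beta=1.0, eta=0.1, iters=500):
    x, y = x0.copy(), np.zeros(len(b))
    for _ in range(iters):
        r = A @ x - b                         # constraint residual
        # gradient of the augmented Lagrangian in x
        gx = grad_f(x) + A.T @ (y + beta * r)
        x = x - eta * gx                      # single (prox-)gradient step
        y = y + beta * (A @ x - b)            # ALM multiplier update
    return x, y

# Toy problem: minimize ||x||^2 subject to x[0] + x[1] = 1.
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
x, y = linearized_alm(lambda v: 2 * v, A, b, x0=np.zeros(2))
# x tends toward (0.5, 0.5), the minimum-norm feasible point.
```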

    A Simple Parallel Algorithm with an $O(1/t)$ Convergence Rate for General Convex Programs

    This paper considers convex programs with a general (possibly non-differentiable) convex objective function and Lipschitz continuous convex inequality constraint functions. A simple algorithm is developed and achieves an $O(1/t)$ convergence rate. Similar to the classical dual subgradient algorithm and the ADMM algorithm, the new algorithm has a parallel implementation when the objective and constraint functions are separable. However, the new algorithm has a faster $O(1/t)$ convergence rate compared with the best known $O(1/\sqrt{t})$ convergence rate for the dual subgradient algorithm with primal averaging. Further, it can solve convex programs with nonlinear constraints, which cannot be handled by the ADMM algorithm. The new algorithm is applied to a multipath network utility maximization problem and yields a decentralized flow control algorithm with the fast $O(1/t)$ convergence rate. Comment: Published in SIAM Journal on Optimization, 2017. (This version also corrected a minor iteration index typo in the description of the ADMM algorithm at the top of page 4.)
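    For contrast with the abstract's comparison, here is a minimal sketch of the baseline it mentions, the dual subgradient method with primal averaging (the $O(1/\sqrt{t})$ rate), not the paper's new $O(1/t)$ algorithm; the toy problem and step size are assumptions.

```python
import numpy as np

def dual_subgradient_avg(argmin_lagrangian, g, T=1000, alpha=1e-2, m=1):
    lam = np.zeros(m)
    x_avg = None
    for t in range(1, T + 1):
        x = argmin_lagrangian(lam)                 # x_t = argmin_x f(x) + lam . g(x)
        lam = np.maximum(0.0, lam + alpha * g(x))  # projected dual subgradient step
        x_avg = x if x_avg is None else x_avg + (x - x_avg) / t  # running average
    return x_avg, lam

# Toy problem: min x^2 s.t. 1 - x <= 0; the Lagrangian minimizer is lam/2.
x_avg, lam = dual_subgradient_avg(
    argmin_lagrangian=lambda lam: np.array([lam[0] / 2.0]),
    g=lambda x: np.array([1.0 - x[0]]),
)
# lam tends toward 2 and the averaged primal iterate toward 1.
```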

    FBstab: A Stabilized Semismooth Quadratic Programming Algorithm with Applications in Model Predictive Control

    This paper introduces the proximally stabilized Fischer-Burmeister method (FBstab), a new algorithm for convex quadratic programming that synergistically combines the proximal point algorithm with a primal-dual semismooth Newton-type method. FBstab is numerically robust, easy to warmstart, handles degenerate primal-dual solutions, detects infeasibility/unboundedness, and requires only that the Hessian matrix be positive semidefinite. We outline the algorithm, provide convergence and convergence rate proofs, report numerical results from model predictive control benchmarks, and include experimental results. We show that FBstab is competitive with, and often superior to, state-of-the-art methods, has attractive scaling properties, and is especially promising for model predictive control applications.
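    A minimal sketch of the Fischer-Burmeister NCP function at the core of FBstab: phi(a, b) = a + b - sqrt(a^2 + b^2) vanishes exactly when a >= 0, b >= 0, and ab = 0, so KKT complementarity conditions can be recast as a semismooth equation and attacked with a Newton-type method.

```python
import numpy as np

def fischer_burmeister(a, b):
    """phi(a, b) = 0 iff a >= 0, b >= 0, and a * b = 0."""
    return a + b - np.sqrt(a**2 + b**2)

# Complementary pair (a, b) = (2, 0): phi is zero.
print(fischer_burmeister(2.0, 0.0))   # 0.0
# Violating pair (a, b) = (-1, 1): phi is nonzero.
print(fischer_burmeister(-1.0, 1.0))  # -1.414...
```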

    Stochastic Primal-Dual Coordinate Method with Large Step Size for Composite Optimization with Composite Cone-constraints

    We introduce a stochastic coordinate extension of the first-order primal-dual method studied by Cohen and Zhu (1984) and Zhao and Zhu (2018) to solve Composite Optimization with Composite Cone-constraints (COCC). In this method, we randomly choose a block of variables according to the uniform distribution. Applying linearization and a Bregman-like function (core function) to the randomly selected block yields a simple parallel primal-dual decomposition for COCC. We obtain almost sure convergence and an $O(1/t)$ expected convergence rate, and a high-probability complexity bound is also derived. Comment: arXiv admin note: substantial text overlap with arXiv:1804.0080
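    One detail that distinguishes the cone-constrained setting: the dual update projects onto the dual cone rather than just the nonnegative orthant. Below is a minimal sketch of that projection for the (self-dual) second-order cone, chosen for concreteness; it is only one building block, not the paper's method.

```python
import numpy as np

def project_soc(z):
    """Euclidean projection onto the second-order cone {(t, u): ||u|| <= t}."""
    t, u = z[0], z[1:]
    nu = np.linalg.norm(u)
    if nu <= t:
        return z                        # already inside the cone
    if nu <= -t:
        return np.zeros_like(z)         # projects to the origin
    c = (t + nu) / 2.0                  # boundary case
    return np.concatenate(([c], (c / nu) * u))

# Dual update with cone projection: lam <- Proj_{K*}(lam + alpha * g(x)).
lam = project_soc(np.array([0.5, 1.0, -2.0]))
```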

    An Inexact Interior-Point Lagrangian Decomposition Algorithm with Inexact Oracles

    We develop a new inexact interior-point Lagrangian decomposition method to solve a wide class of constrained composite convex optimization problems. Our method relies on four techniques: Lagrangian dual decomposition, self-concordant barrier smoothing, path-following, and a proximal-Newton technique. It also allows the solution of the primal subproblems (the slave problems) to be computed approximately, which leads to inexact oracles (i.e., inexact gradients and Hessians) for the smoothed dual problem (the master problem). Since the smoothed dual problem is nonsmooth, we propose an inexact proximal-Newton method to solve it. By appropriately controlling the inexact computation at both levels, the slave and master problems, we still establish a polynomial-time iteration complexity for our algorithm, as in standard short-step interior-point methods. We also provide a strategy to recover primal solutions and establish the complexity of achieving an approximate primal solution. We illustrate our method through two numerical examples on well-known models with both synthetic and real data and compare it with existing state-of-the-art methods. Comment: 34 pages, 2 figures, and 1 table
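    A minimal sketch, under assumed notation and toy data, of the barrier-smoothing idea: the dual function d(y) = min_{x in X} {f(x) + y.(Ax - b)} is replaced by d_mu(y) = min_x {f(x) + y.(Ax - b) + mu * B(x)}, where B is a self-concordant barrier for X; solving the inner slave problem only approximately yields the inexact oracle the abstract refers to.

```python
import numpy as np

def smoothed_dual_oracle(y, mu, A, b, x, inner=200, eta=0.05):
    """Inexact oracle for d_mu(y) with f(x) = ||x||^2 on X = {x > 0},
    using the log barrier B(x) = -sum(log x) and a crude inner solver."""
    for _ in range(inner):
        g = 2 * x + A.T @ y - mu / x   # gradient of the smoothed Lagrangian
        step = eta * g
        while np.any(x - step <= 0):   # damp to stay strictly feasible
            step *= 0.5
        x = x - step
    r = A @ x - b                      # inexact gradient of d_mu at y
    val = x @ x + y @ r - mu * np.sum(np.log(x))
    return val, r, x

# Toy slave problem with a single coupling constraint x[0] + x[1] = 1.
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
val, grad_d, x = smoothed_dual_oracle(
    y=np.zeros(1), mu=0.1, A=A, b=b, x=np.array([0.5, 0.5]),
)
```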

    DuQuad: an inexact (augmented) dual first order algorithm for quadratic programming

    In this paper we present DuQuad, a solver specialized for general convex quadratic problems arising in many engineering applications. When it is difficult to project onto the primal feasible set, we use the (augmented) Lagrangian relaxation to handle the complicated constraints and then apply dual first-order algorithms, based on inexact dual gradient information, to solve the corresponding dual problem. The iteration complexity analysis is based on two types of approximate primal solutions: the last primal iterate and an average of primal iterates. We provide computational complexity estimates on the primal suboptimality and feasibility violation of the generated approximate primal solutions. These algorithms are implemented in the programming language C in DuQuad and optimized for low iteration complexity and low memory footprint. DuQuad has a dynamic Matlab interface which makes testing, comparing, and analyzing the algorithms simple. The algorithms are implemented using only basic arithmetic and logical operations and are suitable for low-cost hardware. We show that when an approximate solution suffices for a given application, there exist problems on which some of the implemented algorithms obtain the solution faster than state-of-the-art commercial solvers. Comment: 25 pages, 6 figures
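    A minimal sketch, under assumed problem data, of the dual first-order scheme this line of work builds on: for min 0.5 x'Qx + q'x s.t. Ax = b, the dual gradient is the constraint residual at the Lagrangian minimizer, and both primal recoveries mentioned in the abstract (last iterate vs. running average) are tracked. The inner solve is exact here; DuQuad's analysis allows it to be inexact.

```python
import numpy as np

def dual_gradient_qp(Q, q, A, b, T=2000, alpha=0.05):
    lam = np.zeros(len(b))
    x_avg = np.zeros(len(q))
    for t in range(1, T + 1):
        # Lagrangian minimizer; exact here, inexact in DuQuad's setting
        x = np.linalg.solve(Q, -(q + A.T @ lam))
        lam = lam + alpha * (A @ x - b)    # dual gradient ascent
        x_avg += (x - x_avg) / t           # averaged primal iterate
    return x, x_avg, lam                   # last iterate vs. average

Q = np.eye(2); q = np.zeros(2)
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
x_last, x_avg, lam = dual_gradient_qp(Q, q, A, b)  # both approach (0.5, 0.5)
```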

    Asynchronous parallel primal-dual block coordinate update methods for affinely constrained convex programs

    Recent years have witnessed a surge of asynchronous (async-) parallel computing methods, driven by the extremely large datasets involved in many modern applications and by the advancement of multi-core machines and computer clusters. In optimization, most work on async-parallel methods addresses unconstrained problems or those with block-separable constraints. In this paper, we propose an async-parallel method based on block coordinate update (BCU) for solving convex problems with a nonseparable linear constraint. Running on a single node, the method becomes a novel randomized primal-dual BCU with adaptive stepsize for multi-block affinely constrained problems. For these problems, Gauss-Seidel cyclic primal-dual BCU needs strong convexity to guarantee convergence. In contrast, assuming mere convexity, we show that the objective value sequence generated by the proposed algorithm converges in probability to the optimal value and that the constraint residual converges to zero. In addition, we establish an ergodic $O(1/k)$ convergence result, where $k$ is the number of iterations. Numerical experiments demonstrate the efficiency of the proposed method and its significantly better speed-up performance than its sync-parallel counterpart.
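    A minimal sketch, with an assumed smooth objective and constraint Ax = b, of the single-node specialization described in the abstract: a randomized primal-dual block coordinate update, where each iteration touches one uniformly sampled block and then updates the multiplier. The step sizes here are fixed heuristics, not the paper's adaptive rule.

```python
import numpy as np

rng = np.random.default_rng(1)

def randomized_pd_bcu(grad_f, A, b, x0, blocks, beta=1.0, eta=0.1, tau=0.1,
                      iters=2000):
    x, y = x0.copy(), np.zeros(len(b))
    for _ in range(iters):
        idx = blocks[rng.integers(len(blocks))]    # uniform block sampling
        r = A @ x - b
        # block gradient of the augmented Lagrangian
        g = grad_f(x)[idx] + A[:, idx].T @ (y + beta * r)
        x[idx] -= eta * g                          # update the chosen block only
        y += tau * (A @ x - b)                     # multiplier update
    return x, y

# Toy problem: minimize ||x||^2 subject to sum(x) = 1, in two blocks.
A = np.array([[1.0, 1.0, 1.0, 1.0]]); b = np.array([1.0])
blocks = [np.array([0, 1]), np.array([2, 3])]
x, y = randomized_pd_bcu(lambda v: 2 * v, A, b, np.zeros(4), blocks)
# x tends toward (0.25, 0.25, 0.25, 0.25).
```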

    Dual Smoothing and Level Set Techniques for Variational Matrix Decomposition

    We focus on the robust principal component analysis (RPCA) problem and review a range of old and new convex formulations for the problem and its variants. We then review dual smoothing and level set techniques in convex optimization, present several novel theoretical results, and apply the techniques to the RPCA problem. In the final sections, we show a range of numerical experiments on simulated and real-world problems. Comment: 38 pages, 10 figures. arXiv admin note: text overlap with arXiv:1406.108
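    A minimal sketch of the standard convex RPCA formulation, min ||L||_* + lambda * ||S||_1 s.t. L + S = M, via its key proximal building blocks: singular value thresholding (the prox of the nuclear norm) and entrywise soft thresholding (the prox of the l1 norm). The alternating step shown is illustrative, not the paper's dual smoothing or level set scheme.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of tau * ||.||_* at X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft_threshold(X, tau):
    """Entrywise soft thresholding: prox of tau * ||.||_1 at X."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

# One alternating step on a toy low-rank + sparse matrix M = L0 + S0.
rng = np.random.default_rng(2)
L0 = np.outer(rng.standard_normal(5), rng.standard_normal(5))  # rank one
S0 = np.zeros((5, 5)); S0[0, 0] = 5.0                          # one outlier
M = L0 + S0
L = svt(M - S0, tau=0.1)             # nuclear-norm prox step
S = soft_threshold(M - L, tau=0.1)   # l1 prox step
```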