5 research outputs found

    Iteration complexity analysis of dual first order methods for conic convex programming

    In this paper we provide a detailed analysis of the iteration complexity of dual first order methods for solving conic convex problems. When it is difficult to project onto the primal feasible set described by convex constraints, we use Lagrangian relaxation to handle the complicated constraints and then apply dual first order algorithms to the corresponding dual problem. We give a convergence analysis for dual first order algorithms (dual gradient and fast gradient algorithms): we provide sublinear or linear estimates on the primal suboptimality and feasibility violation of the generated approximate primal solutions. Our analysis relies on the Lipschitz property of the gradient of the dual function or on an error bound property of the dual. Furthermore, the iteration complexity analysis is based on two types of approximate primal solutions: the last primal iterate or an average primal sequence. Comment: 37 pages, 6 figures.
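
    To make the setup concrete, the sketch below shows a dual gradient method of the kind analyzed in the paper, for a toy problem min_x f(x) s.t. Ax = b with a quadratic f so the Lagrangian minimizer has a closed form; the function names and step size are illustrative assumptions, not the paper's implementation, and the loop also forms the average primal sequence mentioned above.

        # Illustrative dual gradient method for  min_x 0.5 x'Qx + c'x  s.t.  Ax = b,
        # using the Lagrangian L(x, y) = f(x) + y'(Ax - b).  The paper treats
        # general conic convex problems; this toy version only shows the mechanics.
        import numpy as np

        def dual_gradient(Q, c, A, b, step, iters=500):
            y = np.zeros(A.shape[0])          # dual multipliers
            x_avg = np.zeros(A.shape[1])      # running average of primal iterates
            for k in range(1, iters + 1):
                # Primal minimizer of the Lagrangian (closed form for quadratic f)
                x = np.linalg.solve(Q, -(c + A.T @ y))
                # The gradient of the dual function is the constraint residual
                y = y + step * (A @ x - b)
                # Average primal sequence: one of the two approximate primal
                # solutions analyzed in the paper (the other is the last iterate)
                x_avg += (x - x_avg) / k
            return x, x_avg, y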

    Augmented Lagrangian Optimization under Fixed-Point Arithmetic

    In this paper, we propose an inexact Augmented Lagrangian Method (ALM) for the optimization of convex and nonsmooth objective functions subject to linear equality constraints and box constraints, where errors are due to fixed-point data. To prevent data overflow we also introduce a projection operation in the multiplier update. We analyze the proposed algorithm theoretically and provide convergence rate results and bounds on the accuracy of the optimal solution. Since iterative methods are often needed to solve the primal subproblem in ALM, we also propose an early stopping criterion that is simple to implement on embedded platforms, can be used for problems that are not strongly convex, and guarantees the precision of the primal update. To the best of our knowledge, this is the first fixed-point ALM that can handle nonsmooth problems and data overflow and that can efficiently and systematically utilize iterative solvers in the primal update. Numerical simulation studies on a utility maximization problem are presented that illustrate the proposed method.
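
    As a rough illustration of the projected multiplier update described above, the sketch below shows an augmented Lagrangian loop in which the multipliers are clipped to a bounded box to prevent overflow; the inner solver, penalty parameter, and bound are hypothetical placeholders rather than the paper's algorithm.

        # Hypothetical augmented Lagrangian loop with a projected (clipped)
        # multiplier update for  min f(x)  s.t.  Ax = b,  x in a box.
        # `solve_subproblem(y, rho)` stands in for an inexact solver of the
        # augmented Lagrangian subproblem, e.g. an iterative method stopped
        # early as in the paper's stopping criterion.
        import numpy as np

        def projected_alm(solve_subproblem, A, b, rho, y_bound, outer_iters=50):
            y = np.zeros(A.shape[0])
            x = None
            for _ in range(outer_iters):
                # Inexact primal step: approximately minimize
                #   f(x) + y'(Ax - b) + (rho/2) ||Ax - b||^2  over the box
                x = solve_subproblem(y, rho)
                # Multiplier update projected onto [-y_bound, y_bound]^m
                # to keep the fixed-point representation from overflowing
                y = np.clip(y + rho * (A @ x - b), -y_bound, y_bound)
            return x, y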

    On the non-ergodic convergence rate of an inexact augmented Lagrangian framework for composite convex programming

    In this paper, we consider the linearly constrained composite convex optimization problem, whose objective is the sum of a smooth function and a possibly nonsmooth function. We propose an inexact augmented Lagrangian (IAL) framework for solving the problem. The stopping criterion used in solving the augmented Lagrangian (AL) subproblem in the proposed IAL framework is weaker and potentially much easier to check than the one used in most existing IAL frameworks/methods. We analyze the global convergence and the non-ergodic convergence rate of the proposed IAL framework. Comment: accepted in Mathematics of Operations Research. arXiv admin note: text overlap with arXiv:1507.0762.
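
    The sketch below illustrates the general shape of an inexact AL iteration in which each subproblem is solved only up to a tolerance eps_k; the tolerance schedule and inner accuracy measure are illustrative assumptions, since the paper's weaker stopping criterion is precisely its contribution.

        # Illustrative inexact augmented Lagrangian (IAL) loop for
        #   min_x  f(x) + g(x)   s.t.  Ax = b,
        # where `approx_min_AL(y, rho, eps)` approximately minimizes the AL
        # function until its own (easy-to-check) accuracy measure is below eps.
        import numpy as np

        def inexact_al(approx_min_AL, A, b, rho, outer_iters=100):
            y = np.zeros(A.shape[0])
            x = None
            for k in range(1, outer_iters + 1):
                eps_k = 1.0 / k**2                 # placeholder tolerance schedule
                x = approx_min_AL(y, rho, eps_k)   # inexact primal step
                y = y + rho * (A @ x - b)          # exact multiplier update
            return x, y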

    On the Complexity Analysis of the Primal Solutions for the Accelerated Randomized Dual Coordinate Ascent

    Dual first-order methods are essential techniques for large-scale constrained convex optimization. However, when recovering the primal solutions, we need $T(\epsilon^{-2})$ iterations to achieve an $\epsilon$-optimal primal solution when we apply an algorithm to the non-strongly convex dual problem with $T(\epsilon^{-1})$ iterations to achieve an $\epsilon$-optimal dual solution, where $T(x)$ can be $x$ or $\sqrt{x}$. In this paper, we prove that the iteration complexities of the primal solutions and the dual solutions have the same $O\left(\frac{1}{\sqrt{\epsilon}}\right)$ order of magnitude for the accelerated randomized dual coordinate ascent. When the dual function further satisfies the quadratic functional growth condition, by restarting the algorithm at any period we establish linear iteration complexity for both the primal solutions and the dual solutions, even if the condition number is unknown. When applied to the regularized empirical risk minimization problem, we prove an iteration complexity of $O\left(n\log n+\sqrt{\frac{n}{\epsilon}}\right)$ in both the primal space and the dual space, where $n$ is the number of samples. Our result removes the $\log\frac{1}{\epsilon}$ factor compared with methods based on smoothing/regularization or Catalyst reduction. As far as we know, this is the first time that the optimal $O\left(\sqrt{\frac{n}{\epsilon}}\right)$ iteration complexity in the primal space has been established for dual coordinate ascent based stochastic algorithms. We also establish accelerated linear complexity for some problems with nonsmooth loss, i.e., the least absolute deviation and the SVM.
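
    The restart idea mentioned above can be sketched as a simple wrapper around a black-box accelerated solver: run it for a fixed period, then restart from its own output. The solver name and period below are placeholders; under quadratic functional growth such restarting yields linear convergence without knowledge of the growth constant.

        # Hypothetical fixed-period restart wrapper.  `accelerated_method`
        # stands in for the accelerated randomized dual coordinate ascent
        # solver; each call runs `period` iterations from the given point.
        def restarted(accelerated_method, y0, period, n_restarts):
            y = y0
            for _ in range(n_restarts):
                y = accelerated_method(y, iters=period)
            return y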

    Generalizing the optimized gradient method for smooth convex minimization

    This paper generalizes the optimized gradient method (OGM) that achieves the optimal worst-case cost function bound of first-order methods for smooth convex minimization. Specifically, this paper studies a generalized formulation of OGM and analyzes its worst-case rates in terms of both the function value and the norm of the function gradient. This paper also develops a new algorithm called OGM-OG that lies in the generalized family of OGM and has the best known analytical worst-case bound, with rate $O(1/N^{1.5})$, on the decrease of the gradient norm among fixed-step first-order methods. This paper also proves that Nesterov's fast gradient method has an $O(1/N^{1.5})$ worst-case gradient norm rate, but with a larger constant than OGM-OG. The proofs are based on the worst-case analysis called the Performance Estimation Problem.
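
    For reference, the sketch below writes out the commonly cited OGM recursion for an L-smooth convex f, which is the baseline this paper generalizes; the coefficient recursion is reproduced from the standard description of OGM and should be checked against the paper before use.

        # OGM-style iteration for minimizing an L-smooth convex f over R^d.
        # theta follows the usual recursion, with a modified update on the
        # final iteration; the two momentum terms distinguish OGM from
        # Nesterov's fast gradient method.
        import numpy as np

        def ogm(grad_f, x0, L, N):
            x = np.asarray(x0, dtype=float)
            y = x.copy()
            theta = 1.0
            for i in range(N):
                y_next = x - grad_f(x) / L                  # plain gradient step
                factor = 8.0 if i == N - 1 else 4.0         # final-step modification
                theta_next = 0.5 * (1.0 + np.sqrt(1.0 + factor * theta**2))
                x = (y_next
                     + ((theta - 1.0) / theta_next) * (y_next - y)
                     + (theta / theta_next) * (y_next - x))
                y, theta = y_next, theta_next
            return x                                        # guarantee is on the last x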