
    Optimizing condition numbers

    In this paper we study the problem of minimizing condition numbers over a compact convex subset of the cone of symmetric positive semidefinite n×n matrices. We show that the condition number is a Clarke regular strongly pseudoconvex function. We prove that a global solution of the problem can be approximated by an exact or an inexact solution of a nonsmooth convex program. This asymptotic analysis provides a valuable tool for designing an implementable algorithm for solving the problem of minimizing condition numbers.
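
    A concrete sketch may help fix ideas. The snippet below evaluates the condition number κ(X) = λ_max(X)/λ_min(X) of a symmetric positive definite matrix together with its gradient at points where the extreme eigenvalues are simple (elsewhere only Clarke subgradients are available); the function name and the test matrix are illustrative, and this is not the paper's algorithm.

        import numpy as np

        def cond_and_grad(X):
            """Condition number of SPD X and its gradient where the extreme
            eigenvalues are simple; elsewhere one only has Clarke subgradients."""
            w, V = np.linalg.eigh(X)          # eigenvalues in ascending order
            lmin, lmax = w[0], w[-1]
            v, u = V[:, 0], V[:, -1]          # eigenvectors for lmin, lmax
            kappa = lmax / lmin
            # for simple eigenvalues: d(lmax)/dX = u u^T and d(lmin)/dX = v v^T
            grad = np.outer(u, u) / lmin - (kappa / lmin) * np.outer(v, v)
            return kappa, grad

        X = np.array([[4.0, 1.0], [1.0, 2.0]])
        kappa, grad = cond_and_grad(X)
        print(kappa)                          # (3 + sqrt(2)) / (3 - sqrt(2)) ≈ 2.78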

    Lagrange optimality system for a class of nonsmooth convex optimization

    In this paper, we revisit the augmented Lagrangian method for a class of nonsmooth convex optimization problems. We present the Lagrange optimality system of the augmented Lagrangian associated with these problems, and establish its connections with the standard optimality condition and the saddle point condition of the augmented Lagrangian, which provides a powerful tool for developing numerical algorithms. We apply a linear Newton method to the Lagrange optimality system to obtain a novel algorithm applicable to a variety of nonsmooth convex optimization problems arising in practical applications. Under suitable conditions, we prove the nonsingularity of the Newton system and the local convergence of the algorithm.
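
    For reference, here is a minimal sketch of the classical augmented Lagrangian scheme that the paper revisits, applied to the illustrative nonsmooth problem min ‖x‖_1 subject to Ax = b, with the inner subproblem solved inexactly by proximal gradient steps. All names are illustrative, and this is the standard method, not the paper's linear Newton algorithm.

        import numpy as np

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def augmented_lagrangian_l1(A, b, rho=1.0, outer=50, inner=200):
            """min ||x||_1 s.t. Ax = b via the classical augmented Lagrangian
            L_rho(x, lam) = ||x||_1 + lam^T(Ax - b) + (rho/2)||Ax - b||^2."""
            m, n = A.shape
            x, lam = np.zeros(n), np.zeros(m)
            step = 1.0 / (rho * np.linalg.norm(A, 2) ** 2)  # 1/Lipschitz constant
            for _ in range(outer):
                # inexact inner solve: proximal gradient on the smooth part of L_rho
                for _ in range(inner):
                    grad = A.T @ (lam + rho * (A @ x - b))
                    x = soft_threshold(x - step * grad, step)
                lam = lam + rho * (A @ x - b)               # multiplier update
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((10, 30))
        x_true = np.zeros(30); x_true[:3] = [1.0, -2.0, 0.5]
        b = A @ x_true
        x = augmented_lagrangian_l1(A, b)
        print(np.linalg.norm(A @ x - b))      # feasibility residual, should be small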

    Forward-backward truncated Newton methods for convex composite optimization

    This paper proposes two proximal Newton-CG methods for convex nonsmooth optimization problems in composite form. The algorithms are based on a reformulation of the original nonsmooth problem as the unconstrained minimization of a continuously differentiable function, namely the forward-backward envelope (FBE). The first algorithm is based on a standard line search strategy, whereas the second one retains the global efficiency estimates of the corresponding first-order methods while achieving fast asymptotic convergence rates. Furthermore, the methods are computationally attractive since each Newton iteration requires only the approximate solution of a linear system of usually small dimension.
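
    To make the envelope concrete, the sketch below evaluates the FBE for the composite problem min f(x) + g(x) with g = λ‖·‖_1, using the standard formula FBE_γ(x) = f(x) − ∇f(x)^T(x − T(x)) + (1/(2γ))‖x − T(x)‖^2 + g(T(x)), where T(x) = prox_{γg}(x − γ∇f(x)) is the forward-backward step. The least-squares f and all names here are illustrative assumptions.

        import numpy as np

        def fbe_l1(x, f, grad_f, lam, gamma):
            """Forward-backward envelope of f + lam*||.||_1 at x, step gamma."""
            y = x - gamma * grad_f(x)                      # forward (gradient) step
            T = np.sign(y) * np.maximum(np.abs(y) - gamma * lam, 0.0)  # prox step
            r = x - T
            return f(x) - grad_f(x) @ r + (r @ r) / (2 * gamma) + lam * np.abs(T).sum()

        # illustrative smooth part: f(x) = 0.5*||Ax - b||^2
        rng = np.random.default_rng(0)
        A = rng.standard_normal((20, 50)); b = rng.standard_normal(20)
        f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
        grad_f = lambda x: A.T @ (A @ x - b)
        gamma = 0.9 / np.linalg.norm(A, 2) ** 2            # gamma < 1/L(grad f)
        x = np.zeros(50)
        # the FBE is real-valued and never exceeds the composite objective
        print(fbe_l1(x, f, grad_f, 0.1, gamma) <= f(x) + 0.1 * np.abs(x).sum())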

    On Quasi-Newton Forward-Backward Splitting: Proximal Calculus and Convergence

    We introduce a framework for quasi-Newton forward-backward splitting algorithms (proximal quasi-Newton methods) with a metric induced by diagonal ± rank-r symmetric positive definite matrices. This special type of metric allows for a highly efficient evaluation of the proximal mapping. The key to this efficiency is a general proximal calculus in the new metric. By using duality, formulas are derived that relate the proximal mapping in a rank-r modified metric to the proximal mapping in the original metric. We also describe efficient implementations of the proximity calculation for a large class of functions; the implementations exploit the piecewise linear nature of the dual problem. We then apply these results to the acceleration of composite convex minimization problems, which leads to elegant quasi-Newton methods for which we prove convergence. The algorithm is tested on several numerical examples and compared to a comprehensive list of alternatives in the literature. Our quasi-Newton splitting algorithm with the prescribed metric compares favorably against the state of the art. The algorithm has extensive applications, including signal processing, sparse recovery, machine learning, and classification, to name a few.
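
    The rank-one case of this proximal calculus admits a short illustration: for V = D + u u^T with D diagonal and positive definite, the optimality condition reduces the V-metric prox to a D-metric prox shifted along D^{-1}u by a scalar α, found by one-dimensional root finding. The sketch below works this out for g = λ‖·‖_1; it is a derivation-based illustration under these assumptions, not the paper's implementation, which exploits the piecewise linear dual directly.

        import numpy as np
        from scipy.optimize import brentq

        def prox_l1_diag(y, d, lam):
            """prox of lam*||.||_1 in the metric D = diag(d):
            argmin_z lam*||z||_1 + 0.5*(z - y)^T D (z - y), coordinate-wise."""
            return np.sign(y) * np.maximum(np.abs(y) - lam / d, 0.0)

        def prox_l1_rank1(x, d, u, lam):
            """prox of lam*||.||_1 in the metric V = diag(d) + u u^T.
            With alpha = u^T(z - x), optimality gives z = prox_D(x - alpha*u/d)
            plus a scalar equation in alpha."""
            def residual(alpha):
                z = prox_l1_diag(x - alpha * u / d, d, lam)
                return alpha - u @ (z - x)   # strictly increasing in alpha
            lo, hi = -1.0, 1.0               # bracket the unique root
            while residual(lo) > 0:
                lo *= 2.0
            while residual(hi) < 0:
                hi *= 2.0
            alpha = brentq(residual, lo, hi)
            return prox_l1_diag(x - alpha * u / d, d, lam)

        rng = np.random.default_rng(0)
        x, u, d = rng.standard_normal(5), 0.3 * rng.standard_normal(5), np.ones(5)
        z = prox_l1_rank1(x, d, u, lam=0.5)
        V = np.diag(d) + np.outer(u, u)
        # optimality check: V(x - z) must lie in lam * subdifferential of ||z||_1
        print(np.round(V @ (x - z), 3))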