A family of variable metric proximal methods
An updated version of this paper has appeared in Mathematical Programming, no. 68 (1995), pp. 15-47, DOI 10.1007/BF01585756. We consider conceptual optimization methods combining two ideas: Moreau-Yosida regularization from convex analysis and quasi-Newton approximations of smooth functions.
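As a minimal numeric sketch of the first idea, consider the Moreau-Yosida regularization of the toy function f(x) = |x| (the one-dimensional setting and function names below are illustrative assumptions, not from the paper). The resulting envelope is the smooth Huber function, which is exactly the kind of regularized objective to which a quasi-Newton approximation can then be applied.

```python
import numpy as np

def moreau_envelope_abs(x, lam):
    """Moreau-Yosida regularization of f(x) = |x| with parameter lam:
        F_lam(x) = min_y |y| + (x - y)**2 / (2*lam).
    The minimizer is prox_{lam*|.|}(x); the envelope is the smooth
    Huber function. One-dimensional illustrative sketch only.
    """
    p = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)  # proximal point of x
    return np.abs(p) + (x - p) ** 2 / (2.0 * lam)

# For |x| <= lam the envelope is x**2/(2*lam); beyond that, |x| - lam/2.
print(moreau_envelope_abs(np.array([0.5, 3.0]), lam=1.0))  # [0.125, 2.5]
```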
Variable Metric Forward-Backward Splitting with Applications to Monotone Inclusions in Duality
We propose a variable metric forward-backward splitting algorithm and prove
its convergence in real Hilbert spaces. We then use this framework to derive
primal-dual splitting algorithms for solving various classes of monotone
inclusions in duality. Some of these algorithms are new even when specialized
to the fixed metric case. Various applications are discussed.
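As a concrete illustration of the forward-backward template with a variable (here diagonal) metric, the sketch below applies it to the lasso problem min 0.5*||Ax - b||^2 + lam*||x||_1. The diagonal Jacobi-style metric and the problem choice are illustrative assumptions; the paper's algorithm is stated for general monotone inclusions with conditions on the metric sequence that this toy code does not verify.

```python
import numpy as np

def soft_threshold(z, t):
    # prox of t*||.||_1: componentwise soft-thresholding (t may be a vector).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def diag_metric_forward_backward(A, b, lam, n_iter=500):
    """Forward-backward splitting for 0.5*||Ax - b||**2 + lam*||x||_1
    in the diagonal metric V = diag(d). Since M <= n*diag(M) for any
    n x n positive semidefinite M, scaling the diagonal of A^T A by n
    makes V majorize the Hessian A^T A, so each step is a descent step.
    Illustrative sketch, not the paper's general scheme.
    """
    m, n = A.shape
    d = n * np.maximum(np.sum(A * A, axis=0), 1e-12)  # n * diag(A^T A)
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                   # forward (gradient) step
        x = soft_threshold(x - grad / d, lam / d)  # backward (prox) step in V
    return x
```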
An asymptotically superlinearly convergent semismooth Newton augmented Lagrangian method for Linear Programming
Powerful commercial solvers based on interior-point methods (IPMs), such as
Gurobi and Mosek, have been hugely successful in solving large-scale linear
programming (LP) problems. The high efficiency of these solvers depends
critically on the sparsity of the problem data and on advanced matrix
factorization techniques. For a large-scale LP problem whose data matrix is
dense (possibly structured), or whose corresponding normal matrix has a dense
Cholesky factor (even with re-ordering), these solvers may incur excessive
computational cost and/or extremely heavy memory usage in each interior-point
iteration. Unfortunately, the natural remedy, i.e., the use of IPM solvers
based on iterative methods, is not practically viable even though it avoids
the explicit computation of the coefficient matrix and its factorization: the
large-scale normal equation arising in each interior-point iteration is
inherently and extremely ill-conditioned. To provide a better alternative for
solving large-scale LPs with dense data, or whose normal equations require
expensive factorizations, we propose a semismooth Newton based inexact
proximal augmented Lagrangian ({\sc Snipal}) method. Unlike classical IPMs,
each iteration of {\sc Snipal} allows iterative methods to be used
efficiently to solve simpler yet better conditioned semismooth Newton linear
systems. Moreover, {\sc Snipal} not only enjoys fast asymptotic superlinear
convergence but is also proven to possess a finite termination
property. Numerical comparisons with Gurobi have demonstrated encouraging
potential of {\sc Snipal} for handling large-scale LP problems where the
constraint matrix has a dense representation or has a dense
factorization even with an appropriate re-ordering.
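The semismooth Newton idea at the core of the method can be illustrated on a much simpler nonsmooth system. The sketch below is a standard textbook construction, not the Snipal algorithm: it applies a semismooth Newton iteration to the linear complementarity residual F(x) = min(x, Qx + q), selecting one element of the generalized Jacobian at each step and solving a linear system with it.

```python
import numpy as np

def semismooth_newton_lcp(Q, q, max_iter=50, tol=1e-10):
    """Semismooth Newton for the complementarity residual
    F(x) = min(x, Q x + q) (componentwise minimum); a generic
    illustration of a Newton step on a nonsmooth equation.
    """
    n = len(q)
    x = np.zeros(n)
    for _ in range(max_iter):
        w = Q @ x + q
        F = np.minimum(x, w)
        if np.linalg.norm(F) < tol:
            break
        # One element of the generalized Jacobian: identity rows where
        # x_i <= w_i (the residual equals x_i there), rows of Q elsewhere.
        J = np.where((x <= w)[:, None], np.eye(n), Q)
        x = x + np.linalg.solve(J, -F)
    return x

# Tiny example with a positive definite Q; the solution is interior here.
Q = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
print(semismooth_newton_lcp(Q, q))  # approx [1/3, 1/3]
```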
A General Approach to Convergence Properties of Some Methods for Nonsmooth Convex Optimization
Based on the notion of the ε-subgradient, we present a unified technique to establish convergence properties of several methods for nonsmooth convex minimization problems. Starting from the technical results, we obtain the global convergence of: (i) the variable metric proximal methods presented by Bonnans, Gilbert, Lemaréchal, and Sagastizábal, (ii) some algorithms proposed by Correa and Lemaréchal, and (iii) the proximal point algorithm given by Rockafellar. In particular, we prove that the Rockafellar-Todd phenomenon does not occur for any of the above mentioned methods. Moreover, we explore the convergence rate of {||x_k||} and {f(x_k)} when {x_k} is unbounded and {f(x_k)} is bounded for the nonsmooth minimization methods (i), (ii), and (iii).
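For concreteness, here is a one-dimensional numeric sketch of Rockafellar's proximal point algorithm from item (iii), with the proximal subproblem solved by a generic scalar minimizer; the test function, step parameter, and iteration count are illustrative assumptions.

```python
from scipy.optimize import minimize_scalar

def prox(f, z, c):
    # Numerically evaluate prox_{c f}(z) = argmin_x f(x) + (x - z)**2 / (2*c).
    return minimize_scalar(lambda x: f(x) + (x - z) ** 2 / (2.0 * c)).x

def proximal_point(f, x0, c=1.0, n_iter=25):
    """Proximal point iteration x_{k+1} = prox_{c f}(x_k) for a convex
    scalar f; each step decreases f unless x_k is already optimal.
    """
    x = x0
    for _ in range(n_iter):
        x = prox(f, x, c)
    return x

# Nonsmooth example: f(x) = |x - 1| attains its minimum at x = 1.
print(proximal_point(lambda x: abs(x - 1.0), x0=5.0))  # -> close to 1.0
```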
Forward-backward truncated Newton methods for convex composite optimization
This paper proposes two proximal Newton-CG methods for convex nonsmooth
optimization problems in composite form. The algorithms are based on a
reformulation of the original nonsmooth problem as the unconstrained
minimization of a continuously differentiable function, namely the
forward-backward envelope (FBE). The first algorithm is based on a standard
line search strategy, whereas the second one retains the global efficiency
estimates of the corresponding first-order methods while achieving fast
asymptotic convergence rates. Furthermore, they are computationally attractive
since each Newton iteration requires the approximate solution of a linear
system of usually small dimension.
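To indicate what the FBE looks like in a concrete case, the sketch below evaluates it for a composite problem f(x) + lam*||x||_1 with smooth f, using the standard identity FBE_gamma(x) = f(x) + <grad f(x), z - x> + g(z) + ||z - x||^2/(2*gamma), where z = prox_{gamma*g}(x - gamma*grad f(x)); the quadratic test function and parameter values are illustrative assumptions.

```python
import numpy as np

def fbe_l1(x, f, grad_f, lam, gamma):
    """Forward-backward envelope of f(x) + lam*||x||_1:
        FBE(x) = f(x) + <grad f(x), z - x> + lam*||z||_1
                 + ||z - x||**2 / (2*gamma),
    with z = prox_{gamma*lam*||.||_1}(x - gamma*grad f(x)).
    gamma should satisfy the usual bound gamma < 1/L for L-Lipschitz grad f.
    """
    g = grad_f(x)
    y = x - gamma * g
    z = np.sign(y) * np.maximum(np.abs(y) - gamma * lam, 0.0)  # prox step
    return (f(x) + g @ (z - x) + lam * np.sum(np.abs(z))
            + np.sum((z - x) ** 2) / (2.0 * gamma))

# Quadratic test function f(x) = 0.5*||x||**2 (illustrative assumption).
f = lambda x: 0.5 * np.sum(x ** 2)
grad_f = lambda x: x
print(fbe_l1(np.array([2.0, -0.5]), f, grad_f, lam=1.0, gamma=0.5))
```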