Nonconvex optimization using negative curvature within a modified linesearch
This paper describes a new algorithm for the solution of nonconvex unconstrained optimization problems, with the
property of converging to points satisfying second-order necessary optimality conditions. The algorithm is based on a procedure
which, from two descent directions, a Newton-type direction and a direction of negative curvature, selects in each
iteration the linesearch model best adapted to the properties of these directions. The paper also presents results of numerical
experiments that illustrate its practical efficiency.
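The selection between a Newton-type direction and a direction of negative curvature can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the helper name, the eigenvalue clamping, and the tolerance are assumptions, and the curvature direction is simply taken as the eigenvector of the most negative Hessian eigenvalue, oriented downhill.

```python
import numpy as np

def newton_and_curvature_directions(grad, hess):
    """Hypothetical helper: return a modified Newton direction and,
    when the Hessian is indefinite, a direction of negative curvature."""
    w, V = np.linalg.eigh(hess)          # eigenvalues in ascending order
    # Modified Newton: replace eigenvalues by their magnitudes (clamped
    # away from zero) so the resulting system is positive definite.
    w_pd = np.maximum(np.abs(w), 1e-8)
    d_newton = -V @ ((V.T @ grad) / w_pd)
    d_curv = None
    if w[0] < 0:                          # Hessian is indefinite
        d_curv = V[:, 0]                  # most-negative-curvature eigenvector
        if grad @ d_curv > 0:             # orient it downhill
            d_curv = -d_curv
    return d_newton, d_curv
```

A linesearch method would then pick a step model based on which of the two directions promises more decrease, e.g. comparing the predicted reductions along each.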
An augmented Lagrangian interior-point method using directions of negative curvature
We describe an efficient implementation of an interior-point algorithm for non-convex problems
that uses directions of negative curvature. These directions should ensure convergence to second-order KKT
points and improve the computational efficiency of the procedure. Some relevant aspects of the implementation
are the strategy to combine a direction of negative curvature and a modified Newton direction, and
the conditions to ensure feasibility of the iterates with respect to the simple bounds. The use of multivariate
barrier and penalty parameters is also discussed, as well as the update rules for these parameters. We analyze
the convergence of the procedure and show that both the linesearch and the update rule for the barrier parameter behave
appropriately. As the main goal of the paper is the practical usage of negative curvature, a set of numerical
results on small test problems is presented. Based on these results, the relevance of using directions of negative
curvature is discussed. Research supported by Spanish MEC grants TIC2000-1750-C06-04 and BEC2000-0167.
Strong Metric (Sub)regularity of KKT Mappings for Piecewise Linear-Quadratic Convex-Composite Optimization
This work concerns the local convergence theory of Newton and quasi-Newton
methods for convex-composite optimization: minimize f(x):=h(c(x)), where h is
an infinite-valued proper convex function and c is C^2-smooth. We focus on the
case where h is infinite-valued piecewise linear-quadratic and convex. Such
problems include nonlinear programming, mini-max optimization, estimation of
nonlinear dynamics with non-Gaussian noise as well as many modern approaches to
large-scale data analysis and machine learning. Our approach embeds the
optimality conditions for convex-composite optimization problems into a
generalized equation. We establish conditions for strong metric subregularity
and strong metric regularity of the corresponding set-valued mappings. This
allows us to extend classical convergence of Newton and quasi-Newton methods to
the broader class of non-finite valued piecewise linear-quadratic
convex-composite optimization problems. In particular we establish local
quadratic convergence of the Newton method under conditions that parallel those
in nonlinear programming when h is non-finite valued piecewise linear-quadratic.
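A classical finite-valued instance of convex-composite optimization is nonlinear least squares, h(c(x)) with h = ½‖·‖², where the composite Newton-type step reduces to Gauss-Newton. The sketch below is an illustration of that special case only (the function names and the fixed iteration count are assumptions), not the generalized-equation framework of the paper.

```python
import numpy as np

def gauss_newton(c, J, x0, iters=20):
    """Sketch of Gauss-Newton for min 0.5*||c(x)||^2, a convex-composite
    problem with h = 0.5*||.||^2. Each step minimizes the convex model
    h(c(x) + J(x) d) over d via a linear least-squares solve."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d, *_ = np.linalg.lstsq(J(x), -c(x), rcond=None)
        x = x + d
    return x

# Hypothetical example: residuals vanish at the solution, so local
# convergence is fast (quadratic for this zero-residual problem).
c = lambda x: np.array([x[0]**2 + x[1] - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, -1.0]])
sol = gauss_newton(c, J, [1.0, 0.0])
```

When h is a general (possibly infinite-valued) piecewise linear-quadratic function, the inner model is no longer a least-squares problem but remains convex, which is what the strong metric (sub)regularity conditions exploit.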
Optimality conditions and a smoothing trust region Newton method for non-Lipschitz optimization
2013-2014 > Academic research: refereed > Publication in refereed journal. Version of Record.
A proximal method for composite minimization
We consider minimization of functions that are compositions of prox-regular functions with smooth vector functions. A wide variety of important optimization problems can be formulated in this way. We describe a subproblem constructed from a linearized approximation to the objective and a regularization term, investigating the properties of local solutions of this subproblem and showing that they eventually identify a manifold containing the solution of the original problem. We propose an algorithmic framework based on this subproblem and prove a global convergence result.
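The linearize-and-regularize subproblem has a closed form in simple cases. As a hedged illustration (the scalar setting, the function names, and the parameter mu are assumptions, not the paper's method): for minimizing |c(x)| with smooth scalar c, the subproblem min_d |c(x) + c'(x) d| + (mu/2) d² is solved by soft-thresholding the linearized residual.

```python
import numpy as np

def prox_linear_abs(c, dc, x0, mu=1.0, iters=50):
    """Sketch of a prox-linear iteration for minimizing |c(x)|, scalar x.
    Each step solves min_d |c(x) + c'(x)*d| + (mu/2)*d**2 in closed form."""
    x = x0
    for _ in range(iters):
        a, b = c(x), dc(x)                              # residual and slope
        t = np.sign(a) * max(abs(a) - b * b / mu, 0.0)  # soft-threshold
        d = (t - a) / b                                 # recover the step
        x += d
    return x

# Hypothetical example: |x^2 - 2| is minimized at x = sqrt(2).
root = prox_linear_abs(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```

Near a minimizer, the iterates settle onto the manifold where the nonsmooth residual is zero, which mirrors the manifold-identification property established in the paper.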