A Bregman forward-backward linesearch algorithm for nonconvex composite optimization: superlinear convergence to nonisolated local minima
We introduce Bella, a locally superlinearly convergent Bregman forward-backward
splitting method for minimizing the sum of two nonconvex functions, one of
which satisfies a relative smoothness condition while the other may be
nonsmooth. A key tool of our methodology is the Bregman forward-backward
envelope (BFBE), an exact and continuous penalty function with favorable
first- and second-order properties, which enjoys a nonlinear error bound when
the objective function satisfies a Łojasiewicz-type property. The proposed
algorithm performs a linesearch over the BFBE along candidate update
directions. It converges subsequentially to stationary points, converges
globally under a KL condition, and, owing to the given nonlinear error bound,
can attain superlinear convergence rates even when the limit point is a
nonisolated minimum, provided the directions are suitably selected.
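For reference, a minimal sketch of the Bregman forward-backward envelope in generic notation (this is the standard form of the construction, not necessarily the paper's exact statement): for a composite objective φ = f + g, with f relatively smooth with respect to a Legendre kernel h, and a stepsize γ > 0,

$$
\varphi_\gamma(x) \;=\; \inf_{w}\Big\{ f(x) + \langle \nabla f(x),\, w - x\rangle + g(w) + \tfrac{1}{\gamma}\, D_h(w, x) \Big\},
$$

where $D_h(w,x) = h(w) - h(x) - \langle \nabla h(x),\, w - x\rangle$ is the Bregman distance induced by $h$. Taking $h = \tfrac12\|\cdot\|^2$ recovers the Euclidean forward-backward envelope appearing in the entries below.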
Forward-backward truncated Newton methods for convex composite optimization
This paper proposes two proximal Newton-CG methods for convex nonsmooth
optimization problems in composite form. The algorithms are based on a
reformulation of the original nonsmooth problem as the unconstrained
minimization of a continuously differentiable function, namely the
forward-backward envelope (FBE). The first algorithm uses a standard line
search strategy, whereas the second combines the global efficiency estimates
of the corresponding first-order methods with fast asymptotic convergence
rates. Furthermore, both methods are computationally attractive, since each
Newton iteration requires only the approximate solution of a linear system of
usually small dimension.
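For context, the Euclidean FBE of a composite objective φ = f + g (with f smooth and g proper, closed, convex) admits the following standard closed form; the notation is generic and chosen here for illustration, not quoted from the paper. For a stepsize γ > 0,

$$
\varphi_\gamma(x) \;=\; f(x) \;-\; \frac{\gamma}{2}\,\big\|\nabla f(x)\big\|^2 \;+\; g^\gamma\!\big(x - \gamma \nabla f(x)\big),
$$

where $g^\gamma$ denotes the Moreau envelope of $g$. For suitably small $\gamma$ the minimizers of $\varphi_\gamma$ coincide with those of $\varphi$, which is what licenses the smooth unconstrained reformulation.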
Globally Convergent Coderivative-Based Generalized Newton Methods in Nonsmooth Optimization
This paper proposes and justifies two globally convergent Newton-type methods
for solving unconstrained and constrained problems of nonsmooth optimization
by using tools of variational analysis and generalized differentiation. Both
methods are coderivative-based and employ generalized Hessians (coderivatives
of subgradient mappings) associated with objective functions, which are either
of class $\mathcal{C}^{1,1}$ or are represented in the form of convex
composite optimization, where one of the terms may be extended-real-valued. The
proposed globally convergent algorithms are of two types. The first one extends
the damped Newton method and requires positive-definiteness of the generalized
Hessians for its well-posedness and efficient performance, while the other
algorithm is of the regularized Newton type and is well-defined when the
generalized Hessians are merely positive-semidefinite. The obtained convergence
rates for both methods are at least linear, but become superlinear under the
semismooth property of subgradient mappings. Problems of convex composite
optimization are investigated with and without the strong convexity assumption
on the smooth parts of objective functions by implementing the machinery of
forward-backward envelopes. Numerical experiments are conducted for Lasso
problems and for box-constrained quadratic programs, providing performance
comparisons of the new algorithms with some other first-order and second-order
methods that are highly recognized in nonsmooth optimization.
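To make the distinction between the two algorithm types concrete, here is a minimal Python sketch of the generic regularized-Newton idea, not the paper's coderivative-based machinery; the function names and the choice of `mu` are hypothetical. When the generalized Hessian is merely positive-semidefinite, a regularization term keeps the Newton system solvable.

```python
import numpy as np

def regularized_newton_step(grad, hess, x, mu=1e-6):
    """One generic regularized-Newton step: solve (H + mu*I) d = -g.

    mu > 0 keeps the linear system nonsingular even when H is merely
    positive-semidefinite; a damped Newton method would instead require
    H to be positive-definite and could take mu = 0.
    """
    g = grad(x)
    H = hess(x)
    d = np.linalg.solve(H + mu * np.eye(len(x)), -g)
    return x + d

# Toy usage on a convex quadratic f(x) = 0.5 * x^T A x with a
# merely positive-semidefinite A (a pure Newton step would fail here).
A = np.array([[2.0, 0.0],
              [0.0, 0.0]])
x1 = regularized_newton_step(lambda x: A @ x, lambda x: A,
                             np.array([1.0, 1.0]))
```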
A Simple and Efficient Algorithm for Nonlinear Model Predictive Control
We present PANOC, a new algorithm for solving optimal control problems
arising in nonlinear model predictive control (NMPC). A usual approach to this
type of problems is sequential quadratic programming (SQP), which requires the
solution of a quadratic program at every iteration and, consequently, inner
iterative procedures. As a result, when the problem is ill-conditioned or the
prediction horizon is large, each outer iteration becomes computationally very
expensive. We propose a line-search algorithm that combines forward-backward
iterations (FB) and Newton-type steps over the recently introduced
forward-backward envelope (FBE), a continuous, real-valued, exact merit
function for the original problem. The curvature information of Newton-type
methods enables asymptotic superlinear rates under mild assumptions at the
limit point, and the proposed algorithm is based on very simple operations:
access to first-order information of the cost and dynamics and low-cost direct
linear algebra. No inner iterative procedure nor Hessian evaluation is
required, making our approach computationally simpler than SQP methods. The
low-memory requirements and simple implementation make our method particularly
suited for embedded NMPC applications
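A minimal Python sketch of the averaged update that PANOC-style methods use, assuming an ℓ1-regularized least-squares setting for concreteness; the function names, the sufficient-decrease constant sigma, and the choice of candidate direction d are illustrative stand-ins, not the paper's exact scheme.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fbe(x, f, grad_f, lam, gamma):
    """Forward-backward envelope of f + lam*||.||_1 at x."""
    g = grad_f(x)
    xbar = soft_threshold(x - gamma * g, gamma * lam)
    return (f(x) + lam * np.sum(np.abs(xbar))
            + g @ (xbar - x) + (xbar - x) @ (xbar - x) / (2 * gamma))

def panoc_step(x, f, grad_f, lam, gamma, d, sigma=0.1):
    """One PANOC-style step: blend the forward-backward (FB) update with a
    candidate direction d, backtracking tau until the FBE decreases enough.
    In the full method, d would come from L-BFGS on the FB residual."""
    xbar = soft_threshold(x - gamma * grad_f(x), gamma * lam)
    r = x - xbar                                   # FB residual
    target = fbe(x, f, grad_f, lam, gamma) - sigma / (2 * gamma) * (r @ r)
    tau = 1.0
    while tau > 1e-8:
        x_new = x - (1.0 - tau) * r + tau * d      # tau=0 is the safe FB step
        if fbe(x_new, f, grad_f, lam, gamma) <= target:
            return x_new
        tau /= 2.0
    return xbar                                    # fall back to the FB step

# Toy usage on a Lasso-type objective f(x) = 0.5 * ||Ax - b||^2.
A = np.array([[1.0, 0.0], [0.0, 2.0]]); b = np.array([1.0, 1.0])
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
x0 = np.zeros(2)
d0 = soft_threshold(x0 - 0.1 * grad_f(x0), 0.05) - x0   # FB direction stand-in
x1 = panoc_step(x0, f, grad_f, lam=0.5, gamma=0.1, d=d0)
```

The fallback to the plain FB step when the line search exhausts tau mirrors how the method retains the global behavior of first-order FB iterations while letting Newton-type directions drive fast local convergence.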