A Bregman forward-backward linesearch algorithm for nonconvex composite optimization: superlinear convergence to nonisolated local minima
We introduce Bella, a locally superlinearly convergent Bregman forward-backward splitting method for minimizing the sum of two nonconvex functions, one of which satisfies a relative smoothness condition and the other of which may be nonsmooth. A key tool of our methodology is the Bregman forward-backward envelope (BFBE), an exact and continuous penalty function with favorable first- and second-order properties that enjoys a nonlinear error bound when the objective function satisfies a Łojasiewicz-type property. The proposed algorithm performs a linesearch over the BFBE along candidate update directions; it converges subsequentially to stationary points, converges globally under a Kurdyka-Łojasiewicz (KL) condition, and, owing to the nonlinear error bound, can attain superlinear convergence rates even when the limit point is a nonisolated minimum, provided the directions are suitably selected.
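For orientation, a common form of the Bregman forward-backward envelope is sketched below in LaTeX; the symbols used here (f for the relatively smooth term, g for the nonsmooth term, h for the Bregman kernel, and stepsize γ) are generic choices for illustration and may not match the paper's notation.

```latex
% Sketch: Bregman forward-backward envelope (BFBE) of \varphi = f + g,
% with f relatively smooth w.r.t. a convex kernel h.  Notation here is
% an illustrative assumption, not quoted from the paper.
\[
  \varphi_\gamma(x)
  \;=\;
  \min_{z}\Big\{ f(x) + \langle \nabla f(x),\, z - x\rangle
                 + g(z) + \tfrac{1}{\gamma}\, D_h(z, x) \Big\},
  \qquad
  D_h(z, x) \;=\; h(z) - h(x) - \langle \nabla h(x),\, z - x\rangle .
\]
% With the Euclidean kernel h = \tfrac12\|\cdot\|^2 this reduces to the
% standard forward-backward envelope (FBE).
```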
A Simple and Efficient Algorithm for Nonlinear Model Predictive Control
We present PANOC, a new algorithm for solving optimal control problems arising in nonlinear model predictive control (NMPC). A common approach to this type of problem is sequential quadratic programming (SQP), which requires the solution of a quadratic program at every iteration and, consequently, inner iterative procedures. As a result, when the problem is ill-conditioned or the prediction horizon is large, each outer iteration becomes computationally very expensive. We propose a line-search algorithm that combines forward-backward (FB) iterations and Newton-type steps over the recently introduced forward-backward envelope (FBE), a continuous, real-valued, exact merit function for the original problem. The curvature information of the Newton-type steps enables asymptotic superlinear rates under mild assumptions at the limit point, and the proposed algorithm relies on very simple operations: access to first-order information of the cost and dynamics, and low-cost direct linear algebra. No inner iterative procedure or Hessian evaluation is required, making our approach computationally simpler than SQP methods. The low memory requirements and simple implementation make our method particularly well suited for embedded NMPC applications.
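As a rough illustration of the structure just described (not the authors' reference implementation), the Python sketch below performs one PANOC-style iteration: it takes a forward-backward step, blends it with a candidate Newton-type direction d, and backtracks on the FBE. All names, signatures, and the exact sufficient-decrease test are assumptions made for this sketch.

```python
import numpy as np

def fbe(x, f, grad_f, g, prox_g, gamma):
    """Forward-backward envelope value at x (standard Euclidean form)."""
    gx = grad_f(x)
    xbar = prox_g(x - gamma * gx, gamma)   # forward-backward step
    r = x - xbar                           # fixed-point residual
    return f(x) - gx @ r + g(xbar) + (r @ r) / (2 * gamma), xbar, r

def panoc_step(x, d, f, grad_f, g, prox_g, gamma, sigma=1e-4, max_backtracks=30):
    """One PANOC-style update  x+ = x + (1 - tau)*(xbar - x) + tau*d,
    halving tau until the FBE decreases sufficiently.  Sketch only: the
    published algorithm uses a specific decrease condition and further
    safeguards (e.g. adaptation of gamma)."""
    phi, xbar, r = fbe(x, f, grad_f, g, prox_g, gamma)
    tau = 1.0
    for _ in range(max_backtracks):
        x_trial = x + (1.0 - tau) * (xbar - x) + tau * d
        phi_trial, _, _ = fbe(x_trial, f, grad_f, g, prox_g, gamma)
        if phi_trial <= phi - sigma * (r @ r) / (2 * gamma):
            return x_trial
        tau *= 0.5
    return xbar   # fall back to the plain forward-backward step
```

With limited-memory quasi-Newton directions d (e.g. L-BFGS built from the fixed-point residuals r), such a scheme needs only gradients, proximal mappings, and cheap dense linear algebra, which is what makes it attractive for embedded NMPC.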
On the Local and Global Convergence of a Reduced Quasi-Newton Method
In optimization over R^n with m nonlinear equality constraints, we study the local convergence of reduced quasi-Newton methods, in which the updated matrix is of order n-m. In particular, we give necessary and sufficient conditions for q-superlinear convergence (in one step). We introduce a device to globalize the local algorithm, which consists of determining a step along an arc so as to decrease an exact penalty function. We give conditions under which the step asymptotically equals one.
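To make the structure concrete, one common form of a reduced quasi-Newton step is sketched below; the symbols (constraint map c, Jacobian A, null-space basis Z, reduced matrix B_k of order n-m) are introduced here purely for illustration and are not taken from the paper.

```latex
% Sketch of one reduced quasi-Newton step for  min f(x)  s.t.  c(x) = 0,
% with c: R^n -> R^m, Jacobian A(x), and Z(x) a basis of the null space
% of A(x).  The quasi-Newton matrix B_k, of order n - m, approximates
% the reduced Hessian Z^T \nabla^2_{xx} L \, Z.  Illustrative notation only.
\begin{align*}
  A(x_k)\, p^{\mathrm{n}}_k &= -c(x_k)
      && \text{(normal component, e.g. the least-norm solution)},\\
  B_k\, u_k &= -Z(x_k)^{\top} \nabla f(x_k),
      \qquad p^{\mathrm{t}}_k = Z(x_k)\, u_k
      && \text{(tangential component)},\\
  x_{k+1} &= x_k + p^{\mathrm{n}}_k + p^{\mathrm{t}}_k .
\end{align*}
% The globalization device described above replaces the full step by a
% step along an arc, accepted when it decreases an exact penalty function.
```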
The Lack of Positive Definiteness in the Hessian in Constrained Optimization
The use of the DFP or BFGS secant updates requires the Hessian at the solution to be positive definite. However, the second-order sufficiency conditions ensure positive definiteness only on a subspace of R^n. We give conditions under which either update can be applied safely. A new class of algorithms is proposed that generates a sequence {x_k} converging 2-step q-superlinearly. We also propose two specific algorithms: the first converges q-superlinearly if the Hessian is positive definite on all of R^n and converges 2-step q-superlinearly if the Hessian is positive definite only on a subspace; the second generates a sequence that converges 1-step q-superlinearly. While the former costs one extra gradient evaluation, the latter costs one extra gradient evaluation and one extra function evaluation on the constraints.
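The positive-definiteness issue can be seen directly in the update formula itself. The snippet below is a generic BFGS update with the usual curvature safeguard s^T y > 0, which is what preserves positive definiteness of the approximation; it is a standard textbook illustration, not the specific algorithms proposed in the paper.

```python
import numpy as np

def bfgs_update(B, s, y, curvature_tol=1e-10):
    """Standard BFGS update of a Hessian approximation B.

    s = x_new - x_old, y = grad_new - grad_old.  The update keeps B
    positive definite only if the curvature condition s^T y > 0 holds;
    when the true Hessian is positive definite only on a subspace (as
    under second-order sufficiency in constrained problems), this
    condition can fail, which is the difficulty discussed above.
    """
    sy = s @ y
    if sy <= curvature_tol * np.linalg.norm(s) * np.linalg.norm(y):
        return B                      # skip the update rather than lose SPD
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy
```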
Newton-type Alternating Minimization Algorithm for Convex Optimization
We propose NAMA (Newton-type Alternating Minimization Algorithm) for solving
structured nonsmooth convex optimization problems where the sum of two
functions is to be minimized, one being strongly convex and the other composed
with a linear mapping. The proposed algorithm is a line-search method over a
continuous, real-valued, exact penalty function for the corresponding dual
problem, which is computed by evaluating the augmented Lagrangian at the primal
points obtained by alternating minimizations. As a consequence, NAMA relies on
exactly the same computations as the classical alternating minimization
algorithm (AMA), also known as the dual proximal gradient method. Under
standard assumptions the proposed algorithm possesses strong convergence
properties, while under mild additional assumptions the asymptotic convergence
is superlinear, provided that the search directions are chosen according to
quasi-Newton formulas. Due to its simplicity, the proposed method is well
suited for embedded applications and large-scale problems. Experiments show
that using limited-memory directions in NAMA greatly improves the convergence
speed over AMA and its accelerated variant.
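For reference, the classical AMA iteration whose computations NAMA reuses can be written as below for the model problem minimize f(x) + g(z) subject to Lx = z, with f strongly convex; the symbols f, g, L, the dual variable y, and the stepsize γ are generic illustrations rather than the paper's notation.

```latex
% Sketch: one AMA (dual proximal gradient) iteration for
%   minimize  f(x) + g(z)   subject to  Lx = z,   f strongly convex.
% Notation (f, g, L, y, \gamma) is illustrative.
\begin{align*}
  x^{k+1} &= \arg\min_{x}\ \big\{\, f(x) + \langle y^{k},\, Lx \rangle \,\big\},\\
  z^{k+1} &= \operatorname{prox}_{g/\gamma}\!\big( Lx^{k+1} + y^{k}/\gamma \big),\\
  y^{k+1} &= y^{k} + \gamma\,\big( Lx^{k+1} - z^{k+1} \big).
\end{align*}
% Roughly speaking, NAMA uses the same x- and z-minimizations to
% evaluate an exact dual penalty function (via the augmented Lagrangian)
% and then line-searches along a quasi-Newton direction in the dual variable.
```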
Forward-backward truncated Newton methods for convex composite optimization
This paper proposes two proximal Newton-CG methods for convex nonsmooth optimization problems in composite form. The algorithms are based on a reformulation of the original nonsmooth problem as the unconstrained minimization of a continuously differentiable function, namely the forward-backward envelope (FBE). The first algorithm is based on a standard line-search strategy, whereas the second combines the global efficiency estimates of the corresponding first-order methods with fast asymptotic convergence rates. Furthermore, both are computationally attractive since each Newton iteration requires the approximate solution of a linear system of usually small dimension.
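Since both this abstract and the PANOC abstract above rely on the forward-backward envelope, its standard Euclidean definition and the gradient formula that makes Newton-type steps on it meaningful are recorded below; the notation (smooth f, nonsmooth g, stepsize γ) is generic and not quoted from the paper.

```latex
% Standard (Euclidean) forward-backward envelope of \varphi = f + g,
% with f smooth, g nonsmooth, and stepsize \gamma > 0.  Generic notation.
\[
  \varphi_\gamma(x)
  = \min_{z}\Big\{ f(x) + \langle \nabla f(x),\, z - x\rangle
                   + g(z) + \tfrac{1}{2\gamma}\|z - x\|^2 \Big\}
  = f(x) - \tfrac{\gamma}{2}\|\nabla f(x)\|^2
    + g^{\gamma}\!\big(x - \gamma \nabla f(x)\big),
\]
% where g^{\gamma} denotes the Moreau envelope of g.  If f is twice
% continuously differentiable, \varphi_\gamma is continuously
% differentiable with
\[
  \nabla \varphi_\gamma(x)
  = \tfrac{1}{\gamma}\,\big(I - \gamma \nabla^2 f(x)\big)
    \big(x - \operatorname{prox}_{\gamma g}(x - \gamma \nabla f(x))\big),
\]
% so minimizers of \varphi = f + g can be sought by applying Newton-type
% iterations to \varphi_\gamma.
```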