A Bregman forward-backward linesearch algorithm for nonconvex composite optimization: superlinear convergence to nonisolated local minima
We introduce Bella, a locally superlinearly convergent Bregman forward-backward
splitting method for minimizing the sum of two nonconvex functions, one of which
satisfies a relative smoothness condition while the other may be nonsmooth. A key
tool of our methodology is the Bregman forward-backward envelope (BFBE), an exact
and continuous penalty function with favorable first- and second-order properties
that enjoys a nonlinear error bound when the objective function satisfies a
Łojasiewicz-type property. The proposed algorithm performs a linesearch over the
BFBE along candidate update directions. It converges subsequentially to stationary
points (globally under a KL condition) and, owing to the given nonlinear error
bound, can attain superlinear convergence rates even when the limit point is a
nonisolated minimum, provided the update directions are suitably selected.
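As a rough illustration of the envelope-linesearch idea, here is a minimal Python sketch of the Euclidean special case, in which the BFBE reduces to the forward-backward envelope; the callables f, grad_f, g, and prox_g are hypothetical problem-specific inputs, and the acceptance test is a simplified stand-in for the paper's actual linesearch condition.

```python
import numpy as np

def fb_envelope(x, f, grad_f, g, prox_g, gamma):
    """Forward-backward envelope (Euclidean case of the BFBE) at x,
    together with the forward-backward step z."""
    z = prox_g(x - gamma * grad_f(x), gamma)      # forward then backward step
    val = (f(x) + grad_f(x) @ (z - x) + g(z)
           + (z - x) @ (z - x) / (2 * gamma))
    return val, z

def envelope_linesearch_step(x, d, f, grad_f, g, prox_g, gamma, sigma=1e-4):
    """One iteration: backtrack along a candidate direction d (e.g. a
    quasi-Newton direction), using decrease of the envelope as the test;
    fall back to the plain forward-backward step if no step is accepted."""
    phi_x, z = fb_envelope(x, f, grad_f, g, prox_g, gamma)
    tau = 1.0
    while tau > 1e-12:
        x_new = x + tau * d
        phi_new, _ = fb_envelope(x_new, f, grad_f, g, prox_g, gamma)
        if phi_new <= phi_x - sigma * tau * tau * (d @ d):
            return x_new
        tau *= 0.5
    return z  # safeguard: accept the forward-backward step itself
```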
Forward-backward truncated Newton methods for convex composite optimization
This paper proposes two proximal Newton-CG methods for convex nonsmooth
optimization problems in composite form. The algorithms are based on a
reformulation of the original nonsmooth problem as the unconstrained
minimization of a continuously differentiable function, namely the
forward-backward envelope (FBE). The first algorithm is based on a standard
line search strategy, whereas the second one retains the global efficiency
estimates of the corresponding first-order methods while achieving fast
asymptotic convergence rates. Furthermore, they are computationally attractive
since each Newton iteration requires the approximate solution of a linear
system of usually small dimension.
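A matrix-free sketch of what one such Newton-CG iteration on the FBE could look like, in the smooth Euclidean setting where the FBE gradient admits the closed form below; grad_f, hess_matvec, and prox_g are hypothetical callables, and the finite-difference Hessian product is a generic stand-in for the paper's generalized Hessian.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def fbe_grad(x, grad_f, hess_matvec, prox_g, gamma):
    """Gradient of the FBE when f is twice differentiable:
    grad phi(x) = (I - gamma * Hf(x)) (x - T(x)) / gamma,
    with T the forward-backward operator."""
    z = prox_g(x - gamma * grad_f(x), gamma)
    r = (x - z) / gamma                     # fixed-point residual
    return r - gamma * hess_matvec(x, r)

def newton_cg_step(x, grad_f, hess_matvec, prox_g, gamma, eps=1e-7):
    """Approximately solve B d = -grad phi(x) by truncated CG, applying
    B through forward differences of the FBE gradient."""
    g = fbe_grad(x, grad_f, hess_matvec, prox_g, gamma)

    def Bv(v):
        g_pert = fbe_grad(x + eps * v, grad_f, hess_matvec, prox_g, gamma)
        return (g_pert - g) / eps

    B = LinearOperator((x.size, x.size), matvec=Bv)
    d, _ = cg(B, -g, maxiter=50)            # truncated (inexact) solve
    return x + d                            # a linesearch would guard this step
```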
On Convex Envelopes and Regularization of Non-Convex Functionals without moving Global Minima
We provide theory for the computation of convex envelopes of non-convex
functionals including an l2-term, and use these to suggest a method for
regularizing a more general set of problems. The applications are particularly
aimed at compressed sensing and low rank recovery problems but the theory
relies on results which potentially could be useful also for other types of
non-convex problems. For optimization problems where the l2-term contains a
singular matrix we prove that the regularizations never move the global minima.
This result in turn relies on a theorem concerning the structure of convex
envelopes which is interesting in its own right. It says that at any point
where the convex envelope does not touch the non-convex functional we
necessarily have a direction in which the convex envelope is affine.Comment: arXiv admin note: text overlap with arXiv:1609.0937
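For orientation, the convex envelope studied here is the largest convex minorant of the functional; for proper functions bounded below by some affine function, the closed convex envelope coincides with the Fenchel biconjugate (a standard fact, not specific to this paper):

```latex
f^{*}(y)  \;=\; \sup_{x}\,\bigl\{\langle y, x\rangle - f(x)\bigr\},
\qquad
f^{**}(x) \;=\; \sup_{y}\,\bigl\{\langle x, y\rangle - f^{*}(y)\bigr\}.
```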
A new envelope function for nonsmooth DC optimization
Difference-of-convex (DC) optimization problems are shown to be equivalent to
the minimization of a Lipschitz-differentiable "envelope". A gradient method on
this surrogate function yields a novel (sub)gradient-free proximal algorithm
which is inherently parallelizable and can handle fully nonsmooth formulations.
Newton-type methods such as L-BFGS are directly applicable with a classical
linesearch. Our analysis reveals a deep kinship between the novel DC envelope
and the forward-backward envelope, the former being a smooth and
convexity-preserving nonlinear reparametrization of the latter.
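The paper's envelope construction itself is not reproduced here; for context, a minimal sketch of the classical proximal DC algorithm for minimizing f1 − f2 with f1, f2 convex, the baseline that envelope-based methods aim to improve upon (prox_f1 and subgrad_f2 are hypothetical callables).

```python
import numpy as np

def proximal_dca(x0, prox_f1, subgrad_f2, gamma, iters=100):
    """Classical proximal DCA for min_x f1(x) - f2(x), with f1, f2 convex.
    Each step linearizes the concave part -f2 at the current point and
    then takes a proximal step on f1."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = subgrad_f2(x)                  # some y in the subdifferential of f2 at x
        x = prox_f1(x + gamma * y, gamma)  # argmin f1(z) - <y,z> + ||z-x||^2/(2*gamma)
    return x
```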
Lagrange optimality system for a class of nonsmooth convex optimization
In this paper, we revisit the augmented Lagrangian method for a class of
nonsmooth convex optimization problems. We present the Lagrange optimality
system of the
augmented Lagrangian associated with the problems, and establish its
connections with the standard optimality condition and the saddle point
condition of the augmented Lagrangian, which provides a powerful tool for
developing numerical algorithms. We apply a linear Newton method to the
Lagrange optimality system to obtain a novel algorithm applicable to a variety
of nonsmooth convex optimization problems arising in practical applications.
Under suitable conditions, we prove the nonsingularity of the Newton system and
the local convergence of the algorithm.
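For reference, the classical augmented Lagrangian iteration for an equality-constrained model problem min f(x) subject to Ax = b looks as follows; the paper instead applies a linear Newton method to the Lagrange optimality system rather than iterating this scheme, and inner_solve is a hypothetical subproblem solver.

```python
import numpy as np

def augmented_lagrangian(x0, lam0, inner_solve, A, b, rho, iters=50):
    """Classical ALM for min f(x) subject to Ax = b. inner_solve(lam, rho)
    is assumed to (approximately) minimize the augmented Lagrangian
    f(x) + lam @ (A x - b) + (rho / 2) * ||A x - b||^2 over x."""
    x, lam = np.asarray(x0, float), np.asarray(lam0, float)
    for _ in range(iters):
        x = inner_solve(lam, rho)          # primal subproblem
        lam = lam + rho * (A @ x - b)      # multiplier (dual) update
    return x, lam
```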
On starting and stopping criteria for nested primal-dual iterations
The importance of an adequate inner loop starting point (as opposed to a
sufficient inner loop stopping rule) is discussed in the context of a numerical
optimization algorithm consisting of nested primal-dual proximal-gradient
iterations. While the number of inner iterations is fixed in advance,
convergence of the whole algorithm is still guaranteed by virtue of a
warm-start strategy for the inner loop, showing that inner loop "starting
rules" can be just as effective as "stopping rules" for guaranteeing
convergence. The algorithm itself is applicable to the numerical solution of
convex optimization problems defined by the sum of a differentiable term and
two possibly non-differentiable terms. One of the latter terms should take the
form of the composition of a linear map and a proximable function, while the
differentiable term needs an accessible gradient. The algorithm reduces to the
classical proximal gradient algorithm in certain special cases and it also
generalizes other existing algorithms. In addition, under some conditions of
strong convexity, we show a linear rate of convergence.
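One plausible instantiation of such a nested scheme, sketched for the model problem min f(x) + g(x) + h(Lx) with f differentiable and g, h proximable; the fixed inner iteration count and the carried-over dual variable implement the warm-start idea, though the paper's precise update rules may differ (grad_f, prox_g, prox_h_conj, and L are hypothetical inputs).

```python
import numpy as np

def nested_primal_dual(x0, y0, grad_f, prox_g, prox_h_conj, L,
                       tau, sigma, outer=200, inner=5):
    """Outer proximal-gradient loop whose implicit prox is approximated
    by a fixed number of inner primal-dual iterations. The dual variable
    y is never reset between outer iterations: this warm start is what
    keeps the overall scheme convergent despite the fixed inner budget."""
    x, y = np.asarray(x0, float), np.asarray(y0, float)
    for _ in range(outer):
        x_bar = x - tau * grad_f(x)                        # forward step on f
        for _ in range(inner):                             # fixed inner budget
            y = prox_h_conj(y + sigma * (L @ x), sigma)    # dual update for h∘L
            x = prox_g(x_bar - tau * (L.T @ y), tau)       # primal update for g
    return x, y
```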