99 research outputs found
Forward-backward truncated Newton methods for convex composite optimization
This paper proposes two proximal Newton-CG methods for convex nonsmooth
optimization problems in composite form. The algorithms are based on a
reformulation of the original nonsmooth problem as the unconstrained
minimization of a continuously differentiable function, namely the
forward-backward envelope (FBE). The first algorithm is based on a standard
line search strategy, whereas the second one combines the global efficiency
estimates of the corresponding first-order methods with fast asymptotic
convergence rates. Furthermore, they are computationally attractive since
each Newton iteration requires the approximate solution of a linear system
of usually small dimension.
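As a concrete illustration of the FBE reformulation, here is a minimal sketch
(not the authors' implementation) that assumes the specific composite
objective F(x) = 0.5*||Ax - b||^2 + lam*||x||_1, for which the proximal step
is soft-thresholding:

import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def forward_backward_envelope(x, A, b, lam, gamma):
    # FBE of F(x) = 0.5*||A x - b||^2 + lam*||x||_1 at x with step size gamma:
    #   F_gamma(x) = f(x) - (gamma/2)*||grad f(x)||^2 + g_gamma(x - gamma*grad f(x)),
    # where g_gamma is the Moreau envelope of g = lam*||.||_1.
    r = A @ x - b
    f_x = 0.5 * (r @ r)                      # smooth part f(x)
    grad = A.T @ r                           # gradient of f at x
    z = x - gamma * grad                     # forward (gradient) step
    u = soft_threshold(z, gamma * lam)       # backward (proximal) step
    g_env = lam * np.abs(u).sum() + np.sum((u - z) ** 2) / (2.0 * gamma)
    return f_x - 0.5 * gamma * (grad @ grad) + g_env

Minimizing this smooth surrogate with a Newton-type method is the idea the
paper builds on; the quadratic loss and l1 penalty above are only one example
of a composite problem.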
Global rates of convergence for nonconvex optimization on manifolds
We consider the minimization of a cost function f on a manifold M using
Riemannian gradient descent and Riemannian trust regions (RTR). We focus on
satisfying necessary optimality conditions within a tolerance ε.
Specifically, we show that, under Lipschitz-type assumptions on the pullbacks
of f to the tangent spaces of M, both of these algorithms produce points
with Riemannian gradient smaller than ε in O(1/ε^2) iterations. Furthermore,
RTR returns a point where, in addition, the Riemannian Hessian's least
eigenvalue is larger than -ε, in O(1/ε^3) iterations. There are no
assumptions on initialization.
The rates match their (sharp) unconstrained counterparts as a function of the
accuracy (up to constants) and hence are sharp in that sense.
These are the first deterministic results for global rates of convergence to
approximate first- and second-order Karush-Kuhn-Tucker points on manifolds.
They apply in particular for optimization constrained to compact submanifolds
of R^n, under simpler assumptions.
Comment: 33 pages, IMA Journal of Numerical Analysis, 201
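For readers unfamiliar with the Riemannian setup, the following is a minimal
sketch of Riemannian gradient descent on the unit sphere; the toy cost
x^T A x, the fixed step size, and the normalization retraction are
illustrative assumptions, not the paper's setting or algorithmic choices:

import numpy as np

def riemannian_gd_sphere(A, x0, step=0.1, iters=500, tol=1e-8):
    # Riemannian gradient descent on the unit sphere for f(x) = x^T A x.
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        egrad = 2.0 * (A @ x)                # Euclidean gradient of f
        rgrad = egrad - (x @ egrad) * x      # project onto the tangent space at x
        if np.linalg.norm(rgrad) < tol:      # approximate first-order condition met
            break
        y = x - step * rgrad                 # move along the negative Riemannian gradient
        x = y / np.linalg.norm(y)            # retraction: renormalize onto the sphere
    return x

In this toy problem the minimizer over the sphere is an eigenvector
associated with the smallest eigenvalue of A, so the iterates have a natural
reference point to converge to.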
Optimization with Sparsity-Inducing Penalties
Sparse estimation methods are aimed at using or obtaining parsimonious
representations of data or models. They were first dedicated to linear variable
selection but numerous extensions have now emerged such as structured sparsity
or kernel selection. It turns out that many of the related estimation problems
can be cast as convex optimization problems by regularizing the empirical risk
with appropriate non-smooth norms. The goal of this paper is to present, from
a general perspective, optimization tools and techniques dedicated to such
sparsity-inducing penalties. We cover proximal methods, block-coordinate
descent, reweighted-l2 penalized techniques, working-set and homotopy
methods, as well as non-convex formulations and extensions, and provide an
extensive set of experiments to compare various algorithms from a
computational point of view.
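As a small worked example of one of the surveyed method families, here is a
sketch of cyclic coordinate descent for the l1-regularized least-squares
(lasso) problem; the specific objective and the plain cyclic sweep are
assumptions chosen for illustration, not the paper's reference
implementation:

import numpy as np

def lasso_coordinate_descent(A, b, lam, sweeps=100):
    # Cyclic coordinate descent for 0.5*||A x - b||^2 + lam*||x||_1.
    n_features = A.shape[1]
    x = np.zeros(n_features)
    col_sq = (A ** 2).sum(axis=0)            # precomputed ||A_j||^2
    r = b - A @ x                            # current residual
    for _ in range(sweeps):
        for j in range(n_features):
            if col_sq[j] == 0.0:
                continue
            r = r + A[:, j] * x[j]           # residual with coordinate j removed
            rho = A[:, j] @ r                # correlation of column j with that residual
            x[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]  # soft-threshold update
            r = r - A[:, j] * x[j]           # restore residual with the new x[j]
    return x

Each coordinate update is a one-dimensional soft-thresholding step, which is
why this family of methods pairs naturally with separable sparsity-inducing
penalties such as the l1 norm.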