Newton acceleration on manifolds identified by proximal-gradient methods
Proximal methods are known to identify the underlying substructure of
nonsmooth optimization problems. Moreover, in many interesting situations, the
output of a proximity operator comes with its structure at no additional cost,
and convergence is improved once it matches the structure of a minimizer.
However, it is impossible in general to know whether the current structure is
final or not; such highly valuable information has to be exploited adaptively.
To do so, we place ourselves in the case where a proximal gradient method can
identify manifolds of differentiability of the nonsmooth objective. Leveraging
this manifold identification, we show that Riemannian Newton-like methods can
be intertwined with the proximal gradient steps to drastically boost the
convergence. We prove the superlinear convergence of the algorithm when solving
nondegenerate nonsmooth nonconvex optimization problems. We provide numerical illustrations on optimization problems regularized by the ℓ1-norm or the trace-norm.
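
To make the interplay concrete, the following is a minimal sketch (not the authors' implementation) of the idea for an ℓ1-regularized least-squares problem: the proximal-gradient step exposes a sparsity pattern, and a Newton step is taken on the smooth restriction of the objective to that pattern. The function names, step size, and acceptance test below are illustrative assumptions.

    import numpy as np

    def prox_l1(x, t):
        # Soft-thresholding: proximity operator of t * ||.||_1.
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def prox_grad_newton(A, b, lam, step, iters=50):
        # Alternate proximal-gradient steps with Newton steps restricted to the
        # sparsity manifold identified by the output of the proximity operator.
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - b)
            x = prox_l1(x - step * grad, step * lam)   # prox output identifies the support
            support = np.flatnonzero(x)
            if support.size == 0:
                continue
            # Newton step on the smooth restriction to {x : supp(x) = support, signs fixed}.
            As = A[:, support]
            g = As.T @ (As @ x[support] - b) + lam * np.sign(x[support])
            H = As.T @ As                              # assumed nonsingular (nondegeneracy)
            d = np.linalg.solve(H, -g)
            trial = x.copy()
            trial[support] += d
            if np.all(np.sign(trial[support]) == np.sign(x[support])):
                x = trial                              # accept only if the step stays on the manifold
        return x

On the identified manifold the restricted objective is smooth, which is what makes the Newton step meaningful; when the signs would flip, this sketch simply falls back to the proximal-gradient iterate.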
A Riemannian BFGS Method Without Differentiated Retraction for Nonconvex Optimization Problems
In this paper, a Riemannian BFGS method for minimizing a smooth function on a Riemannian manifold is defined, based on a Riemannian generalization of a cautious update and a weak line search condition. It is proven that the Riemannian BFGS method converges (i) globally to stationary points without assuming the objective function to be convex and (ii) superlinearly to a nondegenerate minimizer. Using the weak line search condition makes it possible to avoid any information from the differentiated retraction. The joint matrix diagonalization problem is chosen to demonstrate the performance of the algorithms with various parameters, line search conditions, and pairs of retraction and vector transport. A preliminary version can be found in [HAG16].
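
As an illustration of the cautious update mentioned above, here is a Euclidean sketch in the style of the Li–Fukushima rule: the inverse-Hessian approximation is updated only when a curvature test tied to the gradient norm holds. In the Riemannian method, s and y would be tangent vectors obtained through a retraction and a vector transport; the particular threshold form below is an assumption, not necessarily the paper's exact condition.

    import numpy as np

    def cautious_bfgs_update(H, s, y, grad_norm, eps=1e-4, alpha=1.0):
        # Cautious rule: update only if s^T y >= eps * ||grad||^alpha * ||s||^2;
        # otherwise keep H, which preserves positive definiteness without convexity.
        sy = s @ y
        if sy < eps * grad_norm**alpha * (s @ s):
            return H
        rho = 1.0 / sy
        I = np.eye(s.size)
        V = I - rho * np.outer(s, y)
        return V @ H @ V.T + rho * np.outer(s, s)      # standard BFGS inverse-Hessian update

Skipping the update when the curvature test fails is what allows global convergence without convexity, since the approximation never loses positive definiteness.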