Accelerating the LSTRS Algorithm
In a recent paper [Rojas, Santos, Sorensen: ACM ToMS 34 (2008), Article 11] an efficient method for solving the Large-Scale Trust-Region Subproblem was suggested, based on recasting it as a parameter-dependent eigenvalue problem and adjusting the parameter iteratively. The essential work at each iteration is the solution of an eigenvalue problem for the smallest eigenvalue of the Hessian matrix (or the two smallest eigenvalues in the potential hard case) and the associated eigenvector(s). Replacing the implicitly restarted Lanczos method used in the original paper with the Nonlinear Arnoldi method makes it possible to recycle most of the work from previous iterations, which can substantially accelerate LSTRS.
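The core LSTRS idea (recast the trust-region subproblem as an eigenvalue problem for a parameter-dependent bordered matrix, then adjust the parameter) can be illustrated with a minimal dense sketch. This is not the paper's implementation: it uses a full eigendecomposition in place of Lanczos/Nonlinear Arnoldi and plain bisection in place of the rational interpolation used to update the parameter; matrix sizes and tolerances are illustrative.

```python
import numpy as np

def smallest_eigpair(B):
    """Smallest eigenvalue/eigenvector of a symmetric matrix (dense stand-in
    for the iterative eigensolver used in LSTRS)."""
    w, V = np.linalg.eigh(B)  # eigenvalues in ascending order
    return w[0], V[:, 0]

def lstrs_x(alpha, H, g):
    """One LSTRS 'inner' solve: smallest eigenpair of the bordered matrix
    B(alpha) = [[alpha, g^T], [g, H]].  Writing the eigenvector as
    y = (nu, u), the eigen-equation gives (H - lam I)(u/nu) = -g, so
    x = u/nu is a candidate trust-region step with multiplier -lam."""
    n = H.shape[0]
    B = np.empty((n + 1, n + 1))
    B[0, 0] = alpha
    B[0, 1:] = g
    B[1:, 0] = g
    B[1:, 1:] = H
    lam, y = smallest_eigpair(B)
    nu, u = y[0], y[1:]
    if abs(nu) < 1e-12:        # first component ~ 0: potential hard case
        return None, lam
    return u / nu, lam

# Toy problem: min 0.5 x^T H x + g^T x  s.t.  ||x|| <= Delta
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H = A + A.T                    # symmetric, generically indefinite
g = rng.standard_normal(5)
Delta = 1.0

# ||x(alpha)|| is nondecreasing in alpha (lam_1(B(alpha)) increases toward
# lam_min(H)), so bisection on alpha drives ||x|| to the boundary Delta.
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    x, lam = lstrs_x(mid, H, g)
    if x is None or np.linalg.norm(x) > Delta:
        hi = mid
    else:
        lo = mid
x, lam = lstrs_x(0.5 * (lo + hi), H, g)
```

On exit, x satisfies the boundary KKT conditions (H - lam I) x = -g with ||x|| = Delta. The acceleration discussed in the abstract comes from the fact that the bordered matrices for successive alpha values differ only in one entry, so an eigensolver that accepts a search space from the previous iteration (Nonlinear Arnoldi) can reuse almost all prior work.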
Riemannian Adaptive Regularized Newton Methods with H\"older Continuous Hessians
This paper presents strong worst-case iteration and operation complexity
guarantees for Riemannian adaptive regularized Newton methods, a unified
framework encompassing both Riemannian adaptive regularization (RAR) methods
and Riemannian trust region (RTR) methods. We comprehensively characterize the
sources of approximation in second-order manifold optimization methods: the
objective function's smoothness, the retraction's smoothness, and the
subproblem solver's inexactness. Specifically, for a function with a -H\"older
continuous Hessian, when equipped with a retraction featuring a -H\"older
continuous differential and a -inexact subproblem solver, both RTR and
RAR with regularization (where )
locate an -approximate second-order
stationary point within at most
iterations and at most
Hessian-vector products. These complexity results are novel and sharp, and
reduce to an iteration complexity of and an operation
complexity of when
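To make the adaptive-regularization mechanism concrete, here is a minimal Euclidean sketch of an adaptive regularized Newton (cubic-regularization) iteration; the Riemannian setting reduces to this when the manifold is R^n and the retraction is the identity. This is a generic textbook-style scheme under those assumptions, not the paper's algorithm: the subproblem is solved via the secular equation ||(H + lam I)^{-1} g|| = lam/sigma, and the acceptance thresholds and sigma updates are illustrative.

```python
import numpy as np

# Rosenbrock test function with analytic gradient and Hessian.
def f(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def grad(x):
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
        200 * (x[1] - x[0]**2),
    ])

def hess(x):
    return np.array([
        [2 - 400 * (x[1] - x[0]**2) + 800 * x[0]**2, -400 * x[0]],
        [-400 * x[0], 200.0],
    ])

def arc_step(g, H, sigma):
    """Minimize the cubic model g^T s + 0.5 s^T H s + (sigma/3)||s||^3.
    The minimizer satisfies (H + lam I) s = -g with lam = sigma * ||s||,
    so we bisect the scalar secular equation in lam."""
    n = len(g)
    lam_lo = max(0.0, -np.linalg.eigvalsh(H)[0]) + 1e-12
    phi = lambda lam: (np.linalg.norm(np.linalg.solve(H + lam * np.eye(n), -g))
                       - lam / sigma)   # decreasing in lam; root = optimal lam
    lo, hi = lam_lo, lam_lo + 1.0
    while phi(hi) > 0:                  # grow bracket until phi changes sign
        hi *= 2
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > 0 else (lo, mid)
    lam = 0.5 * (lo + hi)
    return np.linalg.solve(H + lam * np.eye(n), -g)

x = np.array([-1.2, 1.0])
sigma = 1.0                             # adaptive regularization weight
for _ in range(500):
    g, H = grad(x), hess(x)
    if np.linalg.norm(g) < 1e-8:
        break
    s = arc_step(g, H, sigma)
    model = f(x) + g @ s + 0.5 * s @ H @ s + sigma / 3 * np.linalg.norm(s)**3
    rho = (f(x) - f(x + s)) / max(f(x) - model, 1e-16)
    if rho > 0.1:                       # sufficient agreement: accept the step
        x = x + s
    # Adapt sigma: shrink after very successful steps, grow after failures.
    sigma = max(sigma / 2, 1e-8) if rho > 0.9 else (sigma * 2 if rho < 0.1 else sigma)
```

The complexity results quoted in the abstract bound how many such outer iterations (and Hessian-vector products inside the subproblem solver) are needed to reach an approximate second-order stationary point, as a function of the Hölder exponents of the Hessian and the retraction and of the subproblem inexactness.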
- …