This paper deals with regularized Newton methods, a flexible class of
unconstrained optimization algorithms that is competitive with line-search and
trust-region methods and potentially combines attractive elements of both. The
particular focus is on combining regularization with limited memory
quasi-Newton methods by exploiting the special structure of limited memory
algorithms. Global convergence of the regularization methods is shown under mild
assumptions, and the details of regularized limited memory quasi-Newton updates,
including their compact representations, are discussed.
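For orientation, a regularized (quasi-)Newton method of the kind considered here computes its step from the current gradient $g_k = \nabla f(x_k)$ and a (quasi-)Newton matrix $B_k$ by solving a shifted linear system; the notation below is a generic sketch and not taken verbatim from the paper:
$$(B_k + \sigma_k I)\, s_k = -g_k, \qquad \sigma_k \ge 0,$$
where the regularization parameter $\sigma_k$ plays a role analogous to a trust-region radius or a line-search step size.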
Numerical results using all large-scale test problems from the CUTEst
collection indicate that our regularized version of L-BFGS is competitive with
state-of-the-art line-search and trust-region L-BFGS algorithms and with previous
attempts at combining L-BFGS with regularization, while potentially
outperforming some of them, especially when nonmonotonicity is involved.