Regularization of Limited Memory Quasi-Newton Methods for Large-Scale Nonconvex Minimization
This paper deals with regularized Newton methods, a flexible class of
unconstrained optimization algorithms that is competitive with line search and
trust region methods and potentially combines attractive elements of both. The
particular focus is on combining regularization with limited memory
quasi-Newton methods by exploiting the special structure of limited memory
algorithms. Global convergence of regularization methods is shown under mild
assumptions and the details of regularized limited memory quasi-Newton updates
are discussed including their compact representations.
Numerical results using all large-scale test problems from the CUTEst
collection indicate that our regularized version of L-BFGS is competitive with
state-of-the-art line search and trust-region L-BFGS algorithms and previous
attempts at combining L-BFGS with regularization, while potentially
outperforming some of them, especially when nonmonotonicity is involved.Comment: 23 pages, 4 figure
A new Newton method for convex optimization problems with singular Hessian matrices
In this paper, we propose a new Newton method for minimizing convex optimization problems with singular Hessian matrices, including the special case in which the Hessian of the objective function is singular at every iterate. The proposed method introduces new updates for the regularization parameter and the search direction, and the step size is obtained by Armijo backtracking line search. We also prove that the new method is globally convergent. Numerical results show that the new method performs well on convex optimization problems whose Hessians are singular everywhere.
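The ingredients described in the abstract can be sketched generically: regularize the singular Hessian as H + mu*I before solving the Newton system, and choose the step size by Armijo backtracking. The specific update rules for mu and the direction are the paper's contribution and are not reproduced here; the fixed mu, constants, and test function below are illustrative assumptions only. A convenient convex test case with an everywhere-singular Hessian is f(x) = (a^T x - b)^2, whose Hessian 2*a*a^T has rank one.

```python
import numpy as np

# Illustrative problem: f(x) = (a^T x - b)^2 is convex with the rank-one
# Hessian 2*a*a^T, which is singular everywhere for n >= 2.
a = np.array([1.0, 2.0])
b = 3.0

def f(x):
    return (a @ x - b) ** 2

def grad(x):
    return 2.0 * (a @ x - b) * a

def hess(x):
    return 2.0 * np.outer(a, a)  # singular at every point

def reg_newton(x, mu=1e-4, c=1e-4, max_iter=100, tol=1e-10):
    # Generic regularized Newton sketch (not the paper's exact updates):
    # the plain Newton system H d = -g is unsolvable, but H + mu*I is
    # positive definite, so the regularized system always has a solution.
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(hess(x) + mu * np.eye(len(x)), -g)
        # Armijo backtracking: halve t until sufficient decrease holds.
        t = 1.0
        while f(x + t * d) > f(x) + c * t * (g @ d):
            t *= 0.5
        x = x + t * d
    return x

x_star = reg_newton(np.array([0.0, 0.0]))
print(f(x_star))  # near-zero objective value
```

Since d solves a system with the positive definite matrix H + mu*I, it is always a descent direction, so the Armijo loop terminates; this is the basic mechanism that lets Newton-type methods proceed even when the Hessian itself is singular.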