
    On Solving L-SR1 Trust-Region Subproblems

    In this article, we consider solvers for large-scale trust-region subproblems when the quadratic model is defined by a limited-memory symmetric rank-one (L-SR1) quasi-Newton matrix. We propose a solver that exploits the compact representation of L-SR1 matrices. Our approach makes use of both an orthonormal basis for the eigenspace of the L-SR1 matrix and the Sherman-Morrison-Woodbury formula to compute global solutions to trust-region subproblems. To compute the optimal Lagrange multiplier for the trust-region constraint, we use Newton's method with a judicious initial guess that does not require safeguarding. A crucial property of this solver is that it can compute high-accuracy solutions even in the so-called hard case. Additionally, the optimal solution is determined directly by an explicit formula rather than iteratively. Numerical experiments demonstrate the effectiveness of this solver.
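
    As a rough illustration of the ingredients the abstract lists, here is a dense-matrix sketch in Python: an eigendecomposition plays the role of the orthonormal eigenspace basis, and Newton's method is applied to the secular equation for the Lagrange multiplier. This is not the paper's L-SR1 implementation (which never forms the Hessian approximation explicitly and exploits the compact representation together with the Sherman-Morrison-Woodbury formula); the starting guess below is a naive stand-in for the paper's safeguard-free initialization, the hard-case branch is omitted, and all names are illustrative.

```python
import numpy as np

def trs_spectral(B, g, delta, tol=1e-10, max_iter=50):
    """Minimize 0.5 p^T B p + g^T p subject to ||p|| <= delta (easy case)."""
    lam, Q = np.linalg.eigh(B)         # B = Q diag(lam) Q^T
    a = Q.T @ g                        # gradient in the eigenbasis (g != 0)

    # Interior solution: B positive definite and the Newton step fits.
    if lam[0] > 0:
        p = -a / lam
        if np.linalg.norm(p) <= delta:
            return Q @ p

    # Newton's method on phi(sigma) = 1/||p(sigma)|| - 1/delta with
    # p(sigma) = -(B + sigma I)^{-1} g, started just right of -lambda_min.
    sigma = max(0.0, -lam[0]) + 1e-8
    for _ in range(max_iter):
        d = lam + sigma
        pnorm = np.linalg.norm(a / d)
        phi = 1.0 / pnorm - 1.0 / delta
        dphi = np.sum(a**2 / d**3) / pnorm**3
        step = phi / dphi
        sigma -= step
        if abs(step) <= tol * max(1.0, sigma):
            break
    return Q @ (-a / (lam + sigma))
```

    Because phi is nearly linear in sigma on the relevant interval, this Newton iteration typically converges in a handful of steps, which is what makes a careful, safeguard-free initial guess attractive.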

    Accelerating the LSTRS Algorithm

    In a recent paper [Rojas, Santos, Sorensen: ACM ToMS 34 (2008), Article 11] an efficient method for solving the Large-Scale Trust-Region Subproblem was suggested, based on recasting it in terms of a parameter-dependent eigenvalue problem and adjusting the parameter iteratively. The essential work at each iteration is the solution of an eigenvalue problem for the smallest eigenvalue of the Hessian matrix (or the two smallest eigenvalues in the potential hard case) and the associated eigenvector(s). Replacing the implicitly restarted Lanczos method of the original paper with the Nonlinear Arnoldi method makes it possible to recycle most of the work from previous iterations, which can substantially accelerate LSTRS.
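
    The recycling idea can be sketched in Python as follows. The trust-region subproblem min 0.5 x^T H x + g^T x, ||x|| <= Delta is recast through the bordered matrix B(alpha) = [[alpha, g^T], [g, H]]; the smallest eigenpair of B(alpha) is approximated from a projection space V that is kept when alpha changes, and V is expanded by the orthogonalized eigenresidual (the Nonlinear Arnoldi expansion). The plain secant update of alpha below is a simplification of the safeguarded rational interpolation LSTRS actually uses, and all names are illustrative.

```python
import numpy as np

def bordered(alpha, H, g):
    """B(alpha) = [[alpha, g^T], [g, H]]."""
    n = H.shape[0]
    B = np.zeros((n + 1, n + 1))
    B[0, 0], B[0, 1:], B[1:, 0], B[1:, 1:] = alpha, g, g, H
    return B

def lstrs_recycled(H, g, delta, alpha=0.0, iters=40, tol=1e-8):
    n = H.shape[0]
    rng = np.random.default_rng(0)
    # Initial projection space: e_1 plus a random direction.
    V = np.linalg.qr(np.column_stack(
        [np.eye(n + 1)[:, 0], rng.standard_normal(n + 1)]))[0]
    x, alpha_old, phi_old = None, None, None
    for _ in range(iters):
        B = bordered(alpha, H, g)
        # Smallest Ritz pair of B(alpha) from the *recycled* space V.
        theta, S = np.linalg.eigh(V.T @ B @ V)
        y = V @ S[:, 0]
        # Nonlinear Arnoldi expansion: append the orthogonalized residual.
        r = B @ y - theta[0] * y
        r -= V @ (V.T @ r)
        if np.linalg.norm(r) > 1e-12:
            V = np.column_stack([V, r / np.linalg.norm(r)])
        if abs(y[0]) < 1e-10:              # potential hard case; skip update
            continue
        x = y[1:] / y[0]
        phi = np.linalg.norm(x) - delta    # seek the boundary ||x|| = delta
        if abs(phi) <= tol * delta:
            break
        if phi_old is not None and phi != phi_old:   # secant step on phi
            alpha, alpha_old, phi_old = (
                alpha - phi * (alpha - alpha_old) / (phi - phi_old),
                alpha, phi)
        else:
            alpha_old, phi_old = alpha, phi
            alpha -= phi                   # crude first step
    return x
```

    The point of the sketch is the reuse of V across parameter updates: with an implicitly restarted Lanczos solver, each new alpha would trigger a fresh eigensolve from scratch.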

    Accelerated Line-search and Trust-region Methods


    Updating the regularization parameter in the adaptive cubic regularization algorithm

    The adaptive cubic regularization method (Cartis et al. in Math. Program. Ser. A 127(2):245–295, 2011; Math. Program. Ser. A 130(2):295–319, 2011) was recently proposed for solving unconstrained minimization problems. At each iteration of this method, the objective function is replaced by a cubic approximation that comprises an adaptive regularization parameter whose role is related to the local Lipschitz constant of the objective's Hessian. We present new updating strategies for this parameter based on interpolation techniques, which improve the overall numerical performance of the algorithm. Numerical experiments on large nonlinear least-squares problems are provided.
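
    One concrete instance of an interpolation-based update, offered here as an assumption-laden sketch rather than the paper's exact rules: choose the new parameter so that the cubic model reproduces the objective value observed at the trial step. With model m_sigma(s) = f(x) + g^T s + 0.5 s^T H s + (sigma/3)||s||^3, matching m_sigma(s) = f(x + s) and solving for sigma gives the formula below; the function name and safeguards are ours.

```python
import numpy as np

def interpolated_sigma(f_x, f_trial, g, H, s, sigma_old, sigma_min=1e-8):
    """sigma such that the cubic model matches f at the trial step s."""
    quad = f_x + g @ s + 0.5 * (s @ (H @ s))    # quadratic part of the model
    sigma_new = 3.0 * (f_trial - quad) / np.linalg.norm(s) ** 3
    # If the quadratic part already overestimates f(x + s), interpolation
    # yields sigma <= 0; shrink the old parameter instead.
    if sigma_new > 0:
        return max(sigma_new, sigma_min)
    return max(0.5 * sigma_old, sigma_min)
```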

    Efficient Trust Region Subproblem Algorithms

    The Trust Region Subproblem (TRS) is the problem of minimizing a quadratic (possibly non-convex) function over a sphere. It is the main step of the trust-region method for unconstrained optimization. Two cases may cause numerical difficulties in solving the TRS: (i) the so-called hard case and (ii) a large trust-region radius. In this thesis we give the optimality characteristics of the TRS and review the major current algorithms. We then introduce techniques to solve the TRS efficiently in the two difficult cases: a shift-and-deflation technique avoids the hard case, and a scaling adjusts the value of the trust-region radius. In addition, we describe other improvements to the TRS algorithm, including rotation, approximate eigenvalue calculations, and inverse polynomial interpolation. We also introduce a warm-start approach and a new treatment of the hard case for the trust-region method. A sensitivity analysis shows that the optimal objective value of the TRS is stable with respect to the trust-region radius in both the easy and hard cases. Finally, numerical experiments demonstrate the performance of all the improvements.
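
    The shift-and-deflation idea can be made concrete at a dense-matrix level. In the hard case the gradient is (numerically) orthogonal to the eigenspace of the smallest eigenvalue, so one can shift the Hessian by lambda_min, deflate that eigenspace, solve the resulting nonsingular system, and then step along a deflated eigenvector to reach the trust-region boundary. The sketch below assumes a full eigendecomposition is affordable, which the thesis avoids for large problems; all names are illustrative.

```python
import numpy as np

def trs_hard_case(B, g, delta, tol=1e-10):
    """Boundary solution of min 0.5 x^T B x + g^T x, ||x|| <= delta,
    assuming g is numerically orthogonal to the lambda_min eigenspace."""
    lam, Q = np.linalg.eigh(B)
    a = Q.T @ g
    keep = lam - lam[0] > tol            # deflate the lambda_min eigenspace
    # Shifted, deflated least-norm solution x_hat = -(B - lam_min I)^+ g.
    y = np.zeros_like(a)
    y[keep] = -a[keep] / (lam[keep] - lam[0])
    x_hat = Q @ y
    # x_hat is orthogonal to the deflated eigenvector v, so
    # ||x_hat + tau v||^2 = ||x_hat||^2 + tau^2 = delta^2 fixes tau.
    v = Q[:, 0]
    tau = np.sqrt(max(delta**2 - x_hat @ x_hat, 0.0))
    return x_hat + tau * v
```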

    Global convergence of SSM for minimizing a quadratic over a sphere

    In an earlier paper [Minimizing a quadratic over a sphere, SIAM J. Optim., 12 (2001), 188–208], we presented the sequential subspace method (SSM) for minimizing a quadratic over a sphere. This method generates approximations to a minimizer by carrying out the minimization over a sequence of subspaces that are adjusted after each iterate is computed. We showed in this earlier paper that when the subspace contains a vector obtained by applying one step of Newton's method to the first-order optimality system, SSM is locally, quadratically convergent, even when the original problem is degenerate with multiple solutions and with a singular Jacobian in the optimality system. In this paper, we prove (nonlocal) convergence of SSM to a global minimizer whenever each SSM subspace contains the following three vectors: (i) the current iterate, (ii) the gradient of the cost function evaluated at the current iterate, and (iii) an eigenvector associated with the smallest eigenvalue of the cost function Hessian. For nondegenerate problems, the convergence rate is at least linear when vectors (i)–(iii) are included in the SSM subspace.
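
    A dense Python sketch of the subspace construction in the convergence theorem, under our assumptions: the smallest eigenvector is obtained here by a full eigendecomposition (the SSM papers compute it iteratively), the ball constraint ||x|| <= delta stands in for the sphere, and small_trs is an illustrative bisection-based solver for the three-dimensional reduced problem, not the authors' method.

```python
import numpy as np

def small_trs(B, g, delta):
    """Tiny dense TRS via bisection on the multiplier (hard case ignored)."""
    lam, Q = np.linalg.eigh(B)
    a = Q.T @ g
    if lam[0] > 0 and np.linalg.norm(a / lam) <= delta:
        return Q @ (-a / lam)             # interior Newton step
    shift = max(0.0, -lam[0])
    lo, hi = shift + 1e-14, shift + 1.0
    while np.linalg.norm(a / (lam + hi)) > delta:
        hi *= 2.0                         # bracket the boundary multiplier
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(a / (lam + mid)) > delta:
            lo = mid
        else:
            hi = mid
    return Q @ (-a / (lam + hi))

def ssm_sketch(A, b, delta, iters=20):
    """Minimize 0.5 x^T A x + b^T x over ||x|| <= delta by SSM-style steps."""
    lam, Q = np.linalg.eigh(A)
    v_min = Q[:, 0]                       # smallest-eigenvalue eigenvector
    x = -delta * b / np.linalg.norm(b)    # feasible start (b != 0 assumed)
    for _ in range(iters):
        grad = A @ x + b
        # Orthonormal basis for span{x, grad, v_min}: exactly the three
        # vectors (i)-(iii) required by the convergence theorem.
        V = np.linalg.qr(np.column_stack([x, grad, v_min]))[0]
        # Reduced 3-D problem; V has orthonormal columns, so ||V y|| = ||y||.
        y = small_trs(V.T @ A @ V, V.T @ b, delta)
        x = V @ y
    return x
```

    Since V has orthonormal columns, feasibility of the reduced iterate y immediately gives feasibility of x = V y, and each step can only decrease the objective because the previous iterate lies in the new subspace.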