
    A Class of Diagonally Preconditioned Limited Memory Quasi-Newton Methods for Large-Scale Unconstrained Optimization

    The focus of this thesis is diagonal preconditioning of the limited memory quasi-Newton method for large-scale unconstrained optimization; in particular, the discussion centres on the diagonally preconditioned limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method. The L-BFGS method has been widely used in large-scale unconstrained optimization due to its effectiveness. However, a major drawback of the L-BFGS method is that it can be very slow on certain types of problems. Scaling and preconditioning have been used to boost its performance. In this study, a class of diagonally preconditioned L-BFGS methods is proposed. In contrast to the standard L-BFGS method, whose initial inverse Hessian approximation is the identity matrix, a class of diagonal preconditioners is derived from the weak quasi-Newton relation with an additional parameter. Different choices of this parameter lead to some well-known diagonal updating formulae, which yield R-linear convergence for the L-BFGS method. Numerical experiments were performed on a set of large-scale unconstrained minimization problems to examine the impact of each choice of parameter. The computational results suggest that the proposed diagonally preconditioned L-BFGS methods outperform the standard L-BFGS method without preconditioning. Finally, we discuss the impact of the diagonal preconditioners on the L-BFGS method as compared to the standard L-BFGS method in terms of the number of iterations, the number of function/gradient evaluations, and the CPU time in seconds
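
    As a rough illustration of where such a preconditioner enters, the sketch below shows the standard L-BFGS two-loop recursion with the usual identity initial matrix replaced by a diagonal h0_diag. The function and variable names here are ours, and the thesis's actual derivation of the diagonal from the weak quasi-Newton relation is not reproduced.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list, h0_diag):
    """Two-loop recursion for the L-BFGS search direction.

    Standard L-BFGS uses h0_diag = np.ones_like(grad) (the identity);
    a diagonal preconditioner replaces it with a diagonal approximation
    of the inverse Hessian.  s_list and y_list hold the stored pairs,
    ordered oldest to newest.
    """
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q = q - a * y
    r = h0_diag * q  # apply the diagonal initial inverse Hessian H_0
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        rho = 1.0 / (y @ s)
        beta = rho * (y @ r)
        r = r + (a - beta) * s
    return -r  # quasi-Newton descent direction
```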

    Preconditioning on subspace quasi-Newton method for large scale unconstrained optimization

    Recently, the subspace quasi-Newton (SQN) method has been widely used in solving large-scale unconstrained optimization problems. Besides constructing subproblems in low dimensions, so that the storage requirement as well as the computational cost can be reduced, it can also be implemented extremely fast when the objective function is a combination of computationally cheap non-linear functions. However, the main deficiency of the SQN method is that it can be very slow on certain types of non-linear problems. Hence, a preconditioner which is computationally cheap and a good approximation to the actual Hessian is constructed to speed up the convergence of the quasi-Newton method, since evaluating the actual Hessian is considered impractical and costly. For this purpose, a diagonal updating matrix is derived to replace the identity matrix in approximating the initial inverse Hessian. The numerical results show that the preconditioned SQN method performs better than the standard SQN method without preconditioning
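
    One concrete and common way to build such a diagonal updating matrix is the least-change update under the weak secant relation s^T D s = s^T y; the sketch below shows that generic form under our own naming, not necessarily the exact update derived in the paper.

```python
import numpy as np

def weak_secant_diagonal_update(d, s, y):
    """Least-change update of a diagonal Hessian approximation
    D = diag(d) subject to the weak secant relation
        s^T D_new s = s^T y.
    Minimizing ||D_new - D||_F under that single constraint gives
    the closed form below.

    d : current diagonal entries,  s : step,  y : gradient change.
    """
    gap = s @ y - s @ (d * s)  # violation of the weak secant relation
    s2 = s * s
    return d + (gap / (s2 @ s2)) * s2
```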

    Limited-memory BFGS Systems with Diagonal Updates

    In this paper, we investigate a formula to solve systems of the form (B + σI)x = y, where B is a limited-memory BFGS quasi-Newton matrix and σ is a positive constant. Systems of this type arise naturally in large-scale optimization, for example in trust-region methods as well as doubly-augmented Lagrangian methods. We show that, provided a simple condition holds on B_0 and σ, the system (B + σI)x = y can be solved via a recursion formula that requires only vector inner products. This formula has complexity M^2 n, where M is the number of L-BFGS updates and n >> M is the dimension of x
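
    For intuition, a shifted system of this kind can be solved with the Sherman-Morrison-Woodbury identity once B is written in a compact low-rank form B = γI + Ψ M Ψ^T. The sketch below (our notation, not the paper's inner-product recursion) shows the basic algebra, assuming M is invertible.

```python
import numpy as np

def solve_shifted(gamma, Psi, M, sigma, y):
    """Solve (B + sigma*I) x = y with B = gamma*I + Psi @ M @ Psi.T,
    the generic low-rank (compact-representation) form, via the
    Sherman-Morrison-Woodbury identity.  Only a small system of
    size 2M is factored; everything else is matrix-vector work.
    """
    tau = gamma + sigma
    small = np.linalg.inv(M) + (Psi.T @ Psi) / tau  # 2M x 2M system
    inner = np.linalg.solve(small, Psi.T @ y / tau)
    return y / tau - (Psi @ inner) / tau
```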

    A quasi-Newton proximal splitting method

    A new result in convex analysis on the calculation of proximity operators in certain scaled norms is derived. We describe efficient implementations of the proximity calculation for a useful class of functions; the implementations exploit the piecewise-linear nature of the dual problem. The second part of the paper applies this result to the acceleration of convex minimization problems and leads to an elegant quasi-Newton method. The optimization method compares favorably against state-of-the-art alternatives. The algorithm has extensive applications, including signal processing, sparse recovery, and machine learning and classification
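
    As a small illustration of a proximity operator in a scaled norm, the sketch below works out the simplest case: the l1 penalty under a positive diagonal metric, which separates into coordinate-wise soft-thresholding. The paper's class of functions and scalings is broader; the names here are ours.

```python
import numpy as np

def prox_l1_diag(x, lam, d):
    """Proximity operator of lam*||.||_1 in the metric of a positive
    diagonal matrix D = diag(d):
        argmin_z  lam*||z||_1 + 0.5 * (z - x)^T D (z - x).
    The problem separates per coordinate, giving soft-thresholding
    with coordinate-dependent thresholds lam / d_i.
    """
    thresh = lam / d
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)
```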

    On Solving L-SR1 Trust-Region Subproblems

    In this article, we consider solvers for large-scale trust-region subproblems when the quadratic model is defined by a limited-memory symmetric rank-one (L-SR1) quasi-Newton matrix. We propose a solver that exploits the compact representation of L-SR1 matrices. Our approach makes use of both an orthonormal basis for the eigenspace of the L-SR1 matrix and the Sherman-Morrison-Woodbury formula to compute global solutions to trust-region subproblems. To compute the optimal Lagrange multiplier for the trust-region constraint, we use Newton's method with a judicious initial guess that does not require safeguarding. A crucial property of this solver is that it is able to compute high-accuracy solutions even in the so-called hard case. Additionally, the optimal solution is determined directly by formula, not iteratively. Numerical experiments demonstrate the effectiveness of this solver
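
    To give a flavour of the Lagrange-multiplier computation, the sketch below runs Newton's method on the standard trust-region secular equation in the eigenbasis of the model Hessian. The article's judicious initialization and hard-case handling are deliberately not reproduced, and all names are ours.

```python
import numpy as np

def tr_multiplier(eigvals, a, delta, tol=1e-10, max_iter=100):
    """Newton's method for the secular equation
        phi(sigma) = 1/||p(sigma)|| - 1/delta = 0,
    where, in the eigenbasis of the model Hessian with eigenvalues
    eigvals and transformed gradient coefficients a,
        ||p(sigma)||^2 = sum_i a_i^2 / (eigvals_i + sigma)^2.
    """
    sigma = max(0.0, -eigvals.min()) + 1e-4  # keep all shifts positive
    for _ in range(max_iter):
        denom = eigvals + sigma
        pnorm = np.sqrt(np.sum((a / denom) ** 2))
        phi = 1.0 / pnorm - 1.0 / delta
        if abs(phi) < tol:
            break
        dphi = np.sum(a ** 2 / denom ** 3) / pnorm ** 3  # phi'(sigma) > 0
        sigma -= phi / dphi
    return sigma
```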

    A diagonal quasi-Newton method based on minimizing the Byrd-Nocedal function for unconstrained optimization

    A new quasi-Newton method with a diagonal updating matrix is suggested, where the diagonal elements are determined by minimizing the measure function of Byrd and Nocedal subject to the weak secant equation of Dennis and Wolkowicz. The Lagrange multiplier of this minimization problem is computed by an adaptive procedure based on the conjugacy condition. The convergence of the algorithm is proved for twice differentiable, convex functions that are bounded below, using only the trace and the determinant. Using a set of 80 unconstrained optimization test problems and some applications from the MINPACK-2 collection, we find computational evidence that the algorithm is more efficient and more robust than steepest descent, the Barzilai-Borwein algorithm, the Cauchy algorithm with Oren-Luenberger scaling, and the classical BFGS algorithm with the Wolfe line search conditions
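
    For a given multiplier, the diagonal minimizer of the Byrd-Nocedal measure tr(D) - ln det(D) under the weak secant constraint has a simple closed form, sketched below under our own naming; how the paper adaptively selects the multiplier via the conjugacy condition is not reproduced.

```python
import numpy as np

def byrd_nocedal_diagonal(s, lam):
    """Diagonal D = diag(d) minimizing tr(D) - ln det(D) subject to
    the weak secant relation, for a given Lagrange multiplier lam.
    Stationarity of sum_i (d_i - ln d_i) + lam * sum_i d_i * s_i**2
    in each d_i gives 1 - 1/d_i + lam*s_i**2 = 0, i.e.
        d_i = 1 / (1 + lam * s_i**2).
    """
    return 1.0 / (1.0 + lam * s * s)
```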

    Improved Diagonal Hessian Approximations for Large-Scale Unconstrained Optimization

    We consider some diagonal quasi-Newton methods for solving large-scale unconstrained optimization problems. A simple and effective approach for diagonal quasi-Newton algorithms is presented by proposing new updates of the diagonal entries of the Hessian. Moreover, we suggest employing an extra BFGS update of the diagonal updating matrix and using its diagonal again. Numerical experiments on a collection of standard test problems show, in particular, that the proposed diagonal quasi-Newton methods perform substantially better than certain available diagonal methods
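
    A minimal sketch of the "extra BFGS update, then take the diagonal again" idea, assuming a standard direct BFGS update of B = diag(d) (our naming, not necessarily the paper's exact formulation):

```python
import numpy as np

def diag_of_bfgs_update(d, s, y):
    """Apply one BFGS update to the diagonal matrix D = diag(d),
        B+ = D - (D s)(D s)^T / (s^T D s) + y y^T / (y^T s),
    and keep only the diagonal of the result as the next diagonal
    Hessian approximation.
    """
    Ds = d * s
    return d - Ds ** 2 / (s @ Ds) + y ** 2 / (y @ s)
```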

    Parallel projected variable metric algorithms for unconstrained optimization

    The parallel variable metric optimization algorithms of Straeter (1973) and van Laarhoven (1985) are reviewed, and possible drawbacks of these algorithms are noted. By including Davidon (1975) projections in the variable metric updating, Straeter's algorithm is generalized to a family of parallel projected variable metric algorithms which do not suffer these drawbacks and which retain quadratic termination. Finally, the numerical performance of one member of the family is examined on several standard example problems, illustrating how the choice of the displacement vectors affects the performance of the algorithm