
    A Class of Diagonally Preconditioned Limited Memory Quasi-Newton Methods for Large-Scale Unconstrained Optimization

    The focus of this thesis is diagonal preconditioning of the limited memory quasi-Newton method for large-scale unconstrained optimization problems. In particular, the discussion centres on the diagonally preconditioned limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method. The L-BFGS method has been widely used in large-scale unconstrained optimization because of its effectiveness. However, a major drawback of the L-BFGS method is that it can be very slow on certain types of problems. Scaling and preconditioning have been used to boost its performance. In this study, a class of diagonally preconditioned L-BFGS methods is proposed. In contrast to the standard L-BFGS method, whose initial inverse Hessian approximation is the identity matrix, a class of diagonal preconditioners is derived from the weak quasi-Newton relation with an additional parameter. Different choices of this parameter lead to some well-known diagonal updating formulae that yield R-linear convergence of the L-BFGS method. Numerical experiments were performed on a set of large-scale unconstrained minimization problems to examine the impact of each choice of parameter. The computational results suggest that the proposed diagonally preconditioned L-BFGS methods outperform the standard L-BFGS method without preconditioning. Finally, we discuss the impact of the diagonal preconditioners on the L-BFGS method, compared to the standard L-BFGS method, in terms of the number of iterations, the number of function/gradient evaluations and the CPU time in seconds
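    As an illustration of where such a preconditioner enters the algorithm, the sketch below (not taken from the thesis) shows the standard L-BFGS two-loop recursion with the initial inverse Hessian approximation replaced by a diagonal matrix; how that diagonal is derived from the weak quasi-Newton relation is the subject of the thesis and is not reproduced here.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list, diag_precond):
    """Return -H*grad, where H is the L-BFGS inverse Hessian approximation
    built on the diagonal initial matrix H0 = diag(diag_precond)."""
    q = grad.copy()
    alphas, rhos = [], []
    # First loop: newest stored pair to oldest.
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / y.dot(s)
        alpha = rho * s.dot(q)
        q -= alpha * y
        alphas.append(alpha)
        rhos.append(rho)
    # Apply the diagonal preconditioner (the identity in plain L-BFGS).
    r = diag_precond * q
    # Second loop: oldest stored pair to newest.
    for (s, y), alpha, rho in zip(zip(s_list, y_list),
                                  reversed(alphas), reversed(rhos)):
        beta = rho * y.dot(r)
        r += (alpha - beta) * s
    return -r   # quasi-Newton search direction
```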

    Limited Memory BFGS method for Sparse and Large-Scale Nonlinear Optimization

    Optimization-based control systems are used in many areas of application, including aerospace engineering, economics, robotics and automotive engineering. This work was motivated by the demand for a large-scale sparse solver for this problem class. The sparsity of the problem is exploited for computational efficiency with respect to both performance and memory consumption. This includes efficient storage of the occurring matrices and vectors and an appropriate approximation of the Hessian matrix, which is the main subject of this work. To this end, a limited memory BFGS method has been developed. The limited memory BFGS method has been implemented in WORHP, a software library for solving nonlinear optimization problems. Its performance has been tested on different optimal control problems and test sets
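    A minimal, assumed sketch (not the WORHP implementation) of the memory bookkeeping that makes limited memory BFGS attractive for this setting: only the m most recent curvature pairs are retained, so the storage cost is a handful of n-vectors rather than a dense n-by-n Hessian.

```python
from collections import deque
import numpy as np

class LbfgsMemory:
    """Stores at most m curvature pairs (s, y) for a limited memory BFGS method."""

    def __init__(self, m=10, curvature_eps=1e-10):
        self.s_pairs = deque(maxlen=m)   # steps s_k = x_{k+1} - x_k
        self.y_pairs = deque(maxlen=m)   # gradient differences y_k = g_{k+1} - g_k
        self.curvature_eps = curvature_eps

    def push(self, s, y):
        # Keep the pair only if it carries usable positive curvature,
        # so the implicit Hessian approximation stays positive definite.
        if y.dot(s) > self.curvature_eps * np.linalg.norm(s) * np.linalg.norm(y):
            self.s_pairs.append(s)
            self.y_pairs.append(y)

    def storage(self, n):
        # Two n-vectors per stored pair instead of an n-by-n dense matrix.
        return 2 * len(self.s_pairs) * n
```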

    New Quasi-Newton Equation And Method Via Higher Order Tensor Models

    This thesis introduces a general approach by proposing a new quasi-Newton (QN) equation via a fourth order tensor model. To approximate the curvature of the objective function, more of the available information from the function values and gradients is employed. The efficiency of the usual QN methods is improved by accelerating the performance of the algorithms without increasing the storage demand. The presented equation allows the modification of several algorithms involving QN equations for practical optimization so that they possess superior convergence properties. Using the new equation, the BFGS method is modified. This is done twice, by employing the two different strategies proposed by Zhang and Xu (2001) and Wei et al. (2006) to generate positive definite updates. The superiority of these methods over the standard BFGS method and the modification proposed by Wei et al. (2006) is shown. Convergence analysis establishing the local and global convergence properties of these methods and numerical results showing the advantage of the modified QN methods are presented. Moreover, a new limited memory QN method for large-scale unconstrained optimization is developed, based on the modified BFGS update formula. A comparison between this new method and the method developed by Xiao et al. (2008) shows better numerical performance for the new method. The global and local convergence properties of the new method on uniformly convex problems are also analyzed. The compact limited memory BFGS method is modified to solve large-scale unconstrained optimization problems. This method is derived from the proposed new QN update formula. The new method yields a more efficient algorithm than the standard limited memory BFGS with simple bounds (L-BFGS-B) method when solving unconstrained problems. The implementation of the newly proposed method on a set of test problems shows that it is more efficient than the standard algorithm
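    For orientation, the family of modified secant equations to which such strategies belong can be written as follows; this is a general, hedged illustration of the framework, not the thesis's fourth-order equation itself. The standard secant condition B_{k+1} s_k = y_k is replaced by

```latex
% General modified quasi-Newton (secant) equation using function values as well
% as gradients; u_k is any vector with s_k^T u_k != 0 (commonly u_k = s_k).
\begin{aligned}
  B_{k+1} s_k &= \tilde{y}_k, \qquad
  \tilde{y}_k = y_k + \frac{\vartheta_k}{s_k^{\top} u_k}\, u_k,\\
  \vartheta_k &= 2\bigl(f_k - f_{k+1}\bigr) + \bigl(g_k + g_{k+1}\bigr)^{\top} s_k
  \quad \text{(the second-order choice attributed to Wei et al., 2006),}
\end{aligned}
```

    so that s_k^T \tilde{y}_k matches the true curvature s_k^T \nabla^2 f(x_{k+1}) s_k to higher order than the standard secant equation; higher-order tensor models lead to analogous, more accurate choices of \vartheta_k.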

    Modified Quasi-Newton Methods For Large-Scale Unconstrained Optimization

    The focus of this thesis is on finding the unconstrained minimizer of a function when the dimension n is large. Specifically, we focus on the well-known class of optimization methods called quasi-Newton methods. First, we briefly give some mathematical background. Then we discuss quasi-Newton methods, the fundamental class underlying most approaches to large-scale unconstrained optimization, as well as the related line search methods. A review of the optimization methods currently available for solving large-scale problems is also given. The main practical deficiency of quasi-Newton methods is the high computational cost of obtaining search directions, which is the key issue in large-scale unconstrained optimization. To address this deficiency, we introduce a variety of techniques for improving quasi-Newton methods for large-scale problems, including scaling the SR1 update, matrix-storage-free methods and the extension of modified BFGS updates to a limited-memory scheme. Comprehensive theoretical and experimental results are also given. Finally, we comment on some achievements of our research. Possible extensions are also given to conclude this thesis
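    Since the SR1 update recurs in several of these techniques, a minimal sketch is given below, assuming a standard safeguarded form rather than the thesis's own scaled variant: the rank-one correction is applied only when its denominator is safely away from zero, since the raw SR1 update can break down or lose positive definiteness.

```python
import numpy as np

def sr1_update(B, s, y, skip_tol=1e-8):
    """Safeguarded symmetric rank-one (SR1) update of a Hessian approximation B.

    s is the step x_{k+1} - x_k, y is the gradient difference g_{k+1} - g_k.
    The update is skipped when the denominator is too small."""
    r = y - B @ s                      # residual of the secant equation B s = y
    denom = r.dot(s)
    if abs(denom) < skip_tol * np.linalg.norm(s) * np.linalg.norm(r):
        return B                       # skip the update to avoid breakdown
    return B + np.outer(r, r) / denom
```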

    Towards large scale unconstrained optimization

    A large-scale unconstrained optimization problem arises when the dimension n is large. The notion of 'large scale' is machine dependent, and hence it can be difficult to state a priori when a problem is of large size. However, today an unconstrained problem with 400 or more variables is usually considered a large-scale problem. The main difficulty in dealing with large-scale problems is that effective algorithms for small-scale problems do not necessarily translate into efficient algorithms when applied to large-scale problems. Therefore, in dealing with unconstrained problems with a large number of variables, modifications must be made to the standard implementations of the many existing algorithms for the small-scale case. One of the most effective Newton-type methods for solving large-scale problems is the truncated Newton method. This method computes a Newton-type direction by truncating the conjugate gradient iterates (inner iterations) whenever a required accuracy is obtained, thereby guaranteeing superlinear convergence. Another effective approach to large-scale unconstrained optimization is the limited memory BFGS method. This method is well suited to large-scale problems because the storage of matrices is avoided by storing only a small number of vector pairs. The symmetric rank one (SR1) update is one of the simplest quasi-Newton updates for solving large-scale problems. However, a basic disadvantage is that the SR1 update may not preserve positive definiteness, even when updating a positive definite approximation. A simple restart procedure for the SR1 method, using the standard line search to avoid the loss of positive definiteness, will be implemented. The matrix-storage-free BFGS (MF-BFGS) method combines a restarting strategy with the BFGS method. We also construct a new matrix-storage-free method that uses the SR1 update (MF-SR1). The MF-SR1 method is superior to the MF-BFGS method on some problems, but on other problems the MF-BFGS method is more competitive because of its rapid convergence. The matrix-storage-free methods can be greatly accelerated by means of a simple scaling. Therefore, by applying a simple scaling to the SR1 and BFGS methods, we can improve them tremendously
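    To make the truncation idea concrete, here is a minimal sketch (assumed, not the thesis code) of the inner conjugate gradient loop of a truncated Newton method: the iterations for the Newton system H d = -g are stopped early once a residual-based forcing test is met, and a negative-curvature direction triggers an immediate exit. Only Hessian-vector products are needed, which is what keeps the method viable at large scale.

```python
import numpy as np

def truncated_newton_direction(hess_vec, grad, max_inner=50):
    """Compute an approximate Newton direction; hess_vec(v) returns H @ v."""
    n = grad.size
    d = np.zeros(n)
    r = -grad.copy()                       # residual of H d = -g (d = 0 initially)
    p = r.copy()
    # Forcing test: a tighter inner tolerance as the gradient shrinks
    # yields superlinear convergence of the outer iteration.
    tol = min(0.5, np.sqrt(np.linalg.norm(grad))) * np.linalg.norm(grad)
    for _ in range(max_inner):
        Hp = hess_vec(p)
        curv = p.dot(Hp)
        if curv <= 0:                      # negative curvature: stop the inner loop
            return d if d.any() else -grad # fall back to steepest descent if needed
        alpha = r.dot(r) / curv
        d += alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) <= tol:   # truncation test satisfied
            break
        beta = r_new.dot(r_new) / r.dot(r)
        p = r_new + beta * p
        r = r_new
    return d
```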