    Preconditioned Nonlinear Conjugate Gradient methods based on a modified secant equation

    This paper presents a twofold result for the Nonlinear Conjugate Gradient (NCG) method in large scale unconstrained optimization. First, we carry out a theoretical analysis in which preconditioning is embedded in a strong convergence framework of an NCG method from the literature. Mild conditions to be satisfied by the preconditioners are defined in order to preserve NCG convergence. As a second task, we detail the use of novel matrix-free preconditioners for NCG. Our proposals are based on quasi-Newton updates, and either satisfy the secant equation or a secant-like condition at some of the previous iterates. We show that, in some sense, the proposed preconditioners also approximate the inverse of the Hessian matrix. In particular, the structure of our preconditioners depends on the low-rank updates used, along with different choices of specific parameters. The low-rank updates are obtained as a by-product of the NCG iterations. The results of an extensive numerical experience on large scale CUTEst problems are reported, showing that our preconditioners can considerably improve the performance of NCG methods.
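
    As a concrete illustration of how such a matrix-free preconditioner enters the iteration, below is a minimal Python sketch of a preconditioned NCG loop with a Polak-Ribiere-type coefficient. The routine apply_prec is a hypothetical placeholder for the action of an inverse-Hessian approximation; the paper's specific low-rank quasi-Newton preconditioners are not reproduced here.

        import numpy as np

        def backtracking(f, x, d, g, alpha=1.0, rho=0.5, c=1e-4, alpha_min=1e-12):
            # Simple Armijo backtracking line search.
            while alpha > alpha_min and f(x + alpha * d) > f(x) + c * alpha * g.dot(d):
                alpha *= rho
            return alpha

        def preconditioned_ncg(f, grad, x0, apply_prec, tol=1e-6, max_iter=500):
            x = np.asarray(x0, dtype=float)
            g = grad(x)
            pg = apply_prec(g)              # preconditioned gradient M g, M ~ inverse Hessian
            d = -pg
            for _ in range(max_iter):
                if np.linalg.norm(g) < tol:
                    break
                alpha = backtracking(f, x, d, g)
                x = x + alpha * d
                g_new = grad(x)
                pg_new = apply_prec(g_new)
                # Preconditioned Polak-Ribiere(+) coefficient.
                beta = max(0.0, (g_new.dot(pg_new) - g_new.dot(pg)) / g.dot(pg))
                d = -pg_new + beta * d
                g, pg = g_new, pg_new
            return x

    With apply_prec = lambda g: g the sketch reduces to an unpreconditioned PR+ scheme, which makes the role of the preconditioner easy to isolate in experiments.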

    Composing Scalable Nonlinear Algebraic Solvers

    Most efficient linear solvers use composable algorithmic components, with the most common model being the combination of a Krylov accelerator and one or more preconditioners. A similar set of concepts may be used for nonlinear algebraic systems, where nonlinear composition of different nonlinear solvers may significantly improve the time to solution. We describe the basic concepts of nonlinear composition and preconditioning and present a number of solvers applicable to nonlinear partial differential equations. We have developed a software framework in order to easily explore the possible combinations of solvers. We show that the performance gains from using composed solvers can be substantial compared with gains from standard Newton-Krylov methods.
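
    The idea of composing nonlinear solvers can be sketched in a few lines: an outer loop applies the update of each inner solver in turn (multiplicative composition). The inner steps below, a damped nonlinear Richardson sweep and a finite-difference Newton step, are generic stand-ins and not the specific components of the paper's framework.

        import numpy as np

        def newton_step(F, x, eps=1e-7):
            # One Newton step with a finite-difference Jacobian (illustrative only).
            Fx = F(x)
            n = x.size
            J = np.empty((n, n))
            for j in range(n):
                e = np.zeros(n); e[j] = eps
                J[:, j] = (F(x + e) - Fx) / eps
            return x + np.linalg.solve(J, -Fx)

        def richardson_step(F, x, damping=0.5):
            # One damped nonlinear Richardson (fixed-point) sweep.
            return x - damping * F(x)

        def compose_multiplicative(F, x0, inner_steps, tol=1e-8, max_iter=50):
            # Multiplicative composition: apply each inner solver's update in sequence.
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                if np.linalg.norm(F(x)) < tol:
                    break
                for step in inner_steps:
                    x = step(F, x)
            return x

    For example, compose_multiplicative(F, x0, [richardson_step, newton_step]) precedes each Newton step with a cheap fixed-point sweep, which is one of the composition patterns discussed above.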

    Preconditioned subspace quasi-Newton method for large scale optimization

    The subspace quasi-Newton (SQN) method has been widely used for large scale unconstrained optimization problems. Its popularity is due to the fact that the method constructs subproblems in low dimensions, so that the storage requirement as well as the computational cost can be minimized. However, the main drawback of the SQN method is that it can be very slow on certain types of nonlinear problems, such as ill-conditioned problems. Hence, we propose a preconditioned SQN method, which is generally more effective than the SQN method. To achieve this, we propose to use a diagonal updating matrix, derived from the weak secant relation, in place of the identity matrix to approximate the initial inverse Hessian. Our numerical results show that the proposed preconditioned SQN method performs better than the SQN method without preconditioning.
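
    One standard way to build such a diagonal matrix is the least-change diagonal update that enforces the weak secant relation s^T B s = s^T y; whether this coincides exactly with the authors' formula is an assumption. A short sketch:

        import numpy as np

        def weak_secant_diagonal_update(d, s, y, eps=1e-12):
            # d: diagonal of the current Hessian approximation B = diag(d)
            # s: step x_{k+1} - x_k,  y: gradient change g_{k+1} - g_k
            s2 = s * s
            denom = np.dot(s2, s2)                # sum_i s_i**4
            if denom < eps:
                return d
            d_new = d + ((s.dot(y) - s2.dot(d)) / denom) * s2
            return np.maximum(d_new, eps)         # safeguard positive definiteness

    Before the safeguard, the update satisfies s^T diag(d_new) s = s^T y exactly; its inverse, 1.0 / d_new, would then replace the identity as the initial inverse Hessian approximation in the SQN subproblem.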

    Some diagonal preconditioners for limited memory quasi-Newton method for large scale optimization

    One of the well-known methods for solving large scale unconstrained optimization problems is the limited memory quasi-Newton (LMQN) method. This method is derived from a subproblem in low dimension, so that the storage requirement as well as the computational cost can be reduced. In this paper, we propose a preconditioned LMQN method, which is generally more effective than the LMQN method, addressing the main defect of the LMQN method that it can be very slow on certain types of nonlinear problems, such as ill-conditioned problems. To do this, we propose to use a diagonal updating matrix, derived from the weak quasi-Newton relation, to replace the identity matrix in approximating the initial inverse Hessian. The computational results show that the proposed preconditioned LMQN method performs better than the LMQN method without preconditioning.
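
    In a limited memory setting the diagonal enters as the initial inverse Hessian of the two-loop recursion. The sketch below shows an L-BFGS-style direction computation with a diagonal h0 (stored as a vector) in place of the usual identity or scalar scaling; how the diagonal itself is updated would follow the weak quasi-Newton relation, as in the sketch after the previous abstract, and its exact form in the paper is an assumption.

        import numpy as np

        def lbfgs_direction(g, s_list, y_list, h0):
            # g: current gradient; s_list, y_list: stored curvature pairs (most recent last)
            # h0: diagonal of the initial inverse Hessian approximation (vector)
            q = g.copy()
            rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
            alphas = []
            for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
                a = rho * s.dot(q)
                alphas.append(a)
                q -= a * y
            r = h0 * q                            # diagonal preconditioning of the recursion
            for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
                b = rho * y.dot(r)
                r += (a - b) * s
            return -r                             # quasi-Newton search direction

    Setting h0 to a vector of ones recovers the standard identity-initialized recursion, i.e. the unpreconditioned LMQN baseline.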

    Exploiting damped techniques for nonlinear conjugate gradient methods

    In this paper we propose the use of damped techniques within Nonlinear Conjugate Gradient (NCG) methods. Damped techniques were introduced by Powell and recently re-proposed by Al-Baali, and until now have only been applied in the framework of quasi-Newton methods. We extend their use to NCG methods in large scale unconstrained optimization, aiming at possibly improving the efficiency and the robustness of the latter methods, especially when solving difficult problems. We consider both unpreconditioned and Preconditioned NCG (PNCG). In the latter case, we embed damped techniques within a class of preconditioners based on quasi-Newton updates. Our purpose is to provide efficient preconditioners which approximate, in some sense, the inverse of the Hessian matrix, while still preserving the information provided by the secant equation or some of its modifications. The results of an extensive numerical experience highlight that the proposed approach is quite promising.
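
    Powell's damping replaces the curvature pair (s, y) by (s, y_hat) with y_hat = theta*y + (1 - theta)*B s, choosing theta so that s^T y_hat stays safely positive and quasi-Newton-type updates (here, the updates used to build the PNCG preconditioners) remain well defined. Below is a minimal sketch using the classical constant 0.2; the paper's exact parameterization may differ.

        import numpy as np

        def powell_damped_y(s, y, Bs, sigma=0.2):
            # s: step, y: gradient change, Bs: current Hessian approximation times s
            sBs = s.dot(Bs)
            sy = s.dot(y)
            if sy >= sigma * sBs:
                theta = 1.0
            else:
                theta = (1.0 - sigma) * sBs / (sBs - sy)
            return theta * y + (1.0 - theta) * Bs

    By construction s.dot(powell_damped_y(s, y, Bs)) >= sigma * s.dot(Bs), so the damped pair can always be used in the secant-type conditions mentioned above.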