
    Secant update version of quasi-Newton PSB with weighted multisecant equations

    Quasi-Newton methods are widely used in nonlinear optimization. In these methods, the quality and cost of the Hessian estimate strongly influence the efficiency of the optimization algorithm, which matters especially for computationally costly problems. One strategy for obtaining a more accurate Hessian estimate is to make maximal use of the available information by combining several properties. The Powell-Symmetric-Broyden (PSB) method, for example, imposes satisfaction of the last secant equation, known as the secant update property, together with symmetry of the Hessian (Powell in Nonlinear Programming 31-65, 1970). A natural next step would be to impose several secant equations at once, incorporating more information into the Hessian; however, Schnabel proved that a symmetric update cannot, in general, satisfy multiple secant equations (Schnabel in Quasi-Newton methods using multiple secant equations, 1983). Penalized PSB (pPSB) works around this impossibility by producing a symmetric Hessian and penalizing the non-satisfaction of the multiple secant equations through weight factors (Gratton et al. in Optim Methods Softw 30(4):748-755, 2015), but in doing so it gives up the secant update property. In this paper, we combine the properties of PSB and pPSB by adding the secant update property to pPSB, which yields the secant update penalized PSB (SUpPSB). The proposed formula also avoids matrix inversions, which makes it easier to compute. In addition, SUpPSB performs globally better than pPSB.
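    The two properties the abstract builds on can be stated concretely. Below is a minimal sketch (in Python/NumPy, and not the paper's SUpPSB or pPSB formula) of the classical single-secant PSB update: it keeps the Hessian estimate symmetric and enforces the secant equation B_new s = y, the pair of requirements that the multisecant variants above trade off.

```python
# Minimal sketch of the classical Powell-Symmetric-Broyden (PSB) update.
# Not the paper's SUpPSB/pPSB formula; it illustrates the two properties
# the abstract discusses: symmetry and the (single) secant equation.
import numpy as np

def psb_update(B, s, y):
    """Return B_new, symmetric and satisfying B_new @ s = y."""
    r = y - B @ s                      # residual of the secant equation
    ss = s @ s
    return (B
            + (np.outer(r, s) + np.outer(s, r)) / ss
            - (s @ r) * np.outer(s, s) / ss**2)

# Quick check of the two PSB properties on random data.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)); B = (B + B.T) / 2   # symmetric start
s, y = rng.standard_normal(4), rng.standard_normal(4)
B1 = psb_update(B, s, y)
assert np.allclose(B1, B1.T)      # symmetry is preserved
assert np.allclose(B1 @ s, y)     # the last secant equation holds
```

    Multisecant variants instead ask for B_new S = Y over matrices S, Y collecting several step/gradient-difference pairs; Schnabel's result says a symmetric update generally cannot satisfy all columns exactly, which is precisely the tension pPSB resolves by penalization and SUpPSB refines.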

    NLTGCR: A class of Nonlinear Acceleration Procedures based on Conjugate Residuals

    This paper develops a new class of nonlinear acceleration algorithms based on extending conjugate residual-type procedures from linear to nonlinear equations. Depending on which variant is implemented, the main algorithm has strong similarities with Anderson acceleration as well as with inexact Newton methods. We prove theoretically, and verify experimentally on a variety of problems ranging from simulation experiments to deep learning applications, that our method is a powerful accelerated iterative algorithm. (Comment: under review.)
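    The abstract does not give the NLTGCR iteration itself, but it places the method in the same family as Anderson acceleration. As a hedged illustration of that family only (a minimal sketch of standard type-II Anderson acceleration, not the paper's NLTGCR algorithm), the following accelerates a fixed-point iteration x = g(x) by combining the last few residuals in a least-squares sense.

```python
# Minimal sketch of type-II Anderson acceleration for x = g(x).
# Illustrates the family the abstract refers to; NOT the NLTGCR algorithm.
import numpy as np

def anderson(g, x0, m=5, iters=50, tol=1e-10):
    x = np.asarray(x0, dtype=float)
    G_hist, F_hist = [], []            # histories of g(x) and residuals
    for _ in range(iters):
        gx = g(x)
        f = gx - x                     # fixed-point residual
        if np.linalg.norm(f) < tol:
            break
        G_hist.append(gx); F_hist.append(f)
        if len(F_hist) > m + 1:        # keep a sliding window of depth m
            G_hist.pop(0); F_hist.pop(0)
        if len(F_hist) == 1:
            x = gx                     # plain fixed-point step
        else:
            dF = np.column_stack([F_hist[i+1] - F_hist[i]
                                  for i in range(len(F_hist) - 1)])
            dG = np.column_stack([G_hist[i+1] - G_hist[i]
                                  for i in range(len(G_hist) - 1)])
            gamma = np.linalg.lstsq(dF, f, rcond=None)[0]
            x = gx - dG @ gamma        # accelerated step
    return x

# Example: the fixed point of cos(x) (~0.739085), reached in few iterations.
print(anderson(np.cos, np.array([1.0])))
```

    Like conjugate residual methods in the linear case, this scheme minimizes a residual over a small subspace built from recent iterates; the variants mentioned in the abstract differ in how that subspace and the resulting step are constructed.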