8 research outputs found

    A Simple Sufficient Descent Method for Unconstrained Optimization

    We develop a sufficient descent method for solving large-scale unconstrained optimization problems. At each iteration, the search direction is a linear combination of the gradients at the current and previous steps. An attractive property of this method is that the generated directions are always descent directions. Under appropriate conditions, we show that the proposed method converges globally. Numerical experiments on unconstrained minimization problems from the CUTEr library are reported, which illustrate that the proposed method is promising.
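    The abstract does not give the exact combination coefficients, so the sketch below (Python/NumPy) only illustrates the general idea: the direction mixes the current and previous gradients and is accepted only if it satisfies a sufficient descent condition g_k^T d_k <= -c_d ||g_k||^2, falling back to steepest descent otherwise. The mixing weight and the constants are hypothetical choices, not taken from the paper.

```python
import numpy as np

def sufficient_descent_method(f, grad, x0, c=1e-4, rho=0.5, tol=1e-6, max_iter=500):
    """Sketch of a gradient scheme whose search direction mixes the current and
    previous gradients and must satisfy a sufficient descent condition.
    The mixing weight beta below is a hypothetical choice, not the paper's."""
    c_d = 0.5                                  # sufficient descent constant (assumed)
    x = np.asarray(x0, dtype=float)
    g_prev = None
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g                                 # default: steepest descent
        if g_prev is not None:
            beta = 0.2 * (g @ g_prev) / (g_prev @ g_prev + 1e-12)  # hypothetical mixing weight
            d_try = -g + beta * g_prev
            if g @ d_try <= -c_d * (g @ g):    # keep it only if sufficiently descent
                d = d_try
        t = 1.0                                # backtracking Armijo line search
        while f(x + t * d) > f(x) + c * t * (g @ d):
            t *= rho
        x, g_prev = x + t * d, g
    return x

# usage on a simple convex quadratic f(x) = 0.5 x^T A x - b^T x
A = np.diag([1.0, 10.0]); b = np.array([1.0, 1.0])
quad = lambda x: 0.5 * x @ (A @ x) - b @ x
quad_grad = lambda x: A @ x - b
print(sufficient_descent_method(quad, quad_grad, np.zeros(2)))  # approaches [1, 0.1]
```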

    Regularization of Limited Memory Quasi-Newton Methods for Large-Scale Nonconvex Minimization

    This paper deals with regularized Newton methods, a flexible class of unconstrained optimization algorithms that is competitive with line search and trust region methods and potentially combines attractive elements of both. The particular focus is on combining regularization with limited memory quasi-Newton methods by exploiting the special structure of limited memory algorithms. Global convergence of the regularization methods is shown under mild assumptions, and the details of regularized limited memory quasi-Newton updates are discussed, including their compact representations. Numerical results using all large-scale test problems from the CUTEst collection indicate that our regularized version of L-BFGS is competitive with state-of-the-art line search and trust-region L-BFGS algorithms and with previous attempts at combining L-BFGS with regularization, while potentially outperforming some of them, especially when nonmonotonicity is involved. Comment: 23 pages, 4 figures.
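    The abstract does not spell out the regularized update, so the following is a minimal sketch of the general idea under stated assumptions: the step solves (B_k + sigma_k I) p = -g_k and sigma_k is adapted from the ratio of actual to predicted reduction, in the spirit of a trust-region update. A dense BFGS matrix stands in for the limited-memory compact representation used in the paper, and the acceptance thresholds are illustrative.

```python
import numpy as np

def regularized_bfgs(f, grad, x0, sigma0=1.0, tol=1e-6, max_iter=500):
    """Sketch of a regularized quasi-Newton iteration: solve (B + sigma*I) p = -g
    and adapt sigma like a trust-region radius. Dense BFGS is used for clarity;
    the paper works with limited-memory updates and their compact representation."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    B = np.eye(n)                          # dense BFGS Hessian approximation
    sigma = sigma0
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = np.linalg.solve(B + sigma * np.eye(n), -g)
        pred = -(g @ p + 0.5 * p @ (B @ p) + 0.5 * sigma * (p @ p))  # model decrease
        rho = (f(x) - f(x + p)) / pred
        if rho > 0.1:                      # accept the step
            x_new = x + p
            g_new = grad(x_new)
            s, y = x_new - x, g_new - g
            if s @ y > 1e-10:              # standard BFGS curvature safeguard
                Bs = B @ s
                B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)
            x, g = x_new, g_new
            if rho > 0.75:                 # very successful: relax regularization
                sigma = max(1e-8, 0.5 * sigma)
        else:
            sigma *= 4.0                   # reject the step, increase regularization
    return x

# usage on the 2-D Rosenbrock function
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
rosen_grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                                 200 * (x[1] - x[0]**2)])
print(regularized_bfgs(rosen, rosen_grad, np.array([-1.2, 1.0])))  # approaches [1, 1]
```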

    Secant update version of quasi-Newton PSB with weighted multisecant equations

    Quasi-Newton methods are often used in non-linear optimization. In these methods, the quality and cost of the estimate of the Hessian matrix have a major influence on the efficiency of the optimization algorithm, which matters greatly for computationally costly problems. One strategy for obtaining a more accurate Hessian estimate is to make maximal use of the available information during its computation, which is done by combining different properties. The Powell-Symmetric-Broyden (PSB) method, for example, imposes satisfaction of the last secant equation, called the secant update property, and symmetry of the Hessian (Powell, Nonlinear Programming, 31-65, 1970). Imposing the satisfaction of additional secant equations would be the natural next step to include more information in the Hessian; however, Schnabel proved that this is impossible (Schnabel, Quasi-Newton methods using multiple secant equations, 1983). Penalized PSB (pPSB) works around this impossibility by producing a symmetric Hessian and penalizing the violation of the multiple secant equations with weight factors (Gratton et al., Optim Methods Softw 30(4):748-755, 2015), but in doing so it loses the secant update property. In this paper, we combine the properties of PSB and pPSB by adding the secant update property to pPSB, yielding the secant update penalized PSB (SUpPSB). The proposed formula also avoids matrix inversions, which makes it cheaper to compute, and SUpPSB performs globally better than pPSB.
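    For reference, here is a minimal sketch (Python/NumPy) of the classical PSB update that the penalized and secant-update variants above build on: the symmetric correction of smallest Frobenius norm that makes the updated matrix satisfy the last secant equation. The new SUpPSB formula itself is not reproduced here.

```python
import numpy as np

def psb_update(B, s, y):
    """Classical Powell-Symmetric-Broyden (PSB) update: the symmetric matrix
    closest to B in the Frobenius norm that satisfies the secant equation
    B_new @ s = y."""
    r = y - B @ s                          # residual of the secant equation
    ss = s @ s
    return (B
            + (np.outer(r, s) + np.outer(s, r)) / ss
            - (r @ s) / ss**2 * np.outer(s, s))

# quick check on random data: symmetry and the secant equation hold after the update
rng = np.random.default_rng(0)
n = 5
B = np.eye(n)
s, y = rng.standard_normal(n), rng.standard_normal(n)
B1 = psb_update(B, s, y)
print(np.allclose(B1 @ s, y), np.allclose(B1, B1.T))  # True True
```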

    A secant-based Nesterov method for convex functions


    A Diagonal Quasi-Newton Method Based on Minimizing the Byrd-Nocedal Function for Unconstrained Optimization

    A new quasi-Newton method with a diagonal updating matrix is suggested, where the diagonal elements are determined by minimizing the measure function of Byrd and Nocedal subject to the weak secant equation of Dennis and Wolkowicz. The Lagrange multiplier of this minimization problem is computed by an adaptive procedure based on the conjugacy condition. Convergence of the algorithm is proved for twice differentiable, convex functions bounded below, using only the trace and the determinant. Using a set of 80 unconstrained optimization test problems and some applications from the MINPACK-2 collection, we present computational evidence that the algorithm is more efficient and more robust than steepest descent, the Barzilai-Borwein algorithm, the Cauchy algorithm with Oren-Luenberger scaling, and the classical BFGS algorithm with Wolfe line search conditions.
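    A minimal sketch of the underlying subproblem, under the assumptions stated in the abstract: minimize the Byrd-Nocedal measure psi(D) = tr(D) - ln det(D) over diagonal matrices D = diag(d) subject to the weak secant equation s^T D s = s^T y. Stationarity of the Lagrangian gives d_i = 1/(1 + lambda s_i^2); here the multiplier is found by plain scalar root-finding, whereas the paper computes it with an adaptive procedure based on the conjugacy condition.

```python
import numpy as np
from scipy.optimize import brentq

def diagonal_weak_secant_update(s, y):
    """Minimize tr(D) - ln det(D) over diagonal D subject to s^T D s = s^T y.
    Stationarity gives d_i = 1 / (1 + lam * s_i^2); lam is obtained here by
    scalar root-finding on the weak secant constraint."""
    s, y = np.asarray(s, float), np.asarray(y, float)
    sty = s @ y
    assert sty > 0, "the weak secant right-hand side must be positive"
    s2 = s**2

    def constraint(lam):                   # s^T D(lam) s - s^T y
        return np.sum(s2 / (1.0 + lam * s2)) - sty

    lo = -1.0 / s2.max() + 1e-10           # keeps every d_i positive
    hi = 1.0
    while constraint(hi) > 0:              # expand the bracket until the sign changes
        hi *= 2.0
    lam = brentq(constraint, lo, hi)
    return 1.0 / (1.0 + lam * s2)          # diagonal of the updated matrix

# usage: the weak secant equation holds for the returned diagonal
s = np.array([1.0, 2.0, 0.5, -1.0])
y = np.array([0.8, 2.5, 0.3, -1.2])
d = diagonal_weak_secant_update(s, y)
print(np.isclose(d @ s**2, s @ y))  # True
```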

    Analytical study of the Least Squares Quasi-Newton method for interaction problems

    Often in nature different systems interact, like fluids and structures, heat and electricity, or populations of species. The aim of this thesis is to find, describe and analyze solution methods for the equations resulting from the mathematical models describing such interacting systems. Even though powerful solvers often already exist for problems in a single physical domain (e.g. structural or fluid problems), the development of similar tools for multi-physics problems is still ongoing. When the interaction (or coupling) between the two systems is strong, many methods still fail or are computationally very expensive. Approaches for solving these multi-physics problems fall broadly into two categories: monolithic or partitioned. While we do not claim that the partitioned approach is a panacea for all coupled problems, in this thesis we focus on methods that solve (strongly) coupled problems with a partitioned approach in which each of the physical problems is solved with a specialized code that we treat as a black-box solver whose Jacobian is unknown. We also assume that calling these black boxes is the most expensive part of any algorithm, so that performance is judged by the number of such calls. In 2005 Vierendeels presented a new coupling procedure for this partitioned approach in a fluid-structure interaction context, based on sensitivity analysis of the important displacement and pressure modes detected during the iteration process. This approach only uses input-output couples of the solvers (one for the fluid problem and one for the structural problem). In this thesis we establish the properties of this method and show that it can be interpreted as a block quasi-Newton method with approximate Jacobians based on a least-squares formulation. We also establish and investigate other algorithms that exploit the original idea but use a single approximate Jacobian. The main focus of this thesis lies on establishing the algebraic properties of the methods under investigation rather than on the best implementation form.
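    A minimal sketch in the spirit of the least-squares quasi-Newton coupling described above, assuming the coupled problem is written as a fixed point x = H(x) of a black-box operator H (one call to H standing for one pass through both solvers). The correction is built from stored input-output differences via a small least-squares problem; the first-step relaxation factor and the toy operator are illustrative, not taken from the thesis.

```python
import numpy as np

def ls_quasi_newton_coupling(H, x0, tol=1e-10, max_iter=50):
    """Fixed-point acceleration with a least-squares approximate (inverse) Jacobian
    built only from input-output couples of the black-box operator H."""
    x = np.asarray(x0, dtype=float)
    x_tilde = H(x)
    r = x_tilde - x                        # fixed-point residual
    V, W = [], []                          # residual-difference and output-difference histories
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        if not V:
            x_new = x + 0.5 * r            # first step: simple relaxation (factor assumed)
        else:
            Vm, Wm = np.column_stack(V), np.column_stack(W)
            c, *_ = np.linalg.lstsq(Vm, -r, rcond=None)  # min ||V c + r||
            x_new = x_tilde + Wm @ c                     # quasi-Newton correction
        x_tilde_new = H(x_new)
        r_new = x_tilde_new - x_new
        V.append(r_new - r)                # store differences for later iterations
        W.append(x_tilde_new - x_tilde)
        x, x_tilde, r = x_new, x_tilde_new, r_new
    return x

# usage on a toy 2-D fixed-point problem standing in for the coupled solvers
H = lambda x: np.array([0.5 * np.cos(x[1]), 0.5 * np.sin(x[0]) + 0.1])
print(ls_quasi_newton_coupling(H, np.zeros(2)))  # an approximate coupled solution
```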

    LIPIcs, Volume 274, ESA 2023, Complete Volume


    Fragments d'Optimisation Différentiable - Théories et Algorithmes (Fragments of Differentiable Optimization - Theories and Algorithms)

    Lecture notes (in French) for Master-level optimization courses given at ENSTA (Paris, later Saclay), ENSAE (Paris), and at the universities Paris I, Paris VI and Paris Saclay (979 pages).