
    Convergence and stability of line search methods for unconstrained optimization.

    This paper explores the stability of general line search methods, in the sense of Lyapunov, for minimizing a smooth nonlinear function. In particular, we give sufficient conditions for a line search method to be globally asymptotically stable. Our analysis suggests that the proposed sufficient conditions for asymptotic stability are equivalent to the Zoutendijk-type conditions used in conventional global convergence analysis.
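
    For reference, the Zoutendijk condition alluded to above is the classical summability condition (stated here in standard notation, where θ_k is the angle between the search direction d_k and the steepest-descent direction −∇f(x_k)):

        Σ_k cos²θ_k ‖∇f(x_k)‖² < ∞,   with cos θ_k = −∇f(x_k)ᵀ d_k / (‖∇f(x_k)‖ ‖d_k‖).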

    A three-term conjugate gradient method with nonmonotone line search for unconstrained optimization

    The technique of nonmonotone line search has received much attention in nonlinear optimization. This technique can reduce the computational cost of the line search process and increase the rate of convergence of the algorithm. However, the convergence of this line search scheme relies on rather restrictive assumptions on the search directions, which may not hold for most conjugate gradient methods. In this paper, we therefore propose a three-term conjugate gradient method with a nonmonotone backtracking line search technique for solving large-scale unconstrained optimization problems. Convergence analysis of the proposed method is established under reasonable conditions. Numerical experiments carried out on benchmark test problems clearly indicate the effectiveness of the developed algorithm in terms of efficiency and robustness.
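
    As a rough illustration of the nonmonotone backtracking idea (a minimal sketch in the spirit of the Grippo-Lampariello-Lucidi rule, not the paper's exact scheme; all names and parameter defaults are assumptions):

        import numpy as np

        def nonmonotone_backtracking(f, grad, x, d, f_hist, c=1e-4, rho=0.5, alpha=1.0):
            # Accept the step once f decreases relative to the maximum of the
            # last few function values, rather than relative to f(x) alone.
            f_ref = max(f_hist)        # reference value over recent iterates
            slope = grad(x) @ d        # directional derivative g_k^T d_k (< 0 for descent)
            while f(x + alpha * d) > f_ref + c * alpha * slope:
                alpha *= rho           # shrink the trial stepsize
            return alpha

        # Example: one step on f(x) = ||x||^2 from x = (1, 1) along -grad f(x).
        f = lambda v: v @ v
        g = lambda v: 2 * v
        x = np.array([1.0, 1.0])
        step = nonmonotone_backtracking(f, g, x, -g(x), f_hist=[f(x)])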

    Nonlinear Conjugate Gradient Methods with Wolfe Type Line Search

    The nonlinear conjugate gradient method is one of the most useful methods for unconstrained optimization problems. In this paper, we consider three kinds of nonlinear conjugate gradient methods with Wolfe-type line search for unconstrained optimization problems. Under some mild assumptions, global convergence results for the given methods are established. The numerical results show that nonlinear conjugate gradient methods with Wolfe-type line search are efficient for some unconstrained optimization problems.
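
    For concreteness, a Wolfe-type line search accepts a trial stepsize only when it satisfies both a sufficient-decrease and a curvature condition; a minimal check (the defaults 0 < c1 < c2 < 1 are the usual textbook choices, not taken from the paper):

        def satisfies_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
            # Weak Wolfe conditions for stepsize alpha along descent direction d.
            g0 = grad(x) @ d                                         # slope at alpha = 0 (< 0)
            sufficient = f(x + alpha * d) <= f(x) + c1 * alpha * g0  # Armijo decrease
            curvature = grad(x + alpha * d) @ d >= c2 * g0           # curvature condition
            return sufficient and curvature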

    Global Convergence of a Nonlinear Conjugate Gradient Method

    A modified PRP nonlinear conjugate gradient method for solving unconstrained optimization problems is proposed. An important property of the proposed method is that the sufficient descent property is guaranteed independently of any line search. Using the Wolfe line search, the global convergence of the proposed method is established for nonconvex minimization. Numerical results show that the proposed method is effective and promising in comparison with the VPRP, CG-DESCENT, and DL+ methods.
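
    For orientation, the classical PRP direction update that such methods modify looks as follows (a sketch of the textbook PRP+ rule, not the paper's modified formula):

        def prp_plus_direction(g_new, g_old, d_old):
            # beta_k = max(0, g_k^T (g_k - g_{k-1}) / ||g_{k-1}||^2)  (PRP+),
            # truncated at zero to aid global convergence.
            y = g_new - g_old
            beta = max(0.0, (g_new @ y) / (g_old @ g_old))
            return -g_new + beta * d_old   # next search direction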

    New Inexact Line Search Method for Unconstrained Optimization

    We propose a new inexact line search rule and analyze the global convergence and convergence rate of the related descent methods. The new line search rule is similar to the Armijo line search rule and contains it as a special case. It allows a larger stepsize at each line search step while maintaining the global convergence of the related line search methods, which makes it possible to design new line search methods in a wider sense. In some special cases, the new descent method reduces to the Barzilai and Borwein method. Numerical results show that the new line search methods are efficient for solving unconstrained optimization problems.
    Peer reviewed. Full text: http://deepblue.lib.umich.edu/bitstream/2027.42/45195/1/10957_2005_Article_6553.pd
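
    One plausible reading of how such a rule can reduce to Barzilai-Borwein (a hedged sketch only: Armijo-type backtracking whose initial trial step is the BB1 stepsize, so the pure BB step is accepted whenever it already passes the decrease test; names and defaults are assumptions):

        def armijo_bb_step(f, grad, x, x_prev, g, g_prev, c=1e-4, rho=0.5):
            s, y = x - x_prev, g - g_prev
            alpha = (s @ s) / (s @ y) if s @ y > 0 else 1.0   # BB1 trial stepsize
            d = -g                                            # steepest-descent direction
            while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
                alpha *= rho                                  # backtrack only if needed
            return x + alpha * d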

    Forward-backward truncated Newton methods for convex composite optimization

    This paper proposes two proximal Newton-CG methods for convex nonsmooth optimization problems in composite form. The algorithms are based on a reformulation of the original nonsmooth problem as the unconstrained minimization of a continuously differentiable function, namely the forward-backward envelope (FBE). The first algorithm is based on a standard line search strategy, whereas the second one retains the global efficiency estimates of the corresponding first-order methods while achieving fast asymptotic convergence rates. Furthermore, the methods are computationally attractive, since each Newton iteration requires only the approximate solution of a linear system of usually small dimension.
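
    A minimal sketch of evaluating the forward-backward envelope for a composite objective f + g, with f smooth and g proximable (the signatures and the prox convention are assumptions, not the paper's API):

        def fbe_value(x, f, grad_f, g, prox_g, gamma):
            # Forward-backward step: z = prox_{gamma*g}(x - gamma * grad f(x)).
            gx = grad_f(x)
            z = prox_g(x - gamma * gx, gamma)
            r = x - z                          # fixed-point residual
            # FBE(x) = f(x) - grad f(x)^T (x - z) + g(z) + ||x - z||^2 / (2*gamma);
            # for suitable gamma its minimizers coincide with those of f + g.
            return f(x) - gx @ r + g(z) + (r @ r) / (2 * gamma)

    With g the l1 norm, for instance, prox_g is componentwise soft-thresholding.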

    A Comparative Study of Some Modifications of CG Methods under Exact Line Search

    The Conjugate Gradient (CG) method is a technique for solving nonlinear unconstrained optimization problems. In this paper, we analyse the performance of two modifications and compare the results with the classical conjugate gradient methods. The proposed methods possess global convergence properties for general functions under exact line search. Numerical experiments show that the two modifications are more efficient on the test problems than the classical CG coefficients.
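
    For context, exact line search admits a closed form only in special cases; on a quadratic f(x) = ½ xᵀAx − bᵀx, the exact minimizing stepsize along a direction d is a standard fact (not specific to this paper):

        def exact_step_quadratic(A, g, d):
            # Minimize f(x + alpha * d) exactly for quadratic f with Hessian A,
            # where g is the gradient at the current point:
            #   alpha* = -g^T d / (d^T A d).
            return -(g @ d) / (d @ (A @ d))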

    Stability Analysis Of Continuous Conjugate Gradient Method

    The Conjugate Gradient method has proven very successful for solving large-scale unconstrained optimization problems. However, the line search required in the Conjugate Gradient method is sometimes extremely difficult and computationally expensive. Sun and Zhang [J. Sun and J. Zhang (2001), Global convergence of conjugate gradient methods without line search] showed that the Conjugate Gradient method is globally convergent with a fixed stepsize α_k determined by a formula of the form α_k = −δ g_kᵀ p_k / ‖p_k‖²_{Q_k}. Their result suggests that a line search is not compulsory for the global convergence of the Conjugate Gradient method. The objective of this dissertation is therefore to determine the range of the parameters α and β that ensures the stability of the Conjugate Gradient method.
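
    A hedged sketch of a conjugate gradient iteration that replaces the line search with a fixed stepsize of the form quoted above (here with Q_k = I; δ and the Fletcher-Reeves choice of β are purely illustrative assumptions):

        import numpy as np

        def cg_fixed_stepsize(grad, x, delta=0.5, iters=50):
            g = grad(x)
            p = -g                                   # initial search direction
            for _ in range(iters):
                if g @ g < 1e-20:                    # stop once the gradient vanishes
                    break
                alpha = -delta * (g @ p) / (p @ p)   # fixed stepsize, no line search
                x = x + alpha * p
                g_new = grad(x)
                beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves beta (illustrative)
                p, g = -g_new + beta * p, g_new
            return x

        # Example: minimize ||x||^2 from (1, 1, 1); one step reaches the origin.
        x_star = cg_fixed_stepsize(lambda v: 2 * v, np.ones(3))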