
    A Three-Term Conjugate Gradient Method with Sufficient Descent Property for Unconstrained Optimization

    Conjugate gradient methods are widely used for solving large-scale unconstrained optimization problems because they do not require the storage of matrices. In this paper, we propose a general form of three-term conjugate gradient methods that always generate a sufficient descent direction. We give a sufficient condition for the global convergence of the proposed general method. Moreover, we present a specific three-term conjugate gradient method based on the multi-step quasi-Newton method. Finally, some numerical results for the proposed method are given.
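
    For orientation, a generic three-term conjugate gradient iteration has the form below; the specific choices of the scalars \beta_k and \theta_k that guarantee sufficient descent are the paper's contribution and are not reproduced here:

        x_{k+1} = x_k + \alpha_k d_k, \qquad
        d_k = -g_k + \beta_k d_{k-1} + \theta_k y_{k-1}, \qquad
        y_{k-1} = g_k - g_{k-1},

    where g_k = \nabla f(x_k) and \alpha_k is the step size produced by a line search.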

    Global Convergence of a Nonlinear Conjugate Gradient Method

    A modified PRP nonlinear conjugate gradient method for solving unconstrained optimization problems is proposed. An important property of the proposed method is that the sufficient descent property is guaranteed independently of any line search. Using the Wolfe line search, the global convergence of the proposed method is established for nonconvex minimization. Numerical results show that the proposed method is effective and promising in comparison with the VPRP, CG-DESCENT, and DL+ methods.
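
    For context, the classical Polak–Ribière–Polyak (PRP) parameter that such modifications build on is

        \beta_k^{PRP} = \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2},

    with g_k = \nabla f(x_k); the abstract does not specify the modification itself, so it is not reproduced here.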

    Convergence of the steepest descent method with line searches and uniformly convex objective in reflexive Banach spaces

    In this paper, we present some algorithms for unconstrained convex optimization problems. The development and analysis of these methods are carried out in a Banach space setting. We begin by introducing a general framework for achieving global convergence without the Lipschitz conditions on the gradient that are usual in the current literature. This paper extends to Banach spaces earlier analyses of the steepest descent method for convex optimization, most of which were carried out in less general spaces.
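
    In the familiar setting of R^n, the method under analysis is simply x_{k+1} = x_k - t_k \nabla f(x_k) with the step size t_k chosen by a line search. In a reflexive Banach space X, the derivative f'(x_k) lies in the dual space X^*, so it must first be mapped back into X (for instance via a duality mapping) before it can serve as a search direction; handling this is part of what a Banach-space analysis involves.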

    A New Method with Sufficient Descent Property for Unconstrained Optimization

    Recently, the sufficient descent property has played an important role in the global convergence analysis of some iterative methods. In this paper, we propose a new iterative method for solving unconstrained optimization problems. This method provides a sufficient descent direction for the objective function. Moreover, the global convergence of the proposed method is established under some appropriate conditions. We also report some numerical results and compare the performance of the proposed method with that of some existing methods. Numerical results indicate that the presented method is efficient.
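
    The sufficient descent property referred to in these entries is the standard condition

        g_k^T d_k \le -c \|g_k\|^2   for all k and some constant c > 0,

    which guarantees that each search direction d_k is a descent direction, with decrease proportional to \|g_k\|^2, independently of the line search used.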

    New Inexact Line Search Method for Unconstrained Optimization

    We propose a new inexact line search rule and analyze the global convergence and convergence rate of the related descent methods. The new line search rule is similar to the Armijo line search rule and contains it as a special case. It allows a larger step size in each line search procedure while maintaining the global convergence of the related line search methods, which makes it possible to design new line search methods in a wider sense. In some special cases, the new descent method reduces to the Barzilai–Borwein method. Numerical results show that the new line search methods are efficient for solving unconstrained optimization problems.
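
    The abstract does not state the new rule itself; for orientation, here is a minimal sketch of the classical Armijo backtracking rule that it contains as a special case (the parameter names c, rho, and t0 are generic conventions, not taken from the paper):

        import numpy as np

        def armijo_step(f, grad_f, x, d, c=1e-4, rho=0.5, t0=1.0, max_iter=50):
            """Backtracking line search: shrink t until the Armijo
            sufficient-decrease condition f(x + t d) <= f(x) + c t <grad, d>
            holds. Assumes d is a descent direction (grad_f(x) @ d < 0)."""
            fx = f(x)
            slope = grad_f(x) @ d
            t = t0
            for _ in range(max_iter):
                if f(x + t * d) <= fx + c * t * slope:
                    return t
                t *= rho
            return t  # fallback: smallest step size tried

        # Example: one steepest descent step on f(x) = x^T x
        f = lambda x: x @ x
        grad_f = lambda x: 2 * x
        x = np.array([1.0, -2.0])
        d = -grad_f(x)
        t = armijo_step(f, grad_f, x, d)
        print(t, x + t * d)  # t = 0.5, which is exact along this line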

    Trust-region based methods for unconstrained global optimization

    Convexity is an essential characteristic in optimization. In reality, many optimization problems are not unimodal, which makes their feasible regions non-convex. These conditions lead to hard global optimization problems even in low dimensions. In this study, two trust-region based methods are developed to deal with such problems. The developed methods use interval techniques to find regions where minimizers reside. The identified regions are convex and each contains at least one local minimizer. The developed methods are proven to satisfy the descent property, to be globally convergent, and to have low time complexity. Some benchmark functions with diverse properties were used in simulations of the developed methods. The simulation results show that the methods successfully identify all the global minimizers of the unconstrained non-convex benchmark functions. This study can be extended to constrained optimization problems in future work.
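
    The abstract does not detail the interval technique; as a rough illustration of the general idea only, the following minimal 1-D branch-and-prune discards subintervals whose interval lower bound on f exceeds the best function value sampled so far. The interval extension is hand-coded for the example function f(x) = (x^2 - 1)^2; everything here is an assumption, not the paper's method:

        def isq(lo, hi):
            """Interval extension of x -> x^2 on [lo, hi]."""
            a, b = lo * lo, hi * hi
            if lo <= 0.0 <= hi:
                return 0.0, max(a, b)
            return min(a, b), max(a, b)

        def F(lo, hi):
            """Interval extension of f(x) = (x^2 - 1)^2 on [lo, hi]."""
            a, b = isq(lo, hi)            # enclosure of x^2
            return isq(a - 1.0, b - 1.0)  # enclosure of (x^2 - 1)^2

        def branch_and_prune(lo, hi, tol=1e-3):
            """Subdivide [lo, hi]; discard boxes whose interval lower bound
            on f exceeds the best (smallest) sampled value seen so far."""
            f = lambda x: (x * x - 1.0) ** 2
            work, kept = [(lo, hi)], []
            best = min(f(lo), f(hi))
            while work:
                a, b = work.pop()
                m = 0.5 * (a + b)
                best = min(best, f(m))
                flo, _ = F(a, b)
                if flo > best:            # box cannot contain the global minimum
                    continue
                if b - a < tol:
                    kept.append((a, b))
                else:
                    work += [(a, m), (m, b)]
            return kept

        boxes = branch_and_prune(-2.0, 2.0)
        print(len(boxes), boxes)  # tiny boxes clustered around x = -1 and x = 1

    The surviving boxes cluster around the two global minimizers x = ±1; the paper's methods additionally verify convexity of the identified regions, which this sketch does not attempt.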

    Globally convergent algorithms for solving unconstrained optimization problems

    New algorithms for solving unconstrained optimization problems are presented, based on the idea of combining two types of descent directions: the anti-gradient direction and either the Newton or a quasi-Newton direction. The use of the latter directions improves the convergence rate. Global and superlinear convergence properties of these algorithms are established. Numerical experiments on some unconstrained test problems are reported, and the proposed algorithms are compared with some similar existing methods. This comparison demonstrates the efficiency of the proposed combined methods.
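
    The abstract does not spell out how the two directions are combined; one common pattern, shown here purely as an illustrative sketch (the acceptance test and all constants are generic assumptions, not the paper's rule), is to take the Newton step when it is a genuine descent direction and fall back to the anti-gradient otherwise:

        import numpy as np

        def combined_step(grad, hess, x, eps=1e-8):
            """Return a descent direction: the Newton direction if it exists
            and makes a sufficiently negative angle with the gradient,
            otherwise the anti-gradient (steepest descent) direction."""
            g = grad(x)
            try:
                d = np.linalg.solve(hess(x), -g)
                # accept the Newton step only if it is a descent direction
                if g @ d <= -eps * np.linalg.norm(g) * np.linalg.norm(d):
                    return d
            except np.linalg.LinAlgError:
                pass  # singular Hessian: fall back to the anti-gradient
            return -g

        # Example: minimize the Rosenbrock function with backtracking steps
        def f(x):
            return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
        def grad(x):
            return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                             200 * (x[1] - x[0] ** 2)])
        def hess(x):
            return np.array([[2 - 400 * (x[1] - 3 * x[0] ** 2), -400 * x[0]],
                             [-400 * x[0], 200.0]])

        x = np.array([-1.2, 1.0])
        for _ in range(100):
            d = combined_step(grad, hess, x)
            t = 1.0
            while f(x + t * d) > f(x) + 1e-4 * t * (grad(x) @ d):
                t *= 0.5
            x = x + t * d
        print(x)  # approaches the minimizer (1, 1)

    Near the solution the Newton test succeeds and full steps are taken, which is what gives the superlinear rate; far away, the anti-gradient fallback preserves global convergence.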