
    A Three-Term Conjugate Gradient Method with Sufficient Descent Property for Unconstrained Optimization

    Conjugate gradient methods are widely used for solving large-scale unconstrained optimization problems because they do not require matrix storage. In this paper, we propose a general form of three-term conjugate gradient methods which always generate a sufficient descent direction. We give a sufficient condition for the global convergence of the proposed general method. Moreover, we present a specific three-term conjugate gradient method based on the multi-step quasi-Newton method. Finally, some numerical results for the proposed method are given.
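    The abstract does not reproduce the update itself; a standard general form of three-term methods of this type, with placeholder coefficients $\beta_k$ and $\theta_k$ and $y_{k-1} = g_k - g_{k-1}$, is

        $$d_0 = -g_0, \qquad d_k = -g_k + \beta_k d_{k-1} + \theta_k y_{k-1},$$

    and the sufficient descent property means $g_k^\top d_k \le -c\,\|g_k\|^2$ for some constant $c > 0$ and every $k$, independently of the line search.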

    Global convergence of new conjugate gradient method with inexact line search

    In this paper, we propose a new nonlinear conjugate gradient method (FRA) that satisfies a sufficient descent condition and achieves global convergence under the inexact strong Wolfe-Powell line search. Our numerical experiments show the efficiency of the new method in solving a set of problems from the CUTEst package: the proposed formula gives excellent numerical results in CPU time, number of iterations, and number of gradient evaluations when compared with the WYL, DY, PRP, and FR methods.
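    For reference, the comparison coefficients named above are the classical ones (the FRA coefficient itself is not stated in the abstract); with $y_{k-1} = g_k - g_{k-1}$,

        $$\beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \qquad \beta_k^{PRP} = \frac{g_k^\top y_{k-1}}{\|g_{k-1}\|^2}, \qquad \beta_k^{DY} = \frac{\|g_k\|^2}{d_{k-1}^\top y_{k-1}},$$

    and the strong Wolfe-Powell line search accepts a step $\alpha_k$ satisfying

        $$f(x_k + \alpha_k d_k) \le f(x_k) + \delta\,\alpha_k g_k^\top d_k, \qquad |g(x_k + \alpha_k d_k)^\top d_k| \le \sigma\,|g_k^\top d_k|, \qquad 0 < \delta < \sigma < 1.$$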

    Solving Unconstrained Optimization Problems by a New Conjugate Gradient Method with Sufficient Descent Property

    Some conjugate gradient methods have strong convergence properties but poor numerical stability, and conversely. Improving these methods is an interesting way to produce new methods with both strong convergence and numerical stability. In this paper, a new hybrid conjugate gradient method is introduced, based on the Fletcher formula (CD), which has strong convergence, and the Liu and Storey formula (LS), which has good numerical performance. The new directions satisfy the sufficient descent property independently of the line search. Under some mild assumptions, the global convergence of the new hybrid method is proved. Numerical results on unconstrained CUTEst test problems show that the new algorithm is very robust and efficient.
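    For context, the two ingredient coefficients are, with $y_{k-1} = g_k - g_{k-1}$,

        $$\beta_k^{CD} = \frac{\|g_k\|^2}{-d_{k-1}^\top g_{k-1}}, \qquad \beta_k^{LS} = \frac{g_k^\top y_{k-1}}{-d_{k-1}^\top g_{k-1}}.$$

    The abstract does not give the hybridization rule itself; common patterns for such hybrids are a convex combination $\beta_k = (1-\lambda_k)\beta_k^{CD} + \lambda_k \beta_k^{LS}$ or clipping one coefficient by the other.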

    Solving Optimal Control Problem of Monodomain Model Using Hybrid Conjugate Gradient Methods

    We present the numerical solutions for the PDE-constrained optimization problem arising in cardiac electrophysiology, that is, the optimal control problem of the monodomain model. This is a nonlinear optimization problem constrained by the monodomain model, which consists of a parabolic partial differential equation coupled to a system of nonlinear ordinary differential equations and has been widely used for simulating cardiac electrical activity. Our control objective is to dampen the excitation wavefront using an optimal applied extracellular current. Two hybrid conjugate gradient methods are employed for computing the optimal applied extracellular current, namely, the Hestenes-Stiefel-Dai-Yuan (HS-DY) method and the Liu-Storey-Conjugate-Descent (LS-CD) method. Our experimental results show that the excitation wavefronts are successfully dampened when these methods are used, and that the hybrid conjugate gradient methods are superior to the classical conjugate gradient methods when the Armijo line search is used.
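    A minimal sketch of such an optimization loop in Python follows, assuming the common reading of the HS-DY hybrid as $\beta_k = \max(0, \min(\beta_k^{HS}, \beta_k^{DY}))$; the function names, Armijo parameters, and the abstraction of the monodomain adjoint solve into grad() are illustrative, not the paper's implementation.

        import numpy as np

        def hybrid_cg_hsdy(f, grad, x0, tol=1e-6, max_iter=500):
            # Hybrid HS-DY conjugate gradient with Armijo backtracking.
            # The PDE-constrained gradient evaluation (forward monodomain
            # solve plus adjoint solve) is abstracted into grad().
            x = np.asarray(x0, dtype=float).copy()
            g = grad(x)
            d = -g
            for _ in range(max_iter):
                if np.linalg.norm(g) < tol:
                    break
                slope = float(g @ d)
                if slope >= 0.0:          # safeguard: restart with steepest descent
                    d = -g
                    slope = float(g @ d)
                # Armijo backtracking: shrink alpha until sufficient decrease holds
                alpha, c1, rho = 1.0, 1e-4, 0.5
                fx = f(x)
                while f(x + alpha * d) > fx + c1 * alpha * slope:
                    alpha *= rho
                x_new = x + alpha * d
                g_new = grad(x_new)
                y = g_new - g
                denom = float(d @ y)
                if abs(denom) > 1e-12:
                    beta_hs = float(g_new @ y) / denom      # Hestenes-Stiefel
                    beta_dy = float(g_new @ g_new) / denom  # Dai-Yuan
                    beta = max(0.0, min(beta_hs, beta_dy))  # assumed hybrid rule
                else:
                    beta = 0.0                              # restart direction
                d = -g_new + beta * d
                x, g = x_new, g_new
            return x

    For example, hybrid_cg_hsdy(lambda x: x @ x, lambda x: 2 * x, np.ones(5)) converges to the origin; in the paper's setting, x would be the discretized extracellular current and f the wavefront-damping objective.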

    Adaptive Localized Cayley Parametrization for Optimization over Stiefel Manifold

    We present an adaptive parametrization strategy for optimization problems over the Stiefel manifold, using generalized Cayley transforms to exploit powerful Euclidean optimization algorithms efficiently. The generalized Cayley transform translates an open dense subset of the Stiefel manifold into a vector space, and the open dense subset is determined by a tunable parameter called a center point. With the generalized Cayley transform, we recently proposed the naive Cayley parametrization, which reformulates the optimization problem over the Stiefel manifold as one over the vector space. Although this reformulation enables us to transplant powerful Euclidean optimization algorithms, their convergence may become slow under a poor choice of center point. To avoid such slow convergence, in this paper we propose adaptively estimating 'good' center points so that the reformulated problem can be solved faster. We also present a unified convergence analysis, in terms of the gradient, for cases where fairly standard Euclidean optimization algorithms are employed in the proposed adaptive parametrization strategy. Numerical experiments demonstrate that (i) the proposed strategy succeeds in escaping the slow convergence observed in the naive Cayley parametrization strategy; (ii) the proposed strategy outperforms the standard strategy which employs a retraction.
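    The building block here is the classical Cayley transform (the paper's generalization with a tunable center point is not reproduced in the abstract): for a skew-symmetric matrix $V$,

        $$\mathrm{Cay}(V) = \left(I - \tfrac{V}{2}\right)^{-1}\left(I + \tfrac{V}{2}\right), \qquad V^\top = -V,$$

    maps the vector space of skew-symmetric matrices diffeomorphically onto the open dense subset of orthogonal matrices having no eigenvalue equal to $-1$. Reparametrizing through such a transform turns a problem over the manifold into one over a vector space, where standard Euclidean methods apply.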

    A New Conjugate Gradient Coefficient for Large Scale Nonlinear Unconstrained Optimization

    Conjugate gradient (CG) methods have played an important role in solving large-scale unconstrained optimization due to their low memory requirements and global convergence properties. Numerous studies and modifications have recently been devoted to improving this method. In this paper, a new modification of the conjugate gradient coefficient $\beta_k$ with global convergence properties is presented. The global convergence result is established using exact line searches. Preliminary results show that the proposed formula is competitive when compared with other CG coefficients. Mathematics Subject Classification: 65K10, 49M3
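    The new coefficient itself is not given in the abstract; the generic CG iteration it plugs into, with exact line search, is

        $$x_{k+1} = x_k + \alpha_k d_k, \qquad d_k = -g_k + \beta_k d_{k-1}, \quad d_0 = -g_0, \qquad \alpha_k = \arg\min_{\alpha > 0} f(x_k + \alpha d_k).$$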