88 research outputs found

    Numerical Comparison of Line Search Criteria in Nonlinear Conjugate Gradient Algorithms

    One of the open problems known to researchers applying nonlinear conjugate gradient methods to unconstrained optimization problems is the influence of the accuracy of the line search procedure on the performance of the conjugate gradient algorithm. Key to any CG algorithm is the computation of an optimal step size, for which many procedures have been postulated. In this paper, we assess and compare the performance of modified Armijo and Wolfe line search procedures on three variants of nonlinear CGM by carrying out numerical tests. Experiments reveal that our modified procedure and the strong Wolfe procedure guarantee fast convergence.
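
    The two line search criteria compared above can be sketched in a few lines. The following is a minimal illustration, not the authors' exact modified procedure: a standard backtracking search that enforces the Armijo (sufficient decrease) condition, plus a check of the textbook strong Wolfe conditions at a given step length. The constants c1, c2 and the test function are illustrative choices.

```python
import numpy as np

def backtracking_armijo(f, grad, x, d, alpha0=1.0, c1=1e-4, rho=0.5, max_iter=50):
    """Backtrack from alpha0 until the Armijo condition holds:
    f(x + alpha*d) <= f(x) + c1 * alpha * grad(x).d  (d a descent direction)."""
    fx = f(x)
    slope = grad(x) @ d          # directional derivative; negative for descent
    alpha = alpha0
    for _ in range(max_iter):
        if f(x + alpha * d) <= fx + c1 * alpha * slope:
            break
        alpha *= rho             # shrink the step and try again
    return alpha

def satisfies_strong_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.1):
    """Check the strong Wolfe conditions (sufficient decrease + bounded
    absolute curvature) at step length alpha."""
    fx = f(x)
    slope = grad(x) @ d
    sufficient_decrease = f(x + alpha * d) <= fx + c1 * alpha * slope
    curvature = abs(grad(x + alpha * d) @ d) <= c2 * abs(slope)
    return sufficient_decrease and curvature

# Usage on a simple quadratic f(x) = 0.5*||x||^2 with a steepest-descent direction
f = lambda x: 0.5 * x @ x
grad = lambda x: x
x = np.array([2.0, -1.0])
d = -grad(x)
alpha = backtracking_armijo(f, grad, x, d)
```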

    Solving Optimal Control Problem of Monodomain Model Using Hybrid Conjugate Gradient Methods

    We present the numerical solutions for the PDE-constrained optimization problem arising in cardiac electrophysiology, that is, the optimal control problem of the monodomain model. The optimal control problem of the monodomain model is a nonlinear optimization problem that is constrained by the monodomain model. The monodomain model consists of a parabolic partial differential equation coupled to a system of nonlinear ordinary differential equations, and it has been widely used for simulating cardiac electrical activity. Our control objective is to dampen the excitation wavefront using an optimal applied extracellular current. Two hybrid conjugate gradient methods are employed for computing the optimal applied extracellular current, namely, the Hestenes-Stiefel-Dai-Yuan (HS-DY) method and the Liu-Storey-Conjugate-Descent (LS-CD) method. Our experimental results show that the excitation wavefronts are successfully dampened out when these methods are used. They also show that the hybrid conjugate gradient methods are superior to the classical conjugate gradient methods when the Armijo line search is used.
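
    As a rough sketch of the first hybrid above (not the paper's PDE-constrained solver): the commonly used HS-DY hybridization takes beta = max(0, min(beta_HS, beta_DY)) with y = g_new - g_old, embedded here in a generic nonlinear CG loop with a backtracking Armijo search. The quadratic test problem and all constants are illustrative assumptions.

```python
import numpy as np

def beta_hs_dy(g_new, g_old, d):
    """Hybrid Hestenes-Stiefel / Dai-Yuan CG parameter:
    beta = max(0, min(beta_HS, beta_DY)), y = g_new - g_old."""
    y = g_new - g_old
    dy = d @ y
    return max(0.0, min((g_new @ y) / dy, (g_new @ g_new) / dy))

def cg_minimize(f, grad, x0, tol=1e-6, max_iter=1000):
    """Nonlinear CG with the hybrid HS-DY parameter and an Armijo backtracking search."""
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha, fx, slope = 1.0, f(x), g @ d
        for _ in range(50):                       # backtracking Armijo search
            if f(x + alpha * d) <= fx + 1e-4 * alpha * slope:
                break
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        d = -g_new + beta_hs_dy(g_new, g, d) * d  # hybrid direction update
        x, g = x_new, g_new
    return x

# Usage: an ill-conditioned quadratic f(x) = 0.5*(x1^2 + 10*x2^2)
f = lambda x: 0.5 * (x[0] ** 2 + 10 * x[1] ** 2)
grad = lambda x: np.array([x[0], 10 * x[1]])
x_min = cg_minimize(f, grad, np.array([1.0, 1.0]))
```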

    Unconstrained Optimization Methods: Conjugate Gradient Methods and Trust-Region Methods

    Here, we consider two important classes of unconstrained optimization methods: conjugate gradient methods and trust region methods. These two classes of methods are very interesting; it seems that they are never out of date. First, we consider conjugate gradient methods, and we illustrate the practical behavior of some of them. Then, we study trust region methods. Considering these two classes of methods, we analyze some recent results.
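
    A core building block of the trust-region class surveyed here is the Cauchy point: the minimizer of the quadratic model m(p) = g.p + 0.5*p.B.p along the steepest-descent direction, restricted to ||p|| <= delta. This sketch follows the standard textbook formula; it is an illustration of the idea, not code from the survey.

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Cauchy point for the trust-region subproblem:
    minimize g.p + 0.5*p.B.p along -g subject to ||p|| <= delta."""
    gnorm = np.linalg.norm(g)
    gBg = g @ B @ g
    if gBg <= 0:
        tau = 1.0  # model decreases without bound along -g: step to the boundary
    else:
        tau = min(1.0, gnorm ** 3 / (delta * gBg))
    return -tau * (delta / gnorm) * g

# Usage: with B = I and a generous radius, the Cauchy point is the
# unconstrained minimizer along -g, i.e. p = -g here.
p = cauchy_point(np.array([1.0, 0.0]), np.eye(2), 2.0)
```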

    Modification of Nonlinear Conjugate Gradient Method with Weak Wolfe-Powell Line Search

    The conjugate gradient (CG) method is used to find the optimal solution for large-scale unconstrained optimization problems. Owing to its simple algorithm, low memory requirement, and the speed with which it obtains a solution, this method is widely used in many fields, such as engineering, computer science, and medical science. In this paper, we modify the CG method to achieve global convergence with various line searches. In addition, it satisfies the sufficient descent condition without any line search. Numerical computations under the weak Wolfe-Powell line search show that the new method is more efficient than other conventional methods.
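
    The weak Wolfe-Powell conditions referenced above differ from the strong variant only in the curvature test, which drops the absolute value. The following check follows the standard definitions (0 < c1 < c2 < 1); it is a generic sketch, not the paper's modified method.

```python
import numpy as np

def satisfies_weak_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Weak Wolfe-Powell conditions at step length alpha along a
    descent direction d:
      f(x + alpha*d) <= f(x) + c1*alpha*g.d   (sufficient decrease)
      grad(x + alpha*d).d >= c2*g.d           (curvature, one-sided)"""
    slope = grad(x) @ d
    sufficient_decrease = f(x + alpha * d) <= f(x) + c1 * alpha * slope
    curvature = grad(x + alpha * d) @ d >= c2 * slope
    return sufficient_decrease and curvature

# Usage: on f(x) = 0.5*||x||^2, a full step along -x lands at the minimizer
# and satisfies both conditions; a tiny step fails the curvature test.
f = lambda x: 0.5 * x @ x
grad = lambda x: x
x = np.array([2.0, -1.0])
```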

    A New CG-Algorithm with Self-Scaling VM-Update for Unconstraint Optimization

    In this paper, a new combined extended Conjugate-Gradient (CG) and Variable-Metric (VM) method is proposed for solving unconstrained large-scale numerical optimization problems. The basic idea is to choose a combination of the current gradient and some previous search directions as a new search direction, updated by Al-Bayati's SCVM-method, to fit a new step-size parameter using Armijo Inexact Line Searches (ILS). This method is based on the ILS, and its numerical properties are discussed using different nonlinear test functions with various dimensions. The global convergence property of the new algorithm is investigated under a few weak conditions. Numerical experiments show that the new algorithm converges faster and is superior to some other similar methods in many situations.