    A new three-term conjugate gradient-based projection method for solving large-scale nonlinear monotone equations

    A new three-term conjugate gradient-based projection method is presented in this paper for solving large-scale nonlinear monotone equations. The method is derivative-free and, owing to its low storage requirements, is well suited to large-scale problems. It satisfies the sufficient descent condition $F(x_k)^{\top} d_k \le -c \|F(x_k)\|^2$, where $c > 0$ is a constant, and its global convergence is established. Numerical results show that the method is efficient and promising.
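
    The abstract does not give the paper's specific three-term direction, but the general derivative-free hyperplane-projection framework such methods build on (in the style of Solodov and Svaiter) can be sketched as follows. The Python sketch below uses a simple residual-based direction as a placeholder for the three-term formula, and the function name, tolerance, and line-search constants are illustrative assumptions rather than the paper's choices.

    import numpy as np

    def hyperplane_projection_solver(F, x0, tol=1e-6, max_iter=1000,
                                     rho=0.5, sigma=1e-4):
        """Generic derivative-free hyperplane-projection scheme for a
        monotone system F(x) = 0.  The direction update below is a simple
        residual-based placeholder, not the paper's three-term formula."""
        x = np.asarray(x0, dtype=float)
        Fx = F(x)
        d = -Fx                                    # initial search direction
        for _ in range(max_iter):
            if np.linalg.norm(Fx) <= tol:
                break
            # backtracking line search: find t with -F(x+t*d)^T d >= sigma*t*||d||^2
            t = 1.0
            while True:
                z = x + t * d
                Fz = F(z)
                if -(Fz @ d) >= sigma * t * (d @ d) or t < 1e-12:
                    break
                t *= rho
            if np.linalg.norm(Fz) <= tol:          # trial point already solves F = 0
                return z
            # project x onto the hyperplane {u : F(z)^T (u - z) = 0}
            x_new = x - ((Fz @ (x - z)) / (Fz @ Fz)) * Fz
            Fx_new = F(x_new)
            # placeholder residual-based direction update
            beta = (Fx_new @ Fx_new) / (Fx @ Fx)
            d = -Fx_new + beta * d
            x, Fx = x_new, Fx_new
        return x

    For instance, F(x) = x + sin(x) is monotone, and from a random starting point the sketch should drive the residual norm below the tolerance.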

    Improved Fletcher-Reeves Methods Based on New Scaling Techniques

    This paper introduces a scaling parameter into the Fletcher-Reeves (FR) nonlinear conjugate gradient method. The main aim is to improve its theoretical and numerical properties when applied with inexact line searches to unconstrained optimization problems. We show that the sufficient descent and global convergence properties established by Al-Baali for the FR method with a fairly accurate line search are maintained. We also consider extending this result to less accurate line searches for appropriate values of the scaling parameter. The reported numerical results show that several values of the proposed scaling parameter improve the performance of the FR method significantly.
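
    As a rough illustration only (the paper's actual scaling rule is not given in the abstract), a scaled FR direction update can be written with a generic scaling parameter tau applied to the steepest-descent part; only the classical FR coefficient ||g_{k+1}||^2 / ||g_k||^2 is taken as given.

    import numpy as np

    def scaled_fr_direction(g_new, g_old, d_old, tau=1.0):
        """One Fletcher-Reeves direction update with a generic scaling
        parameter tau on the steepest-descent term; the paper's actual
        scaling may enter differently (this is only an illustrative form)."""
        beta_fr = (g_new @ g_new) / (g_old @ g_old)   # classical FR coefficient
        return -tau * g_new + beta_fr * d_old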

    An extended Dai-Liao conjugate gradient method with global convergence for nonconvex functions

    Using an extension of some previously proposed modified secant equations in the Dai-Liao approach, a modified nonlinear conjugate gradient method is proposed. Notably, the method employs objective function values in addition to gradient information, and it satisfies the sufficient descent property for proper choices of its parameter. Global convergence of the method is established without a convexity assumption on the objective function. Results of numerical comparisons are reported; they demonstrate the efficiency of the proposed method in the sense of the Dolan-Moré performance profile.
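
    A minimal sketch of the Dai-Liao coefficient, with an optional function-value-based modification of the secant vector, is given below. The modification follows one common modified secant equation from the literature; the paper's specific extension may differ, and the max(theta, 0) safeguard and the value t = 0.1 are illustrative assumptions.

    import numpy as np

    def dai_liao_beta(g_new, g_old, s, f_new, f_old, d_old, t=0.1,
                      use_modified_secant=True):
        """Dai-Liao coefficient beta = g_new^T (y - t*s) / (d_old^T y).
        The optional term folds objective values into the secant vector
        (one common modification; not necessarily the paper's)."""
        y = g_new - g_old
        if use_modified_secant:
            # theta = 6*(f_old - f_new) + 3*(g_old + g_new)^T s
            theta = 6.0 * (f_old - f_new) + 3.0 * ((g_old + g_new) @ s)
            y = y + (max(theta, 0.0) / (s @ s)) * s
        return (g_new @ (y - t * s)) / (d_old @ y)

    The next direction is then formed in the usual conjugate gradient way, d_new = -g_new + beta * d_old.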

    Accelerated Methods for α-Weakly-Quasi-Convex Problems

    Many problems encountered in training neural networks are non-convex. However, some of them satisfy conditions that are weaker than convexity yet still sufficient to guarantee the convergence of certain first-order methods. In this work we show that several previously known first-order methods retain their convergence rates under these weaker conditions.
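
    For reference, the condition usually meant by α-weak quasi-convexity with respect to a minimizer $x^\ast$ is, in its standard form,

        \alpha \,\bigl(f(x) - f(x^\ast)\bigr) \;\le\; \langle \nabla f(x),\, x - x^\ast \rangle \quad \text{for all } x, \qquad \alpha \in (0, 1],

    so the gradient at any point still carries first-order information about the gap to the optimum even though $f$ need not be convex. The exact formulation used in the paper may differ slightly.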

    A new, globally convergent Riemannian conjugate gradient method

    This article deals with the conjugate gradient method on a Riemannian manifold, with a focus on global convergence analysis. Existing conjugate gradient algorithms on a manifold endowed with a vector transport require the assumption that the vector transport does not increase the norm of tangent vectors in order to guarantee that the generated sequences have a global convergence property. In this article, the notion of a scaled vector transport is introduced to improve the algorithm so that the generated sequences have a global convergence property under a relaxed assumption. In the proposed algorithm, the transported vector is rescaled whenever its norm has increased during the transport. The global convergence is proved theoretically and observed numerically with examples. In fact, numerical experiments show that there exist minimization problems for which the existing algorithm generates divergent sequences, while the proposed algorithm generates convergent sequences.
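
    The rescaling step described above is simple to state: after transporting a tangent vector, shrink it back whenever its norm has grown. A minimal Python sketch is given below; the manifold object M with an inner(...) method and the transport callable are placeholders for whatever Riemannian toolbox is in use, not an API from the paper.

    import math

    def scaled_transport(transport, M, x, y, v):
        """Transport v from the tangent space at x to the one at y, then
        rescale it so its norm never increases (a 'scaled vector transport').
        `transport` and `M.inner` are placeholder interfaces."""
        w = transport(x, y, v)                   # ordinary vector transport
        nv = math.sqrt(M.inner(x, v, v))         # norm of v at x
        nw = math.sqrt(M.inner(y, w, w))         # norm of the transported vector at y
        if nw > nv:                              # norm grew during transport: scale it back
            w = (nv / nw) * w
        return w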