5 research outputs found

    A new conjugate gradient method based on the modified secant equations

    Based on the modified secant equations proposed by Zhang, Deng and Chen, we propose a new nonlinear conjugate gradient method for unconstrained optimization problems. Global convergence of the method is established under suitable conditions.
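    For reference, the modified secant equation of Zhang, Deng and Chen is commonly stated in the form below; this is a sketch from the general literature, and the exact variant used in this paper may differ.

```latex
B_{k+1} s_k = \tilde{y}_k, \qquad
\tilde{y}_k = y_k + \frac{\theta_k}{s_k^{T} u_k}\, u_k, \qquad
\theta_k = 6\,\bigl(f_k - f_{k+1}\bigr) + 3\,\bigl(g_k + g_{k+1}\bigr)^{T} s_k,
```

    where s_k = x_{k+1} - x_k, y_k = g_{k+1} - g_k, and u_k is any vector with s_k^T u_k ≠ 0. Unlike the classical secant equation B_{k+1} s_k = y_k, this form uses objective function values as well as gradients.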

    A Three-Term Conjugate Gradient Method with Sufficient Descent Property for Unconstrained Optimization

    Conjugate gradient methods are widely used for solving large-scale unconstrained optimization problems because they do not require matrix storage. In this paper, we propose a general form of three-term conjugate gradient methods that always generates a sufficient descent direction. We give a sufficient condition for the global convergence of the proposed general method. Moreover, we present a specific three-term conjugate gradient method based on the multi-step quasi-Newton method. Finally, some numerical results for the proposed method are given; a minimal illustrative sketch of the general three-term form follows.
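    The sketch below runs a nonlinear CG loop with a three-term direction d_{k+1} = -g_{k+1} + beta_k d_k - gamma_k y_k. The Hestenes-Stiefel-like choice of beta_k and gamma_k (which yields g_{k+1}^T d_{k+1} = -||g_{k+1}||^2, i.e. sufficient descent) and the backtracking line search are illustrative assumptions, not the specific method proposed in the paper.

```python
import numpy as np

def three_term_cg(f, grad, x0, max_iter=1000, tol=1e-6):
    """Minimal sketch of a three-term conjugate gradient loop.

    The coefficients below are a Hestenes-Stiefel-like illustrative choice,
    not the particular method of the paper summarized above.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Simple backtracking Armijo line search (placeholder).
        alpha, c, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c * alpha * g.dot(d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        denom = d.dot(y)
        if abs(denom) < 1e-12:
            beta, gamma = 0.0, 0.0        # restart with steepest descent
        else:
            beta = g_new.dot(y) / denom   # HS-like coefficient (assumed)
            gamma = g_new.dot(d) / denom  # third-term coefficient (assumed)
        # Three-term direction: -g + beta*d - gamma*y gives g_new.dot(d) = -||g_new||^2.
        d = -g_new + beta * d - gamma * y
        x, g = x_new, g_new
    return x
```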

    An Advanced Conjugate Gradient Training Algorithm Based on a Modified Secant Equation


    An extended Dai-Liao conjugate gradient method with global convergence for nonconvex functions

    Using an extension of some previously proposed modified secant equations in the Dai-Liao approach, a modified nonlinear conjugate gradient method is proposed. Notably, the method employs objective function values in addition to gradient information and satisfies the sufficient descent property for suitable choices of its parameter. Global convergence of the method is established without a convexity assumption on the objective function. Results of numerical comparisons are reported; they demonstrate the efficiency of the proposed method in the sense of the Dolan-Moré performance profile.
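    For orientation, the standard Dai-Liao scheme replaces the classical conjugacy condition with d_{k+1}^T y_k = -t g_{k+1}^T s_k (t ≥ 0), giving the update below; extended variants like the one described here substitute a modified ỹ_k from a modified secant equation, which is how objective function values enter. This is the textbook Dai-Liao form, not the paper's exact extension.

```latex
\beta_k^{DL} = \frac{g_{k+1}^{T} y_k}{d_k^{T} y_k} - t\,\frac{g_{k+1}^{T} s_k}{d_k^{T} y_k},
\qquad d_{k+1} = -g_{k+1} + \beta_k^{DL}\, d_k .
```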

    Empirical analysis of neural networks training optimisation

    A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science in Mathematical Statistics, School of Statistics and Actuarial Science, October 2016.
    Neural networks (NNs) may be characterised by complex error functions with attributes such as saddle points, local minima, even-spots and plateaus. This complicates the associated training process in terms of efficiency, convergence and accuracy, given that training is done by minimising such complex error functions. This study empirically investigates the performance of two NN training algorithms based on unconstrained and global optimisation theories, i.e. Resilient propagation (Rprop) and the Conjugate Gradient method with Polak-Ribière updates (CGP). It also shows how the network structure plays a role in the training optimisation of NNs. In this regard, various training scenarios are used to classify two protein datasets, i.e. the Escherichia coli and Yeast data. These training scenarios use varying numbers of hidden nodes and training iterations. The results show that Rprop outperforms CGP. Moreover, it appears that the performance of the classifiers varies across training scenarios.
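    For readers unfamiliar with Rprop, the sketch below shows the sign-based, per-weight step-size rule it is built on. It uses the Rprop- variant with commonly quoted default hyperparameters; the dissertation's exact configuration and the CGP implementation are not reproduced here.

```python
import numpy as np

def rprop_minus(grad, w0, n_iter=100, delta0=0.1,
                eta_plus=1.2, eta_minus=0.5,
                delta_max=50.0, delta_min=1e-6):
    """Minimal sketch of the Rprop- update (sign-based, per-weight steps).

    `grad(w)` is assumed to return the gradient of the training error at `w`;
    the hyperparameter defaults are the commonly quoted Rprop settings and
    are illustrative only.
    """
    w = np.asarray(w0, dtype=float)
    delta = np.full_like(w, delta0)   # per-weight step sizes
    g_prev = np.zeros_like(w)
    for _ in range(n_iter):
        g = grad(w)
        sign_change = g * g_prev
        # Grow steps where the gradient sign is stable, shrink where it flips.
        delta = np.where(sign_change > 0,
                         np.minimum(delta * eta_plus, delta_max), delta)
        delta = np.where(sign_change < 0,
                         np.maximum(delta * eta_minus, delta_min), delta)
        # Rprop-: where the sign flipped, skip the update this iteration.
        g_eff = np.where(sign_change < 0, 0.0, g)
        w = w - np.sign(g_eff) * delta
        g_prev = g_eff
    return w
```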