
    Solving Unconstrained Optimization Problems by a New Conjugate Gradient Method with Sufficient Descent Property

    There have been conjugate gradient methods with strong convergence but numerical instability, and conversely. Improving such methods is an attractive way to produce new methods with both strong convergence and numerical stability. In this paper, a new hybrid conjugate gradient method is introduced, based on the Fletcher formula (CD), which has strong convergence, and the Liu and Storey formula (LS), which has good numerical performance. The new directions satisfy the sufficient descent property independently of the line search. Under some mild assumptions, the global convergence of the new hybrid method is proved. Numerical results on unconstrained CUTEst test problems show that the new algorithm is very robust and efficient.
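    As a sketch of how such a hybrid can be assembled, the following Python snippet runs a nonlinear CG loop in which beta_k is a simple convex combination of the Fletcher (CD) and Liu-Storey (LS) values. The combination weight theta, the Armijo backtracking line search, the descent safeguard, and the Rosenbrock test function are illustrative assumptions; the abstract does not give the paper's actual hybridization rule or its sufficient-descent mechanism.

    import numpy as np

    def hybrid_cg(f, grad, x0, theta=0.5, tol=1e-6, max_iter=2000):
        # Minimize f by nonlinear CG with a hybrid CD/LS beta (assumed rule).
        x = x0.astype(float).copy()
        g = grad(x)
        d = -g                                   # first step: steepest descent
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            # Backtracking Armijo line search.
            alpha, c, rho = 1.0, 1e-4, 0.5
            while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
                alpha *= rho
            x_new = x + alpha * d
            g_new = grad(x_new)
            denom = -(d @ g)                     # shared CD/LS denominator, > 0
            beta_cd = (g_new @ g_new) / denom            # Fletcher (CD)
            beta_ls = (g_new @ (g_new - g)) / denom      # Liu-Storey (LS)
            beta = (1 - theta) * beta_cd + theta * beta_ls  # assumed hybrid rule
            d = -g_new + beta * d
            if d @ g_new >= 0:                   # safeguard: keep d a descent direction
                d = -g_new
            x, g = x_new, g_new
        return x

    # Usage: the 2-D Rosenbrock function.
    f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                               200 * (x[1] - x[0]**2)])
    print(hybrid_cg(f, grad, np.array([-1.2, 1.0])))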

    Modified parameter of Dai Liao conjugacy condition of the conjugate gradient method

    The conjugate gradient (CG) method is widely used for solving nonlinear unconstrained optimization problems because it requires little memory to implement. In this paper, we propose a new parameter for the Dai-Liao conjugacy condition of the CG method with the restart property; the parameter depends on the Lipschitz constant and is related to the Hestenes-Stiefel method. The proposed method satisfies the descent condition and global convergence properties for convex and non-convex functions. In the numerical experiments, we compare the new method with CG_Descent on more than 200 functions from the CUTEst library. The comparison results show that the new method outperforms CG_Descent in terms of CPU time, number of iterations, number of gradient evaluations, and number of function evaluations.
    Comment: 20 pages, 4 figures
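    For context, the Dai-Liao conjugacy condition d_k^T y_{k-1} = -t s_{k-1}^T g_k leads to the update parameter beta_k = (g_k^T y_{k-1} - t g_k^T s_{k-1}) / (d_{k-1}^T y_{k-1}), where y_{k-1} = g_k - g_{k-1} and s_{k-1} = x_k - x_{k-1}. A minimal Python sketch of this classical beta follows; the placeholder t = 1.0 and the absence of the restart rule are assumptions, since the abstract does not state the proposed Lipschitz-based parameter.

    import numpy as np

    def beta_dai_liao(g_new, g_old, d_old, s_old, t=1.0):
        # beta_k = (g_k^T y - t g_k^T s) / (d_{k-1}^T y), with y = g_k - g_{k-1}.
        # t = 1.0 is a generic placeholder, not the paper's proposed parameter.
        y = g_new - g_old
        return (g_new @ y - t * (g_new @ s_old)) / (d_old @ y)

    # Direction update using this beta (s_old = x_k - x_{k-1}):
    #   d_new = -g_new + beta_dai_liao(g_new, g_old, d_old, s_old) * d_old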

    The Mini-batch Stochastic Conjugate Algorithms with the unbiasedness and Minimized Variance Reduction

    We first propose a new stochastic gradient estimate that is unbiased and has minimized variance. Second, we propose two algorithms, Algorithm 1 and Algorithm 2, which apply the new stochastic gradient estimate to the modern stochastic conjugate gradient algorithms SCGA [7] and CGVR [8]. We then prove that the proposed algorithms attain a linear convergence rate under assumptions of strong convexity and smoothness. Finally, numerical experiments show that the new stochastic gradient estimate reduces the variance of the stochastic gradient effectively, and that, compared with SCGA and CGVR, our algorithms converge faster in experiments on a ridge regression model.
    Comment: 17 pages, 3 figures
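    For orientation, the sketch below runs one CGVR-style epoch using an SVRG-type snapshot estimate, which is unbiased in the sense that its expectation over the mini-batch equals the full gradient at the current iterate. The PR-type beta, the fixed step size, and the ridge regression setup are assumptions for illustration; the abstract specifies neither the paper's new minimized-variance estimate nor the exact update rules of SCGA and CGVR.

    import numpy as np

    def svrg_estimate(grad_batch, x, snapshot, mu, batch):
        # Unbiased: its expectation over the mini-batch is the full gradient at x.
        return grad_batch(x, batch) - grad_batch(snapshot, batch) + mu

    def cgvr_epoch(grad_batch, full_grad, x, n, m=50, batch_size=16, alpha=0.05, seed=0):
        rng = np.random.default_rng(seed)
        snapshot = x.copy()
        mu = full_grad(snapshot)               # full gradient at the snapshot
        g_prev, d = mu, -mu                    # start along steepest descent
        for _ in range(m):
            batch = rng.choice(n, size=batch_size, replace=False)
            g = svrg_estimate(grad_batch, x, snapshot, mu, batch)
            beta = (g @ (g - g_prev)) / (g_prev @ g_prev)  # PR-type beta (assumed)
            d = -g + beta * d
            x = x + alpha * d                  # fixed step stands in for a line search
            g_prev = g
        return x

    # Usage on a ridge regression model (the abstract's test problem).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 10))
    y = X @ np.ones(10) + 0.1 * rng.normal(size=200)
    lam = 0.1
    full_grad = lambda w: 2 * X.T @ (X @ w - y) / len(y) + 2 * lam * w
    grad_batch = lambda w, b: 2 * X[b].T @ (X[b] @ w - y[b]) / len(b) + 2 * lam * w
    w = np.zeros(10)
    for _ in range(10):                        # a few variance-reduction epochs
        w = cgvr_epoch(grad_batch, full_grad, w, n=200)
    print(w)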