1 research output found

    An efficient generalization of Battiti-Shanno's quasi-Newton algorithm for learning in MLP-networks

    This paper presents a novel quasi-Newton method for the minimization of the error function of a feed-forward neural network. The method is a generalization of Battiti's well-known OSS (one-step secant) algorithm. The proposed approach aims at a significant improvement both in computational effort and in the ability to locate the global minimum of the error function. The technique described in this work is founded on the innovative concept of a "convex algorithm", introduced in order to avoid possible entrapment in local minima. Convergence results as well as numerical experiments are presented.
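
    The abstract assumes familiarity with Battiti's OSS method, which the paper generalizes. As background only, and not a reproduction of the paper's proposed generalization, the sketch below shows the standard one-step secant direction, i.e. a memoryless BFGS update, applied to a hypothetical convex quadratic; the function names and the test problem are illustrative assumptions.

    ```python
    import numpy as np

    def oss_direction(g_new, s, y):
        """One-step secant (memoryless BFGS) search direction.

        g_new : gradient at the new iterate
        s     : previous step,   x_new - x_old
        y     : gradient change, g_new - g_old
        """
        sy = s @ y
        if sy <= 1e-12:
            # Curvature condition fails: fall back to steepest descent.
            return -g_new
        A = (y @ g_new) / sy - (1.0 + (y @ y) / sy) * (s @ g_new) / sy
        B = (s @ g_new) / sy
        return -g_new + A * s + B * y

    # Hypothetical test problem: minimize 0.5 * x'Qx - b'x.
    Q = np.diag([1.0, 10.0, 100.0])
    b = np.array([1.0, 1.0, 1.0])

    def f(x):
        return 0.5 * x @ Q @ x - b @ x

    def grad(x):
        return Q @ x - b

    x = np.zeros(3)
    g = grad(x)
    d = -g                       # first direction: steepest descent
    for _ in range(50):
        # Backtracking line search satisfying the Armijo condition.
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        s = t * d
        x, g_old = x + s, g
        g = grad(x)
        d = oss_direction(g, s, g - g_old)

    print(x)  # approaches the minimizer of the quadratic, Q^{-1} b
    ```

    Because the direction is a rank-two correction of the negative gradient built only from the last step, OSS stores O(n) vectors rather than an n-by-n Hessian approximation, which is what makes it attractive for MLP training.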
