
    Sign-methods for training with imprecise error function and gradient values

    We present nonmonotone methods for feedforward neural network training, i.e., training methods in which error function values are allowed to increase at some iterations. More specifically, at each epoch we require that the current error function value satisfy an Armijo-type criterion with respect to the maximum error function value of the M previous epochs. A strategy to dynamically adapt M is suggested, and two training algorithms with adaptive learning rates that successfully employ this acceptability criterion are proposed. Experimental results show that the nonmonotone learning strategy improves both the convergence speed and the success rate of the methods considered.
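The nonmonotone acceptance test described in the abstract can be sketched in code. The following is a minimal illustration based only on the abstract, not the authors' implementation: the sign-based update direction, the function and parameter names (`error_fn`, `grad_fn`, `M`, `sigma`, the backtracking factor), and the dynamic-M strategy being omitted are all assumptions made for demonstration.

```python
import numpy as np

def nonmonotone_sign_step(params, error_fn, grad_fn, history, M=10,
                          lr=0.1, sigma=1e-4, shrink=0.5, max_backtracks=20):
    """One epoch with a nonmonotone Armijo-type acceptability test.

    `history` holds recent error values; the acceptance reference is the
    maximum over the last M entries, so the error may rise at some epochs
    as long as it stays below that running maximum (hypothetical sketch).
    """
    g = grad_fn(params)
    direction = -np.sign(g)            # sign-based descent direction (assumed)
    reference = max(history[-M:])      # max error of the M previous epochs
    step = lr
    for _ in range(max_backtracks):
        trial = params + step * direction
        e_trial = error_fn(trial)
        # Armijo-type criterion against the nonmonotone reference value
        if e_trial <= reference + sigma * step * float(np.dot(g, direction)):
            history.append(e_trial)
            return trial, step
        step *= shrink                 # backtrack the learning rate
    history.append(error_fn(params))   # no acceptable step found; stay put
    return params, step

# Usage on a toy quadratic "error function":
error_fn = lambda w: float(np.sum(w ** 2))
grad_fn = lambda w: 2.0 * w
w = np.array([3.0, -2.0])
hist = [error_fn(w)]
for epoch in range(50):
    w, _ = nonmonotone_sign_step(w, error_fn, grad_fn, hist)
```

Because the reference value is a running maximum rather than the previous error alone, the criterion accepts steps that a strictly monotone Armijo test would reject, which is the mechanism the abstract credits for the improved convergence speed.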