Abstract. In this work, an efficient training algorithm for feedforward neural networks is presented. It is based on a scaled version of the conjugate gradient method suggested by Perry, which employs the spectral steplength of Barzilai and Borwein; this steplength incorporates second-order information without estimating the Hessian matrix. The learning rate is adapted automatically at each epoch, using the conjugate gradient values and the learning rate of the previous epoch. In addition, a new acceptability criterion for the learning rate, based on non-monotone Wolfe conditions, is utilized. The efficiency of the training algorithm is demonstrated on standard test problems, including the XOR, 3-bit parity, font-learning, and function-approximation problems.
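For context, the spectral steplength of Barzilai and Borwein referenced above is usually given by the following standard formulation (the scaled variant used in the algorithm itself may differ in detail):
\[
\lambda_k = \frac{s_{k-1}^{\top} s_{k-1}}{s_{k-1}^{\top} y_{k-1}},
\qquad
s_{k-1} = w_k - w_{k-1},
\quad
y_{k-1} = \nabla E(w_k) - \nabla E(w_{k-1}),
\]
where \(w_k\) denotes the weight vector and \(E\) the error function at epoch \(k\). The scalar matrix \(\lambda_k I\) approximately satisfies the secant condition \(\lambda_k^{-1} s_{k-1} \approx y_{k-1}\), which is how the steplength carries second-order (curvature) information without ever forming or estimating the Hessian.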