Training feedforward neural networks using orthogonal iteration of the Hessian eigenvectors

By Andrew Hunter

Abstract

Introduction

Training algorithms for Multilayer Perceptrons optimize the set of W weights and biases, w, so as to minimize an error function, E, applied to a set of N training patterns. The well-known back-propagation algorithm combines an efficient method of estimating the gradient of the error function in weight space, ∇E = g, with a simple gradient descent procedure to adjust the weights, Δw = −ηg. More efficient algorithms retain the gradient estimation procedure but replace the update step with a faster non-linear optimization strategy [1].

Efficient non-linear optimization algorithms are based upon second-order approximation [2]. When sufficiently close to a minimum, the error surface is approximately quadratic, its shape being determined by the Hessian matrix. Bishop [1] presents a detailed discussion of the properties and significance of the Hessian matrix. In principle, if sufficiently close to a minimum, it is possible to move directly to the minimum using the Newton step, −H⁻¹g. In practice, the Newton step is not used, as H⁻¹ is very expensive to evaluate; in addition, when not sufficiently close to a minimum, the Newton step may be disastrously poor. Second-order algorithms either build up an approximation to H⁻¹, or construct a search strategy that implicitly exploits its structure without evaluating it; they also either take precautions to prevent steps that lead to a deterioration in error, or explicitly reject such steps.

In applying non-linear optimization algorithms to neural networks, a key consideration is the high-dimensional nature of the search space. Neural networks with thousands of weights are not uncommon. Some algorithms have O(W²) or O(W³) memory or execution costs, and are hence impracticable in such cases. It is desirable to identify algorithms that have limited memory requirements, particularly algorithms where one may trade memory usage against convergence speed.

The paper describes a new training algorithm with scalable memory requirements, which may range from O(W) to O(W²), although in practice the useful range is limited to the lower complexity levels. The algorithm is based upon a novel iterative estimation of the principal eigen-subspace of the Hessian, together with a quadratic step estimation procedure.

It is shown that the new algorithm has convergence time comparable to conjugate gradient descent, and may be preferable if early stopping is used, as it converges more quickly during the initial phases.

Section 2 overviews the principles of second-order training algorithms. Section 3 introduces the new algorithm. Section 4 discusses experiments that confirm the algorithm's performance; Section 5 concludes the paper.
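As a rough illustration of the kind of procedure the abstract outlines, the sketch below estimates the principal eigen-subspace of the Hessian by orthogonal (subspace) iteration, using only Hessian-vector products so the full W × W Hessian is never formed; such products can be computed exactly as in the "Fast exact multiplication by the Hessian" reference cited below. This is a minimal sketch under assumed names and parameters (hvp, k, iters), not the paper's algorithm, which additionally couples the estimated subspace to a quadratic step estimation procedure.

# Hedged sketch: orthogonal (subspace) iteration for the leading k Hessian
# eigenvectors, using only Hessian-vector products. Names and defaults are
# illustrative assumptions, not details taken from the paper.
import numpy as np

def principal_eigen_subspace(hvp, num_weights, k, iters=200, seed=0):
    """Estimate the top-k Hessian eigenvectors and eigenvalues.

    hvp: callable v -> H @ v (e.g. an exact Hessian-vector product).
    Memory cost is O(k*W), so k trades memory against fidelity.
    """
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((num_weights, k))
    V, _ = np.linalg.qr(V)                                      # orthonormal start
    for _ in range(iters):
        Z = np.column_stack([hvp(V[:, j]) for j in range(k)])   # Z = H V
        V, _ = np.linalg.qr(Z)                                  # re-orthogonalise
    # Rayleigh quotients give eigenvalue estimates for the converged basis.
    eigvals = np.array([V[:, j] @ hvp(V[:, j]) for j in range(k)])
    return V, eigvals

if __name__ == "__main__":
    # Toy check with an explicit symmetric matrix standing in for the Hessian.
    rng = np.random.default_rng(1)
    W = 200
    A = rng.standard_normal((W, W))
    H = (A + A.T) / 2
    V, lam = principal_eigen_subspace(lambda v: H @ v, W, k=5)
    print(np.sort(lam)[::-1])

Because the weight vectors of practical networks can run to thousands of components, keeping only k basis vectors of length W (rather than the Hessian itself) is what makes the memory usage scalable between O(W) and O(W²).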

Topics: G730 Neural Computing
Year: 2000
DOI identifier: 10.1109/IJCNN.2000.857893
OAI identifier: oai:eprints.lincoln.ac.uk:1901

Citations

  1. (1989). Classification of radar returns from the ionosphere using neural networks.
  2. (1994). Elementary Linear Algebra, 7th Edition.
  3. (1988). Faster-learning variations on back-propagation: an empirical study.
  4. (1994). Fast exact multiplication by the Hessian. Neural Computation.
  5. (1995). Neural Networks for Pattern Recognition.
  6. (1992). Numerical Recipes in C: The Art of Scientific Computing (Second ed.).
  7. (1996). Partial BFGS Update and Efficient Step-Length Calculation for Three-Layer Neural Networks.
  8. (1997). Second-Order Methods for Neural Networks.
