
Improved sign-based learning algorithm derived by the composite nonlinear Jacobi process

By A.D. Anastasiadis, George D. Magoulas and M.N. Vrahatis

Abstract

In this paper, a globally convergent first-order training algorithm is proposed that uses sign-based information of the batch error measure in the framework of the nonlinear Jacobi process. This approach allows us to equip the recently proposed Jacobi–Rprop method with the global convergence property, i.e., convergence to a local minimizer from any initial starting point. We also propose a strategy that ensures that the search direction of the globally convergent Jacobi–Rprop is a descent direction. The behaviour of the algorithm is empirically investigated on eight benchmark problems. Simulation results verify that there are indeed improvements in the convergence success of the algorithm.
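The building block referred to in the abstract is the sign-based (Rprop-style) weight update, in which only the sign of each partial derivative of the batch error is used and the step size is adapted individually per weight. The sketch below illustrates the classic Rprop update rule without weight backtracking; it is an illustration only, not the Jacobi–Rprop variant developed in the paper, and the function name rprop_step and the default parameter values are assumptions made for the example.

    import numpy as np

    def rprop_step(weights, grad, prev_grad, step_sizes,
                   eta_plus=1.2, eta_minus=0.5,
                   step_min=1e-6, step_max=50.0):
        # One sign-based (Rprop-style) update: only the signs of the current
        # and previous batch-gradient components are used, and the step size
        # of every weight is adapted individually.
        sign_change = grad * prev_grad
        # Same sign as in the previous step: increase the step size (up to step_max).
        step_sizes = np.where(sign_change > 0,
                              np.minimum(step_sizes * eta_plus, step_max),
                              step_sizes)
        # Sign flip: a minimum was overstepped, so shrink the step size (down to step_min).
        step_sizes = np.where(sign_change < 0,
                              np.maximum(step_sizes * eta_minus, step_min),
                              step_sizes)
        # Move each weight opposite to the sign of its partial derivative.
        new_weights = weights - np.sign(grad) * step_sizes
        return new_weights, step_sizes

The paper embeds sign-based updates of this kind in the nonlinear Jacobi framework and adds a strategy that guarantees a descent direction, which is what yields the global convergence property claimed above.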

Topics: csis
Publisher: Elsevier
Year: 2006
OAI identifier: oai:eprints.bbk.ac.uk.oai2:501

