Pruning back propagation neural networks using modern stochastic optimization techniques

Abstract

Approaches combining genetic algorithms and neural networks have received a great deal of attention in recent years. As a result, much work has been reported in two major areas of neural network design: training and topology optimisation. This paper focuses on the key issues associated with the problem of pruning a multilayer perceptron using genetic algorithms and simulated annealing. The study considers a number of aspects of network training that may alter the behaviour of a stochastic topology optimiser, and discusses enhancements that can improve topology searches. Simulation results for the two stochastic optimisation methods applied to non-linear system identification are presented and compared with a simple random search.
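The pruning problem described above can be cast as a search over binary masks, where each bit marks a hidden unit as kept or pruned and the objective trades off model error against network size. The paper's actual encoding, fitness function, and operators are not given here; the following is a minimal self-contained sketch of that idea using a genetic algorithm with a purely illustrative toy fitness (the "essential units" and the 0.1 complexity weight are assumptions, not the authors' method).

```python
import random

random.seed(0)

N_UNITS = 16      # hidden units in the hypothetical network
POP_SIZE = 20     # candidate pruning masks per generation
GENERATIONS = 40
MUT_RATE = 0.05   # per-bit mutation probability

def fitness(mask):
    # Toy objective (assumption, for illustration only): pretend units
    # 0-5 are essential, so pruning any of them raises the "error" term,
    # while every active unit adds a small complexity penalty -- the
    # same error-vs-size trade-off a real pruning fitness would encode.
    essential = range(6)
    error = sum(1 for i in essential if mask[i] == 0)
    complexity = sum(mask)
    return error + 0.1 * complexity  # lower is better

def crossover(a, b):
    # Single-point crossover on the bitstring encoding.
    point = random.randrange(1, N_UNITS)
    return a[:point] + b[point:]

def mutate(mask):
    # Flip each bit independently with probability MUT_RATE.
    return [bit ^ 1 if random.random() < MUT_RATE else bit
            for bit in mask]

def evolve():
    pop = [[random.randint(0, 1) for _ in range(N_UNITS)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness)
        survivors = pop[:POP_SIZE // 2]  # elitist truncation selection
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()  # mask keeping the "essential" units, pruning the rest
```

A simulated-annealing variant of the same search would instead perturb a single mask (e.g. flip one bit per step) and accept worse masks with a probability that decays with temperature; the encoding and objective stay identical, which is what makes the two optimisers directly comparable.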


This paper was published in Southampton (e-Prints Soton).
