
    Toward a More Robust Pruning Procedure for MLP Networks

    Choosing a proper neural network architecture is a problem of great practical importance. Smaller models mean not only simpler designs but also lower variance in parameter estimation and network prediction. The widespread use of neural networks in modeling also raises a human-factors issue: the procedure for building neural models should find an appropriate level of model complexity more or less automatically, making it less prone to human subjectivity. In this paper we present a Singular Value Decomposition based node-elimination technique and an enhanced implementation of the Optimal Brain Surgeon algorithm. Combining both methods creates a powerful pruning engine for tuning feedforward connectionist models. The performance of the proposed method is demonstrated by adjusting the structure of a multi-input multi-output model used to calibrate a six-component wind tunnel strain gage.
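The Optimal Brain Surgeon step named above can be sketched as follows. This is a minimal illustrative fragment, not the paper's enhanced implementation (its SVD node elimination is omitted); the weight vector `w` and inverse Hessian `H_inv` are assumed to come from an already-trained network:

```python
import numpy as np

def obs_prune_step(w, H_inv):
    """Remove the weight with the smallest OBS saliency.

    Saliency of weight q:  L_q = w_q^2 / (2 * H_inv[q, q])
    Adjustment of the rest: dw = -(w_q / H_inv[q, q]) * H_inv[:, q]
    """
    saliency = w**2 / (2.0 * np.diag(H_inv))
    q = int(np.argmin(saliency))              # least important weight
    dw = -(w[q] / H_inv[q, q]) * H_inv[:, q]  # compensate remaining weights
    w_new = w + dw
    w_new[q] = 0.0                            # enforce exact removal
    return w_new, q, saliency[q]

# Toy example: three weights, identity inverse Hessian.
w = np.array([0.5, -0.01, 1.2])
H_inv = np.eye(3)
w_new, q, s = obs_prune_step(w, H_inv)
# The near-zero weight (index 1) has the smallest saliency and is pruned.
```

In practice the step is iterated, pruning one weight at a time and recomputing (or rank-one updating) the inverse Hessian, until the validation error begins to degrade.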

    Pruning back propagation neural networks using modern stochastic optimization techniques

    Approaches combining genetic algorithms and neural networks have received a great deal of attention in recent years. As a result, much work has been reported in two major areas of neural network design: training and topology optimisation. This paper focuses on the key issues associated with pruning a multilayer perceptron using genetic algorithms and simulated annealing. The study considers a number of aspects of network training that may alter the behaviour of a stochastic topology optimiser, and discusses enhancements that can improve topology searches. Simulation results for the two stochastic optimisation methods applied to non-linear system identification are presented and compared with a simple random search.
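The simulated-annealing side of such a topology search can be sketched as a walk over a binary connection mask. This is a hedged toy stand-in: the `importance` scores, complexity penalty, and geometric cooling schedule are illustrative assumptions, not the paper's actual setup:

```python
import math
import random

def fitness(mask, importance):
    # Error from dropping important connections plus a complexity penalty.
    error = sum(imp for m, imp in zip(mask, importance) if m == 0)
    complexity = 0.1 * sum(mask)
    return error + complexity

def anneal(importance, steps=2000, t0=1.0, cooling=0.995, seed=0):
    rng = random.Random(seed)
    n = len(importance)
    mask = [1] * n                      # start fully connected
    best = list(mask)
    t = t0
    for _ in range(steps):
        cand = list(mask)
        cand[rng.randrange(n)] ^= 1     # flip one connection on/off
        delta = fitness(cand, importance) - fitness(mask, importance)
        # Accept improvements always, degradations with Boltzmann probability.
        if delta < 0 or rng.random() < math.exp(-delta / max(t, 1e-9)):
            mask = cand
        if fitness(mask, importance) < fitness(best, importance):
            best = list(mask)
        t *= cooling
    return best

# Connections 0 and 3 matter; the others cost more to keep than to drop.
importance = [1.0, 0.01, 0.02, 1.5, 0.0]
best = anneal(importance)
```

The annealer should retain the two high-importance connections and prune those whose contribution is smaller than the per-connection complexity penalty; a genetic algorithm would attack the same mask with a population, crossover, and mutation instead of a single cooling trajectory.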

    Topology Design of Feedforward Neural Networks By Genetic Algorithms

    For many applications feedforward neural networks have proved to be a valuable tool. Although the basic principles of employing such networks are quite straightforward, tuning their architectures to achieve near-optimal performance remains a very challenging task. Genetic algorithms may be used to solve this problem, since they have a number of distinct features that are useful in this context. First, the approach is quite universal and can be applied to many different types of neural networks or training criteria. It also allows network topologies to be optimized at various levels of detail and can be used with many types of energy function, even those that are discontinuous or non-differentiable. Finally, a genetic algorithm need not be limited to simply adjusting patterns of connections but can, for example, be utilized to select node transfer functions or weight values, or to find architectures that perform best under certain simulated working conditions. In thi…
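The point about discontinuous, non-differentiable criteria can be illustrated with a small genetic algorithm over a topology bitmask. Everything here is a hypothetical sketch: the genome encoding (which of 8 candidate hidden units to keep), the match-counting fitness, and the operator settings are assumptions, not the paper's method:

```python
import random

N_UNITS = 8
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical unknown "best" topology

def fitness(genome):
    # Deliberately discontinuous score (no gradient): count of matches
    # with the unknown optimum. A GA needs only these comparisons.
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def evolve(pop_size=30, generations=60, p_mut=0.05, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(N_UNITS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N_UNITS)     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (rng.random() < p_mut) for g in child]  # mutation
            children.append(child)
        pop = parents + children                # elitist survivors
    return max(pop, key=fitness)

best = evolve()
```

Because selection only ever compares fitness values, the same loop works unchanged if the fitness instead trains a candidate network and returns its validation error, or scores node transfer functions, weight values, or behaviour under simulated working conditions, as the abstract suggests.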