Towards a theoretical basis for modelling of hidden layer architecture in artificial neural networks

Abstract

Artificial neural networks (ANNs) are mathematical and computational models inspired by biological neural systems. Just as biological neural networks become experts by learning from their surroundings, ANNs can become expert in a particular area through training. Despite their many advantages, some problems in applying artificial neural networks remain unsolved; determining the most efficient architecture for a given task is identified as one of the major issues. This paper provides a pruning algorithm, based on the backpropagation training algorithm, for obtaining an optimal ANN architecture. The pruning is modelled on synaptic pruning in biological neural systems. Experiments were carried out on some well-known problems in machine learning and artificial neural networks, and the results show that the new model performs better than the initial network on the training data sets.
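The abstract does not specify the pruning criterion, so the following is only a minimal sketch of the general approach it describes: train a small network with backpropagation, then remove (zero out) hidden-layer connections, here using a simple magnitude-based rule as a hypothetical stand-in for the paper's synaptic-pruning criterion. The network size, learning rate, and 25% pruning fraction are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set (XOR), chosen only for illustration.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units (an assumed architecture).
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Standard full-batch backpropagation training.
lr = 1.0
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # output-layer delta
    d_h = (d_out @ W2.T) * h * (1 - h)       # hidden-layer delta
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

# Prune: zero out the 25% smallest-magnitude input-to-hidden weights,
# mimicking the removal of weak synapses after training.
threshold = np.quantile(np.abs(W1), 0.25)
mask = np.abs(W1) >= threshold
W1 *= mask

h = sigmoid(X @ W1 + b1)
pred = (sigmoid(h @ W2 + b2) > 0.5).astype(float)
accuracy = (pred == y).mean()
print("weights pruned:", int((~mask).sum()), "train accuracy:", accuracy)
```

In practice such methods typically retrain the pruned network for a few epochs to recover any lost accuracy before evaluating the smaller architecture.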