8 research outputs found

    Mathematical modelling of hidden layer architecture in artificial neural networks

    No full text
    The performance of an Artificial Neural Network (ANN) strongly depends on its hidden layer architecture. The solution generated by an ANN is not guaranteed to have been devised with the simplest network architecture suitable for modelling the particular problem. This increases the computational complexity of training an ANN and of deploying and using the trained network. Therefore, modelling the hidden layer architecture of an ANN remains a research challenge. This thesis presents a theoretically based approach to prune the hidden layers of trained artificial neural networks, ensuring the same or better performance from a simpler network compared with the original network. The method described in the thesis is inspired by the finding from neuroscience that although the human brain contains nearly 100 billion neurons, our activities are performed by much simpler neural networks with far fewer neurons. Furthermore, in biological neural networks, neurons that do not contribute significantly to the performance of the network are naturally discarded. According to neuroplasticity, biological neural networks can also recruit neurons in the proximity of the active network to improve its performance. By the same token, it is hypothesized that for a given complex trained ANN, we can discover an ANN that is much simpler than the original architecture. This research has developed a theory to reduce the number of hidden layers and to eliminate non-contributing neurons from the remaining hidden layers of a given ANN architecture. The procedure begins with a complex neural network architecture trained with the backpropagation algorithm and reaches the optimum solution in two phases. First, the number of hidden layers is determined using a peak search algorithm devised in this research.
The newly discovered simpler network, with fewer hidden layers and the highest generalization power, is then considered for pruning of its hidden neurons. The pruning of neurons in the hidden layers is theorized by identifying the neurons that contribute least to network performance. These neurons are identified by detecting correlations with the minimization of training error. Experiments have shown that the simplified network architecture generated by this approach exhibits the same or better performance compared with the original large network architecture. In general, it removes more than 80% of the neurons while improving generalization by about 30%. As such, the proposed approach can be used to discover a simple network architecture corresponding to a given complex ANN solution. Owing to its architectural simplicity, the new architecture is computationally efficient in training, usage, and further training.
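The second phase above ranks hidden neurons by how strongly their behaviour correlates with the training error. The thesis does not give the exact formula here, so the following is only a minimal sketch of that idea: score each hidden neuron by the absolute correlation between its activations and the per-sample error over a batch, and treat the lowest-scoring neurons as candidates for pruning. The function name and the use of Pearson correlation are illustrative assumptions, not the thesis's exact criterion.

```python
import numpy as np

def rank_neurons_by_error_correlation(hidden_acts, errors):
    """Hypothetical contribution screen.

    hidden_acts: (n_samples, n_neurons) hidden-layer activations.
    errors:      (n_samples,) per-sample training error.
    Returns neuron indices sorted from least to most correlated
    (in absolute value) with the error, i.e. pruning candidates first."""
    corr = np.array([
        np.corrcoef(hidden_acts[:, j], errors)[0, 1]
        for j in range(hidden_acts.shape[1])
    ])
    corr = np.nan_to_num(corr)       # a constant neuron correlates with nothing
    return np.argsort(np.abs(corr))  # least contributing first

# Synthetic check: make neuron 3 the one that drives the error.
rng = np.random.default_rng(0)
acts = rng.normal(size=(200, 5))
errors = 2.0 * acts[:, 3] + 0.1 * rng.normal(size=200)
order = rank_neurons_by_error_correlation(acts, errors)
```

In this toy setup, neuron 3 should appear last in `order` (most correlated), while the remaining neurons, being independent of the error, rank as pruning candidates.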

    Towards a theoretical basis for modelling of hidden layer architecture in artificial neural networks

    No full text
    Artificial neural networks (ANNs) are mathematical and computational models inspired by biological neural systems. Just as biological neural networks become experts by learning from their surroundings, ANNs acquire expertise in a particular area through training. Despite their many advantages, some problems in applying artificial neural networks remain unsolved. Determining the most efficient architecture for a given task is identified as one of these major issues. This paper provides a pruning algorithm, based on the backpropagation training algorithm, to obtain an optimal ANN solution. The pruning is modelled on synaptic pruning in biological neural systems. Experiments were carried out on several well-known problems in machine learning and artificial neural networks, and the results show that the new model performs better than the initial network on the training data sets.

    Optimization of multi-layer artificial neural networks using delta values of hidden layers

    No full text
    The number of hidden layers is crucial in multilayer artificial neural networks. In general, the generalization power of the solution can be improved by increasing the number of layers. This paper presents a new method to determine the optimal architecture using a pruning technique. Unimportant neurons are identified using the delta values of the hidden layers. The modified network contains fewer neurons and shows better generalization. Moreover, it improves training speed relative to backpropagation. Experiments have been carried out on a number of test problems to verify the effectiveness of the new approach.
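The abstract identifies unimportant neurons via the delta values that backpropagation already computes for the hidden layers. A minimal sketch of that idea, assuming a single sigmoid hidden layer and squared error (details the abstract does not specify): score each hidden neuron by its mean absolute delta over a batch and keep only the highest-scoring neurons. The function and scoring rule are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def prune_by_delta(W1, W2, X, y, keep):
    """Keep the `keep` hidden neurons with the largest mean |delta|.

    W1: (n_in, n_hidden) input-to-hidden weights.
    W2: (n_hidden, n_out) hidden-to-output weights.
    Returns pruned copies of W1 and W2 plus the kept neuron indices."""
    h = sigmoid(X @ W1)                          # hidden activations
    out = sigmoid(h @ W2)                        # network output
    delta_out = (out - y) * out * (1 - out)      # output-layer delta
    delta_h = (delta_out @ W2.T) * h * (1 - h)   # hidden-layer deltas
    score = np.mean(np.abs(delta_h), axis=0)     # per-neuron importance proxy
    keep_idx = np.sort(np.argsort(score)[-keep:])
    return W1[:, keep_idx], W2[keep_idx, :], keep_idx

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
y = rng.uniform(size=(50, 1))
W1 = rng.normal(size=(4, 6))
W2 = rng.normal(size=(6, 1))
W1p, W2p, kept = prune_by_delta(W1, W2, X, y, keep=3)
```

After pruning, the smaller matrices define a network with three hidden neurons; in practice a few epochs of retraining would follow to let the remaining neurons compensate.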

    Novel Technique for Optimizing the hidden layer architecture in Artificial Neural Networks

    No full text
    The architecture of an artificial neural network has a great impact on its generalization power. More precisely, by changing the number of layers and the number of neurons in each hidden layer, the generalization ability can be changed significantly. The architecture is therefore crucial in an artificial neural network, and determining the hidden layer architecture has become a research challenge. This paper presents a pruning technique, based on the backpropagation training algorithm, for obtaining an appropriate architecture. Pruning is performed using the delta values of the hidden layers. The proposed method has been tested on several benchmark problems in artificial neural networks and machine learning. The experimental results show that the modified algorithm reduces the size of the network without degrading performance. It also reaches the desired error faster than the backpropagation algorithm.

    Mathematical modeling of hidden layer architecture in artificial neural networks

    No full text
    The performance of a multilayer artificial neural network depends strongly on the architecture of its hidden layers. Therefore, modelling of the hidden layer architecture has become a research challenge. At present, most models of hidden layer architecture are confined to neural networks with one hidden layer. However, this approach may not be the most appropriate solution for a given task. In this research we present an approach to model hidden layer architectures with an arbitrary number of layers and neurons. An approach is presented to trim the hidden layer architecture during the training cycle while meeting a pre-defined error rate. The experiments show that the new theory can train artificial neural networks in less training time, through a simpler architecture that maintains the same error rate as backpropagation.
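The abstract's distinctive point is trimming the architecture during the training cycle, only while the pre-defined error rate is still met. A hedged sketch of that control logic, with an assumed trimming criterion (smallest outgoing-weight norm) standing in for whatever rule the paper actually uses: after each epoch, drop one hidden neuron only if the current error is at or below the target.

```python
import numpy as np

def maybe_trim(W1, W2, error, target_error):
    """Trim one hidden neuron if the error target is already met.

    W1: (n_in, n_hidden), W2: (n_hidden, n_out).
    Returns (W1, W2, trimmed): unchanged weights when the error is still
    above target (or only one neuron remains), otherwise copies with the
    neuron of smallest outgoing-weight norm removed."""
    if error > target_error or W1.shape[1] <= 1:
        return W1, W2, False
    norms = np.linalg.norm(W2, axis=1)   # crude contribution proxy per neuron
    weakest = int(np.argmin(norms))
    return (np.delete(W1, weakest, axis=1),
            np.delete(W2, weakest, axis=0),
            True)

rng = np.random.default_rng(2)
W1 = rng.normal(size=(3, 5))
W2 = rng.normal(size=(5, 2))
same1, same2, trimmed = maybe_trim(W1, W2, error=0.4, target_error=0.1)   # above target
small1, small2, did = maybe_trim(W1, W2, error=0.05, target_error=0.1)    # target met
```

Calling this once per epoch inside an ordinary backpropagation loop yields the behaviour the abstract describes: the network shrinks only as long as the pre-defined error rate is maintained.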

    Improved quality of management of eclampsia patients through criteria based audit at Muhimbili National Hospital, Dar es Salaam, Tanzania : Bridging the quality gap

    Get PDF
    Background: Criteria-based audits (CBA) have been used to improve clinical management in developed countries, but have only recently been introduced in the developing world. This study discusses the use of a CBA to improve the quality of care among eclampsia patients admitted at a university teaching hospital in Dar es Salaam, Tanzania. Objective: The prevalence of eclampsia in MNH is high (approximately 6%), with the majority of cases arriving after the start of convulsions. In 2004-2005 the case-fatality rate in eclampsia was 5.1% of all pregnant women admitted for delivery (MNH obstetric database). A criteria-based audit (CBA) was used to evaluate the quality of care for eclamptic mothers admitted at Muhimbili National Hospital (MNH), Dar es Salaam, Tanzania, after implementation of the recommendations of a previous audit. Methods: A CBA of eclampsia cases was conducted at MNH. Management practices were evaluated using evidence-based criteria for appropriate care. The Ministry of Health (MOH) guidelines, local management guidelines, the WHO manual supplemented by the WHO Reproductive Health Library, standard textbooks, the Cochrane database, and reviews in peer-reviewed journals were adopted. At the initial audit in 2006, 389 case notes were assessed and compared with the standards; gaps were identified and recommendations made, followed by implementation. A re-audit of 88 cases was conducted in 2009 and compared with the initial audit. Results: There was significant improvement in the quality of patient management and outcome between the initial audit and the re-audit: review of the management plan by senior staff (76% vs. 99%; P=0.001), urine albumin test (61% vs. 99%; P=0.001), proper use of the partogram to monitor labour (75% vs. 95%; P=0.003), treatment with steroids for lung maturity (2.0% vs. 24%; P=0.001), Caesarean section within 2 hours of decision (33% vs. 61%; P=0.005), full blood count (28% vs. 93%; P=0.001), serum urea and creatinine (44% vs. 86%; P=0.001), liver enzymes (4.0% vs. 86%; P=0.001), and specialist review within 2 hours of admission (25% vs. 39%; P=0.018). However, there was no significant change in delivery within 24 hours of admission (69% vs. 63%; P=0.33). There was a significant reduction in maternal deaths (7.7% vs. 0%; P=0.001). Conclusion: CBA is applicable in low-resource settings and can help to improve the quality of care in obstetrics, including the management of pre-eclampsia and eclampsia.