    Convergence of RProp and variants

    This paper examines conditions under which the Resilient Propagation (Rprop) algorithm fails to converge, identifies limitations of the so-called Globally Convergent Rprop (GRprop) algorithm, which was previously thought to guarantee convergence, and considers pathological behaviour of the implementation of GRprop in the neuralnet software package. A new robust convergent backpropagation algorithm (ARCprop) is presented. The new algorithm builds on Rprop, but guarantees convergence by shortening steps as necessary to achieve a sufficient reduction in global error. Simulation results on four benchmark problems from the PROBEN1 collection show that the new algorithm achieves similar levels of performance to Rprop in training speed, training accuracy, and generalization.
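
    The step-shortening safeguard lends itself to a compact illustration. The following is a minimal Python sketch of that general idea, not the authors' ARCprop itself: Rprop-style per-weight step sizes combined with a global backtracking test that halves the step until the error shows a sufficient decrease. All names and constants (eta_plus, eta_minus, c) are illustrative assumptions, demonstrated on a toy quadratic.

        import numpy as np

        def rprop_with_backtracking(loss, grad, w, n_iter=100, eta_plus=1.2,
                                    eta_minus=0.5, step_min=1e-6, step_max=1.0,
                                    c=1e-4):
            # Per-weight step sizes, adapted from gradient sign agreement (Rprop).
            step = np.full_like(w, 0.1)
            g_prev = np.zeros_like(w)
            for _ in range(n_iter):
                g = grad(w)
                same = g * g_prev > 0
                flip = g * g_prev < 0
                step = np.where(same, np.minimum(step * eta_plus, step_max), step)
                step = np.where(flip, np.maximum(step * eta_minus, step_min), step)
                d = -np.sign(g) * step    # Rprop uses only the sign of the gradient
                # Global backtracking: shorten the whole step until the error shows
                # a sufficient decrease (the convergence safeguard the abstract cites).
                f0, t = loss(w), 1.0
                while loss(w + t * d) > f0 - c * t * abs(g @ d) and t > 1e-8:
                    t *= 0.5
                w = w + t * d
                g_prev = g
            return w

        # Toy usage on a convex quadratic, where convergence is easy to verify.
        A = np.diag([1.0, 10.0])
        w_opt = rprop_with_backtracking(lambda w: 0.5 * w @ A @ w,
                                        lambda w: A @ w,
                                        np.array([3.0, -2.0]))
        print(w_opt)   # should approach the minimiser [0, 0]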

    ARTIFICIAL NEURAL NETWORK APPROACH FOR THE IDENTIFICATION OF CLOVE BUDS ORIGIN BASED ON METABOLITES COMPOSITION

    This paper examines the use of an artificial neural network approach for identifying the origin of clove buds from their metabolite composition. Generally, large data sets are critical for accurate identification, since machine learning with large data sets leads to precise identification of origin. For clove buds, however, only small data sets are available, owing to the scarcity of metabolite-composition data and the high cost of extraction. The results show that backpropagation and resilient propagation, with one and two hidden layers, identify the clove buds' origin accurately. Backpropagation with one hidden layer achieves 99.91% and 99.47% accuracy on the training and testing data sets, respectively; resilient propagation with two hidden layers achieves 99.96% and 97.89%, respectively.

    An Optimized Back Propagation Learning Algorithm with Adaptive Learning Rate

    Back Propagation (BP) is a commonly used algorithm for training multilayer feed-forward artificial neural networks. However, BP is inherently slow to learn and sometimes gets trapped at local minima. These problems arise mainly from a constant, non-optimal learning rate (a fixed step size), whose value is set to an initial starting value before training begins. This fixed learning rate often leads the BP network towards failure during steepest descent. To overcome these limitations, this paper introduces an improved back propagation gradient descent with adaptive learning rate (BPGD-AL) that changes the learning rate locally during the learning process. Simulation results on selected benchmark datasets show that the adaptive learning rate significantly improves the learning efficiency of the Back Propagation algorithm.
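
    As a concrete illustration of adapting the learning rate during training, the Python sketch below implements a simple "bold driver" style rule: grow the learning rate while the error keeps falling, shrink it and reject the step when the error rises. This is a generic scheme under assumed constants (grow, shrink), not the paper's exact BPGD-AL update.

        import numpy as np

        def gd_adaptive_lr(loss, grad, w, lr=0.01, n_iter=200,
                           grow=1.05, shrink=0.5):
            f_prev = loss(w)
            for _ in range(n_iter):
                w_new = w - lr * grad(w)
                f_new = loss(w_new)
                if f_new < f_prev:        # error fell: accept the step, speed up
                    w, f_prev = w_new, f_new
                    lr *= grow
                else:                     # error rose: reject the step, slow down
                    lr *= shrink
            return w

        # Toy usage: an ill-conditioned quadratic, where any single fixed step
        # size is either too slow in one direction or divergent in the other.
        A = np.diag([1.0, 50.0])
        print(gd_adaptive_lr(lambda w: 0.5 * w @ A @ w, lambda w: A @ w,
                             np.array([1.0, 1.0])))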

    An Advanced Conjugate Gradient Training Algorithm Based on a Modified Secant Equation

    Coal-Fired Boiler Fault Prediction using Artificial Neural Networks

    Boiler faults are a critical issue in coal-fired power plants because of the high temperatures and pressures involved. The complexity of boiler design makes it difficult to investigate faults quickly enough to avoid a long shut-down. In this paper, a boiler fault prediction model based on an artificial neural network is proposed. The key influential parameters are analysed to identify their correlation with boiler performance, and the prediction model is developed to minimise the misclassification rate and mean squared error. The artificial neural network is trained on a set of boiler operational parameters; the trained model's predictions are then validated against actual fault values from collected real plant data. Two sets of initial weights were tested to verify the repeatability of correct prediction. The results show that the implemented artificial neural network provides an average prediction accuracy above 92%.
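
    The repeatability check described here (training from two sets of initial weights) can be sketched in Python with scikit-learn as follows. The synthetic features stand in for the boiler operating parameters, which are not public; the network size, seeds, and the stand-in fault rule are illustrative assumptions.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 6))   # stand-ins for temperatures, pressures, flows
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # stand-in fault indicator
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # Train the same architecture from two different initial weight sets and
        # compare held-out accuracy to check that correct prediction is repeatable.
        for seed in (1, 2):
            clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500,
                                random_state=seed).fit(X_tr, y_tr)
            print(f"initial weights #{seed}: test accuracy {clf.score(X_te, y_te):.3f}")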

    A neural network architecture for data editing in the Bank of Italy's business surveys

    This paper presents an application of neural network models to predictive classification for data quality control. Our aim is to identify data affected by measurement error in the Bank of Italy's business surveys. We build an architecture consisting of three feed-forward networks for variables related to employment, sales and investment respectively: the networks are trained on input matrices extracted from the error-free final survey database for the 2003 wave and subjected to stochastic transformations reproducing known error patterns. A binary indicator of unit perturbation is used as the output variable. The networks are trained with the Resilient Propagation learning algorithm. On the training and validation sets, correct predictions occur in about 90 per cent of the records for employment, 94 per cent for sales, and 75 per cent for investment. On independent test sets, the respective quotas average 92, 80 and 70 per cent. On our data, neural networks perform much better as classifiers than logistic regression, one of the most popular competing methods. They appear to provide a valid means of improving the efficiency of the quality control process and, ultimately, the reliability of survey data.
    Keywords: data quality, data editing, binary classification, neural networks, measurement error
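
    The training-set construction is the distinctive step and can be sketched as follows: take error-free records, apply a stochastic transformation mimicking a known error pattern to a random half of the units, and use the binary perturbation indicator as the target. The perturbation shown (an assumed digit-scale error such as a misplaced thousands separator) is illustrative, and the sketch uses scikit-learn's default solver rather than Rprop, which that library does not provide.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        clean = rng.lognormal(mean=5.0, sigma=1.0, size=(2000, 3))  # error-free records
        perturbed = rng.random(len(clean)) < 0.5   # stochastically perturb half the units
        X = clean.copy()
        X[perturbed] *= rng.choice([0.001, 1000.0], size=(int(perturbed.sum()), 1))
        y = perturbed.astype(int)                  # binary unit-perturbation indicator

        # One feed-forward network per variable group; a single group is shown here.
        net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500,
                            random_state=0).fit(np.log(X), y)
        print(f"training accuracy: {net.score(np.log(X), y):.3f}")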

    neuralnet: Training of neural networks

    Artificial neural networks are applied in many situations. neuralnet is built to train multi-layer perceptrons in the context of regression analyses, i.e. to approximate functional relationships between covariates and response variables. Thus, neural networks are used as extensions of generalized linear models. neuralnet is a very flexible package. The backpropagation algorithm and three versions of resilient backpropagation are implemented, and it provides a custom choice of activation and error functions. An arbitrary number of covariates and response variables, as well as of hidden layers, can in theory be included.
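
    The "extension of generalized linear models" framing can be made concrete with a small regression comparison. The sketch below is written in Python with scikit-learn purely for illustration, since neuralnet itself is an R package; the data-generating function and network size are assumptions.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        x = rng.uniform(-3, 3, size=(300, 1))
        y = np.sin(x).ravel() + 0.1 * rng.normal(size=300)   # nonlinear relationship

        lin = LinearRegression().fit(x, y)                   # linear-model baseline
        mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                           random_state=0).fit(x, y)         # nonlinear extension
        print(f"linear R^2: {lin.score(x, y):.3f}, MLP R^2: {mlp.score(x, y):.3f}")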