16 research outputs found

    BPGD-AG: A New Improvement Of Back-Propagation Neural Network Learning Algorithms With Adaptive Gain

    The back propagation algorithm is one of the most popular learning algorithms for training feed forward neural networks. However, its convergence is slow, mainly because the algorithm requires the designer to arbitrarily select parameters such as the network topology, initial weights and biases, learning rate, activation function, gain value in the activation function, and momentum. An improper choice of these parameters can cause the training process to come to a standstill or get stuck at local minima. Previous research demonstrated that in a back propagation algorithm, the slope of the activation function is directly influenced by a parameter referred to as ‘gain’. In this paper, the influence of the variation of ‘gain’ on the learning ability of a back propagation neural network is analysed. Multilayer feed forward neural networks have been assessed. A physical interpretation of the relationship between the gain value, the learning rate and the weight values is given. Instead of a constant ‘gain’ value, we propose an algorithm that changes the gain value adaptively for each node. The efficiency of the proposed algorithm is verified by means of simulation on a function approximation problem using sequential as well as batch modes of training. The results show that the proposed algorithm significantly improves the learning speed of the general back-propagation algorithm.
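
    To make the role of the gain concrete, the following minimal sketch (an illustration with assumed toy values, not the authors' code) shows a sigmoid with an explicit per-node gain and a gradient step that adapts each node's gain alongside its weights:

        import numpy as np

        def sigmoid(net, gain):
            # Logistic activation with an explicit slope ('gain') parameter.
            return 1.0 / (1.0 + np.exp(-gain * net))

        rng = np.random.default_rng(0)
        x = rng.normal(size=4)                   # toy input
        W = rng.normal(size=(3, 4))              # one layer, three nodes
        gain = np.ones(3)                        # conventional fixed gain is 1
        target = np.array([0.2, 0.8, 0.5])       # toy targets
        lr = 0.1                                 # assumed learning rate

        for _ in range(100):
            net = W @ x
            out = sigmoid(net, gain)
            delta = (out - target) * out * (1.0 - out)
            W -= lr * np.outer(delta * gain, x)  # weight gradient is scaled by gain
            gain -= lr * delta * net             # adapt each node's gain as well

    Raising a node's gain steepens its activation and proportionally amplifies its weight gradients, which is the coupling between gain, learning rate and weights that the paper interprets.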

    Optimal Parameter Selection Using Three-term Back Propagation Algorithm for Data Classification

    The back propagation (BP) algorithm is the most popular supervised learning method for multi-layered feed forward neural networks. It has been successfully deployed in numerous practical problems and disciplines. Regardless of its popularity, BP is still known for major drawbacks such as easily getting stuck in local minima and slow convergence, since it uses the Gradient Descent (GD) method to train the network. Over the years, researchers have proposed many improved modifications of the BP learning algorithm, but the local minima problem remains unresolved. Therefore, to address these inherent problems, this paper proposes the BPGD-A3T algorithm, which introduces three adaptive parameters in BP: gain, momentum and learning rate. The performance of the proposed BPGD-A3T algorithm is compared with BPGD with two adaptive parameters (BPGD-2T), BP with adaptive gain (BPGD-AG) and the conventional BP algorithm (BPGD) by means of simulations on classification datasets. The simulation results show that the proposed BPGD-A3T performs best, achieving the highest accuracy on all datasets compared with the other algorithms.
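
    As a hedged illustration of the three-term idea (the adaptation heuristics below are assumptions for the sketch, not the paper's exact rules), the update combines a gain-scaled gradient term with a momentum term, and all three parameters react to the training error:

        import numpy as np

        def loss_and_grad(w):
            # Toy quadratic loss standing in for the backprop error surface.
            return 0.5 * np.sum(w ** 2), w

        w = np.array([2.0, -3.0, 1.5])
        dw_prev = np.zeros_like(w)
        lr, momentum, gain = 0.1, 0.9, 1.0   # the three adaptive terms
        err_prev = np.inf

        for _ in range(50):
            err, grad = loss_and_grad(w)
            if err < err_prev:               # error fell: accelerate gently
                lr *= 1.05
                gain *= 1.02
            else:                            # error rose: damp the step
                lr *= 0.7
                momentum *= 0.9
            dw_prev = -lr * gain * grad + momentum * dw_prev
            w += dw_prev
            err_prev = err

    In the actual algorithm the gain is the per-node activation slope rather than a global gradient multiplier; a scalar is used here only to keep the sketch short.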

    The Effect of Adaptive Gain and Adaptive Momentum in Improving Training Time of Gradient Descent Back Propagation Algorithm on Classification Problems

    The back propagation algorithm has been successfully applied to a wide range of practical problems. Since the algorithm uses a gradient descent method, it has some limitations: slow learning convergence and easy convergence to local minima. The convergence behaviour of the back propagation algorithm depends on the choice of initial weights and biases, network topology, learning rate, momentum, activation function and the value of the gain in the activation function. Previous researchers demonstrated that in a feed forward algorithm, the slope of the activation function is directly influenced by a parameter referred to as ‘gain’. This research proposes an algorithm for improving the performance of the current working back propagation algorithm, the Gradient Descent Method with Adaptive Gain, by changing the momentum coefficient adaptively for each node. The influence of the adaptive momentum together with the adaptive gain on the learning ability of a neural network is analysed. Multilayer feed forward neural networks have been assessed. A physical interpretation of the relationship between the momentum value, the learning rate and the weight values is given. The efficiency of the proposed algorithm, compared with the conventional Gradient Descent Method and the current Gradient Descent Method with Adaptive Gain, was verified by means of simulation on three benchmark problems. The simulation results demonstrate that the proposed algorithm converged faster, with an improvement ratio of nearly 1.8 on the Wisconsin breast cancer dataset, 6.6 on the Mushroom problem, and 36% better on the Soybean dataset. The results clearly show that the proposed algorithm significantly improves the learning speed of the current gradient descent back-propagation algorithm.
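
    One plausible per-node momentum rule in this spirit (an assumption for illustration, not the paper's formula) grows a node's momentum while successive gradients agree in sign and cuts it when they flip:

        import numpy as np

        def adapt_momentum(mom, grad, grad_prev, up=1.05, down=0.5, hi=0.99):
            # Per-node momentum: grow while the gradient direction is stable,
            # shrink sharply when it oscillates.
            agree = np.sign(grad) == np.sign(grad_prev)
            return np.clip(np.where(agree, mom * up, mom * down), 0.0, hi)

        mom = np.full(3, 0.5)
        grad_prev = np.array([0.2, -0.1, 0.3])
        grad = np.array([0.1, 0.2, 0.25])
        print(adapt_momentum(mom, grad, grad_prev))  # [0.525 0.25  0.525]

    The printed values show the second node's momentum halved after its gradient changed sign, while the two stable nodes accelerated.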

    Second Order Learning Algorithm for Back Propagation Neural Networks

    Training of artificial neural networks (ANN) is normally a time consuming task, due to the iterative search imposed by the implicit nonlinearity of the network behaviour. In this work an improvement to ‘batch-mode’ offline training methods, gradient based or gradient free, is proposed. The new procedure computes and improves the search direction along the negative gradient by introducing the ‘gain’ value of the activation functions and calculating the negative gradient of the error with respect to the weights as well as the ‘gain’ values when minimizing the error function. The main advantage of this new procedure is that it is easy to incorporate into other faster optimization algorithms such as the conjugate gradient method and the Quasi-Newton method. The performance of the proposed method, implemented in the conjugate gradient method and the Quasi-Newton method, is demonstrated by comparing the simulation results with the neural network toolbox on the chosen benchmark. The results show that the proposed method considerably improves the convergence rate and significantly speeds up the learning process of the general back propagation algorithm because of its new efficient search direction.
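
    Because the gains simply extend the parameter vector, the idea drops into an off-the-shelf optimizer with little effort. The following sketch (a toy one-node model with assumed data, not the paper's experiments) hands the joint weight-and-gain vector to SciPy's conjugate gradient routine:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        X = rng.normal(size=(20, 3))                  # toy inputs
        T = (X.sum(axis=1) > 0).astype(float)         # toy targets

        def error(p):
            # p packs three weights followed by one activation gain.
            w, gain = p[:3], p[3]
            y = 1.0 / (1.0 + np.exp(-gain * (X @ w)))
            return 0.5 * np.mean((y - T) ** 2)

        p0 = np.concatenate([rng.normal(size=3), [1.0]])  # gain starts at 1
        res = minimize(error, p0, method="CG")            # conjugate gradient
        print(res.fun, res.x[3])                          # final error and gain

    Supplying the analytic gradient with respect to both the weights and the gains (the paper's contribution) via the jac argument would replace the finite-difference gradients SciPy falls back on here.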

    Hybrid system prediction for the stock market: The case of transitional markets

    Research on Fault Diagnosis Method Based on Rule Base Neural Network

    The relationship between a fault phenomenon and its cause is usually nonlinear, which limits the accuracy of fault location, and neural networks are effective at dealing with nonlinear problems. In order to improve the efficiency of uncertain fault diagnosis based on neural networks, a neural network fault diagnosis method based on a rule base is put forward. First, the structure of the BP neural network is built and the learning rule is given. Then, the rule base is built using fuzzy theory. An improved fuzzy neural construction model is designed, in which the calculation methods for the node function and the membership function are also given. Simulation results confirm the effectiveness of this method.
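
    A minimal sketch of the fuzzy side of such a model (the membership centres, widths and rule below are illustrative assumptions, not the paper's values) turns a crisp symptom reading into rule firing strengths that a network layer could then map to fault causes:

        import numpy as np

        def gaussian_mf(x, centre, sigma):
            # Degree to which reading x belongs to a fuzzy set.
            return np.exp(-((x - centre) ** 2) / (2.0 * sigma ** 2))

        reading = 0.72                                    # hypothetical symptom level
        mu_low = gaussian_mf(reading, centre=0.2, sigma=0.15)
        mu_high = gaussian_mf(reading, centre=0.8, sigma=0.15)
        # A rule such as "IF symptom IS high THEN cause A" fires with
        # strength mu_high; min() would implement a fuzzy AND of premises.
        print({"low": round(mu_low, 3), "high": round(mu_high, 3)})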

    Damage of reinforced concrete beams consisting modified artificial polyethylene aggregate (MAPEA) under low impact load

    The impact damage of reinforced concrete beams subjected to low velocity impact loading in the ultimate load range is explored. In this study, impact tests were carried out on reinforced concrete beams containing Modified Artificial Polyethylene Aggregate (MAPEA), in which an impact weight of approximately 100 kg was dropped three times onto each beam specimen until it failed. Waste plastic bags encapsulated by glass powder, known as MAPEA, were used as a replacement for coarse aggregate. Twelve beam specimens of size 120 mm x 150 mm x 800 mm were categorized into three groups of four specimens each: normal reinforced concrete (NRC), reinforced concrete with MAPEA concrete block infill (RCAI) and reinforced concrete with 9% MAPEA as coarse aggregate (RC9A). All specimens were tested under low velocity impact loads with drop heights of 0.32 m and 1.54 m (impact velocities of 2.5 m/s and 5.5 m/s). Comparisons were made between the three types of beams in terms of failure mode (shear and flexural) and final displacement. The laboratory test results showed that the RC9A beams produced fewer cracks and lower residual displacement.

    PROPOSED METHODOLOGY FOR OPTIMIZING THE TRAINING PARAMETERS OF A MULTILAYER FEED-FORWARD ARTIFICIAL NEURAL NETWORKS USING A GENETIC ALGORITHM

    An artificial neural network (ANN), or simply "neural network" (NN), is a powerful mathematical or computational model inspired by the structure and/or functional characteristics of biological neural networks. Despite the fact that ANNs have been developing rapidly for many years, there are still challenges in developing an ANN model that performs effectively for the problem at hand. ANNs can be categorized into three main types: single layer, recurrent networks and multilayer feed-forward networks. In a multilayer feed-forward ANN, actual performance is highly dependent on the selection of the architecture and training parameters, yet a systematic method for optimizing these parameters is still an active research area. This work focuses on multilayer feed-forward ANNs due to their generalization capability, structural simplicity, and ease of mathematical analysis. Even though several rules for the optimization of multilayer feed-forward ANN parameters are available in the literature, most networks are still calibrated via a trial-and-error procedure, which depends mainly on the type of problem and the past experience and intuition of the expert. To overcome these limitations, there have been attempts to use genetic algorithms (GA) to optimize some of these parameters, but most, if not all, of the existing approaches only partially cover the architecture and training parameters. In contrast, the GAANN approach presented here covers most aspects of multilayer feed-forward ANNs in a more comprehensive way. This research focuses on the use of a binary-encoded genetic algorithm (GA) to implement efficient search strategies for the optimal architecture and training parameters of a multilayer feed-forward ANN. In particular, GA is utilized to determine the optimal number of hidden layers, number of neurons in each hidden layer, type of training algorithm, type of activation function of hidden and output neurons, initial weights, learning rate, momentum term, and epoch size of a multilayer feed-forward ANN. In this thesis, the approach has been analyzed and algorithms that simulate the new approach have been mapped out.
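
    As a hedged sketch of what a binary-encoded chromosome for such a search might look like (field names, bit widths and value ranges below are illustrative assumptions, not the thesis's actual encoding), each hyperparameter occupies a fixed slice of the bit string:

        import random

        FIELDS = [  # (name, bits, decoder)
            ("hidden_layers", 2, lambda v: 1 + v),              # 1..4
            ("neurons_per_layer", 5, lambda v: 1 + v),          # 1..32
            ("activation", 1, lambda v: ("sigmoid", "tanh")[v]),
            ("learning_rate", 4, lambda v: 0.01 + 0.02 * v),    # 0.01..0.31
            ("momentum", 3, lambda v: v / 8.0),                 # 0.0..0.875
        ]

        def decode(bits):
            # Slice the chromosome into fields and map each to a value.
            params, i = {}, 0
            for name, width, dec in FIELDS:
                v = int("".join(map(str, bits[i:i + width])), 2)
                params[name] = dec(v)
                i += width
            return params

        n_bits = sum(width for _, width, _ in FIELDS)
        chromosome = [random.randint(0, 1) for _ in range(n_bits)]
        print(decode(chromosome))

    Crossover and mutation then operate on the raw bit string, while fitness is the validation performance of the decoded and trained network.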