12 research outputs found

    An improvement of back propagation algorithm using Halley third order optimisation method for classification problems

    Back Propagation (BP) has proven to be a robust algorithm for a variety of connectionist learning problems; it is applicable to general functional induction and provides a computationally efficient method. The algorithm utilises a first order optimisation method, namely Gradient Descent (GD), which attempts to minimise the error of the network. Nevertheless, some major issues need to be considered. The GD method does not perform well in large scale applications or when higher learning performance is required. Moreover, it offers no certainty of finding the global minimum of the error function, and its behaviour generally depends on parameter selection. Thus, improving the learning efficiency of BP has become an important area of research, specifically from an optimisation point of view. Variations of second order optimisation methods have been proposed which require fewer iterations to converge; an issue with these methods, however, is that they occasionally converge to undesired local minima. Inspired by third order optimisation methods, which can solve unconstrained optimisation problems efficiently in mathematical research, this research proposes new computational Halley methods, which are third order optimisation methods, to improve the learning efficiency of the BP algorithm: Halley with Broyden-Fletcher-Goldfarb-Shanno (H-BFGS) and Halley with Davidon-Fletcher-Powell (H-DFP). The efficiency of the proposed methods is compared with first and second order optimisation methods by means of simulation on datasets from the UCI Machine Learning Repository, Knowledge Extraction based on Evolutionary Learning (KEEL) and Kaggle. The simulation results show that the highest improvement of H-BFGS in generalisation accuracy is on the Voice Gender classification, with a 43.33% improvement for the 60:40 data division. For H-DFP, the highest improvement in generalisation accuracy is on the Seeds classification, with a 41.73% improvement for the 70:30 data division. Thus, the proposed methods provide significant improvement and promising results in learning Artificial Neural Networks.
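
    For context, a minimal one-dimensional sketch of the third order idea: Halley's root-finding iteration applied to the stationarity condition E'(w) = 0 of a scalar error function. The test function and its derivatives are illustrative assumptions; the thesis's H-BFGS and H-DFP variants instead pair the Halley step with quasi-Newton curvature approximations and are not reproduced here.

        # Halley (third order) update applied to E'(w) = 0:
        #   w <- w - 2 E'(w) E''(w) / (2 E''(w)^2 - E'(w) E'''(w))
        def halley_minimise(dE, d2E, d3E, w, steps=20, eps=1e-12):
            """Drive E'(w) to zero with Halley's third order iteration."""
            for _ in range(steps):
                g, h, t = dE(w), d2E(w), d3E(w)
                denom = 2.0 * h * h - g * t
                if abs(denom) < eps:          # guard against division by ~0
                    break
                w -= 2.0 * g * h / denom      # Halley step
            return w

        # Illustrative error surface E(w) = (w - 3)^4 (an assumed test function).
        w_star = halley_minimise(dE=lambda w: 4 * (w - 3) ** 3,
                                 d2E=lambda w: 12 * (w - 3) ** 2,
                                 d3E=lambda w: 24 * (w - 3),
                                 w=0.0)
        print(w_star)   # approaches the minimiser w = 3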

    The effect of adaptive parameters on the performance of back propagation

    The Back Propagation algorithm and its variations on Multilayered Feedforward Networks are widely used in many applications. However, this algorithm is well known to suffer from the local minima problem, particularly caused by neuron saturation in the hidden layer. Most existing approaches modify the learning model to add a random factor, which counters the tendency to sink into local minima. However, random perturbations of the search direction and various kinds of stochastic adjustment to the current set of weights are not effective in enabling a network to escape from local minima, causing the network to fail to converge to a global minimum within a reasonable number of iterations. Thus, this research proposes a new method, Back Propagation Gradient Descent with Adaptive Gain, Adaptive Momentum and Adaptive Learning Rate (BPGD-AGAMAL), which modifies the existing Back Propagation Gradient Descent algorithm by adaptively changing the gain, momentum coefficient and learning rate. In this method, each training pattern has its own activation functions for the neurons in the hidden layer. The activation functions are adjusted by adapting the gain parameters together with the momentum and learning rate values during the learning process. The efficiency of the proposed algorithm is compared with conventional Back Propagation Gradient Descent and Back Propagation Gradient Descent with Adaptive Gain by means of simulation on six benchmark problems, namely breast cancer, card, glass, iris, soybean and thyroid. The results show that the proposed algorithm substantially improves the learning process of the conventional Back Propagation algorithm.
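
    The gist of adapting all three parameters at once can be sketched as a simple epoch-level rule: loosen the step when the error falls, tighten it when the error rises. The factors and bounds below are assumptions for illustration only, not the adaptation rules derived in the thesis.

        # Hypothetical epoch-level adaptation of learning rate, momentum and
        # gain; the factors and clamps are assumed, not BPGD-AGAMAL's rules.
        def adapt_parameters(err, prev_err, lr, momentum, gain,
                             up=1.05, down=0.7):
            if err < prev_err:                       # progress: be bolder
                lr = min(lr * up, 1.0)
                momentum = min(momentum * up, 0.99)
                gain = gain * up
            else:                                    # overshoot: back off
                lr = max(lr * down, 1e-4)
                momentum = max(momentum * down, 0.1)
                gain = max(gain * down, 0.5)
            return lr, momentum, gain

        lr, momentum, gain = 0.1, 0.9, 1.0
        prev_err = float("inf")
        for epoch, err in enumerate([0.9, 0.7, 0.75, 0.6]):   # mock errors
            lr, momentum, gain = adapt_parameters(err, prev_err,
                                                  lr, momentum, gain)
            prev_err = err
            print(epoch, round(lr, 4), round(momentum, 4), round(gain, 4))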

    BPGD-AG: A New Improvement Of Back-Propagation Neural Network Learning Algorithms With Adaptive Gain

    The back propagation algorithm is one of the most popular learning algorithms for training feed forward neural networks. However, its convergence is slow, mainly because the algorithm requires the designer to arbitrarily select parameters such as the network topology, initial weights and biases, learning rate, activation function, value of the gain in the activation function and momentum. An improper choice of these parameters can bring the training process to a standstill or leave it stuck at local minima. Previous research demonstrated that in a back propagation algorithm, the slope of the activation function is directly influenced by a parameter referred to as 'gain'. In this paper, the influence of varying the 'gain' on the learning ability of a back propagation neural network is analysed. Multilayer feed forward neural networks have been assessed, and a physical interpretation of the relationship between the gain value, the learning rate and the weight values is given. Instead of a constant 'gain' value, we propose an algorithm that changes the gain value adaptively for each node. The efficiency of the proposed algorithm is verified by means of simulation on a function approximation problem using sequential as well as batch modes of training. The results show that the proposed algorithm significantly improves the learning speed of the general back-propagation algorithm.
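
    The role of the gain is easy to state precisely: for a logistic activation f(net) = 1 / (1 + exp(-c * net)), the slope f'(net) = c * f * (1 - f) grows linearly with the gain c. The sketch below shows this relationship together with a hypothetical per-node gradient step on c; the paper's actual adaptive-gain rule is not reproduced.

        import numpy as np

        # Logistic activation with gain c; its slope scales linearly with c.
        def sigmoid(net, gain):
            return 1.0 / (1.0 + np.exp(-gain * net))

        def dsigmoid_dnet(net, gain):
            f = sigmoid(net, gain)
            return gain * f * (1.0 - f)      # slope is proportional to gain

        def dsigmoid_dgain(net, gain):
            f = sigmoid(net, gain)
            return net * f * (1.0 - f)       # sensitivity of output to gain

        net = np.array([-2.0, 0.0, 2.0])
        for c in (0.5, 1.0, 2.0):
            print(c, dsigmoid_dnet(net, c))  # steeper slope as gain grows

        # Hypothetical per-node gain adaptation: a gradient-style step on c,
        # with delta standing in for the back-propagated error term.
        gain, delta, eta = np.ones(3), 0.3, 0.1
        gain += eta * delta * dsigmoid_dgain(net, gain)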

    Sistem Pengambilan Tanah (Perbicaraan) / Land Acquisition System (Trial)

    Sistem Pengambilan Tanah (Perbicaraan) is a land acquisition system based on the Land Acquisition Act 1960 (Akta Pengambilan Tanah 1960) that focuses on the hearing process in Terengganu. The system was developed to replace the manual method with a systematic, web-based application. Its purpose is to remedy the weaknesses of the manual method, which is seen as inefficient: Forms G and H take a long time to complete, and the information on land acquisition forms is inconsistent from one district to another. The system provides forms that users fill in before a hearing takes place. On the day of the hearing, users only need to update the information according to the outcome of the hearing and can then print Forms G and H for distribution to the interested parties. Through this system, the distribution of Forms G and H is accelerated and the information on those forms is standardised. A prototype model was used to develop the system, with Apache as the web server and MySQL as the database. The programming languages used are Hypertext Preprocessor (PHP) and Hypertext Mark-up Language (HTML). In general, the system was developed to assist the staff of the land acquisition division of the Pejabat Pengarah Tanah Dan Galian (PTG) Terengganu and the Terengganu District Office by increasing productivity, since it speeds up the process.

    The Effect of Adaptive Gain and Adaptive Momentum in Improving Training Time of Gradient Descent Back Propagation Algorithm on Classification Problems

    The back propagation algorithm has been successfully applied to a wide range of practical problems. Since this algorithm uses a gradient descent method, it has some limitations: slow learning convergence and easy entrapment in local minima. The convergence behaviour of the back propagation algorithm depends on the choice of initial weights and biases, network topology, learning rate, momentum, activation function and value of the gain in the activation function. Previous researchers demonstrated that in a feed forward algorithm, the slope of the activation function is directly influenced by a parameter referred to as 'gain'. This research proposes an algorithm for improving the performance of the current working back propagation algorithm, Gradient Descent with Adaptive Gain, by changing the momentum coefficient adaptively for each node. The influence of adaptive momentum together with adaptive gain on the learning ability of a neural network is analysed. Multilayer feed forward neural networks have been assessed, and a physical interpretation of the relationship between the momentum value, the learning rate and the weight values is given. The efficiency of the proposed algorithm, compared with the conventional Gradient Descent method and the current Gradient Descent method with Adaptive Gain, was verified by means of simulation on three benchmark problems. The simulation results demonstrate that the proposed algorithm converged faster, with an improvement ratio of nearly 1.8 on the Wisconsin breast cancer dataset, 6.6 on the Mushroom problem, and 36% better performance on the Soybean dataset. The results clearly show that the proposed algorithm significantly improves the learning speed of the current gradient descent back-propagation algorithm.
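
    The update this builds on is the classical momentum rule, dw(t) = -eta * grad + alpha * dw(t-1). The sign-based scheme for varying alpha below is an assumed illustration of per-node adaptation, not the thesis's own rule.

        import numpy as np

        # Gradient descent with momentum; alpha is adapted per weight with an
        # assumed heuristic (grow it while the gradient keeps its sign).
        def adapt_alpha(alpha, grad, grad_prev, up=1.02, down=0.8):
            same = np.sign(grad) == np.sign(grad_prev)
            return np.clip(np.where(same, alpha * up, alpha * down),
                           0.1, 0.99)

        eta = 0.1
        w = np.zeros(3)
        dw_prev = np.zeros(3)
        grad_prev = np.zeros(3)
        alpha = np.full(3, 0.9)
        for grad in (np.array([0.4, -0.2, 0.1]),        # mock gradients
                     np.array([0.3, 0.1, 0.1])):
            alpha = adapt_alpha(alpha, grad, grad_prev)
            dw = -eta * grad + alpha * dw_prev          # momentum update
            w, dw_prev, grad_prev = w + dw, dw, grad
        print(w, alpha)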

    An Optimized Back Propagation Learning Algorithm with Adaptive Learning Rate

    Back Propagation (BP) is a commonly used algorithm for training multilayer feed-forward artificial neural networks by optimising the performance of the network. However, BP is inherently slow in learning and sometimes gets trapped at local minima. These problems occur mainly due to a constant, non-optimal learning rate (a fixed step size), which is set to an initial starting value before training begins and never adjusted. This fixed learning rate often leads the BP network towards failure during steepest descent. To overcome these limitations of BP, this paper introduces an improvement, back propagation gradient descent with adaptive learning rate (BPGD-AL), which changes the values of the learning rate locally during the learning process. The simulation results on selected benchmark datasets show that the adaptive learning rate significantly improves the learning efficiency of the Back Propagation algorithm.
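
    A locally adaptive learning rate means every weight carries its own step size. Below is a minimal sketch in the delta-bar-delta spirit, assuming an additive increase while a weight's gradient keeps its sign and a multiplicative cut when it flips; BPGD-AL's exact local rule may differ.

        import numpy as np

        # Per-weight learning rates: grown additively while the gradient
        # keeps its sign, cut multiplicatively when it flips (assumed rule).
        def local_lr_step(w, grad, lr, grad_prev, kappa=0.01, phi=0.5):
            same = grad * grad_prev > 0
            lr = np.where(same, lr + kappa, lr * phi)
            return w - lr * grad, lr

        w = np.array([1.0, -1.0])
        lr = np.full(2, 0.05)
        grad_prev = np.zeros(2)
        for grad in (np.array([0.6, -0.4]),             # mock gradients
                     np.array([0.5, 0.4])):
            w, lr = local_lr_step(w, grad, lr, grad_prev)
            grad_prev = grad
        print(w, lr)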

    An Improved Back Propagation Learning Algorithm using Second Order Methods with Gain Parameter

    The Back Propagation (BP) algorithm is one of the oldest learning techniques used with artificial neural networks (ANN) and has been implemented successfully in various practical problems. However, the algorithm still faces some drawbacks, such as getting stuck easily at local minima and needing a long time to converge on an acceptable solution. Recently, the introduction of second order methods has brought a significant improvement to learning in BP, but these methods still suffer from drawbacks such as slow convergence and complexity. To overcome these limitations, this research proposes a modified approach for BP that combines the Conjugate Gradient and Quasi-Newton second order methods with a 'gain' parameter. The performance of the proposed approach is evaluated in terms of the lowest number of epochs, lowest CPU time and highest accuracy on five benchmark classification datasets: Glass, Horse, 7-Bit Parity, Indian Liver Patient and Lung Cancer. The results show that the proposed second order methods with 'gain' perform better than the BP algorithm.
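
    The quasi-Newton half of the proposal rests on standard machinery such as the BFGS inverse-Hessian update, sketched below on an assumed quadratic error surface; coupling it with the 'gain' parameter is the paper's contribution and is not reproduced here.

        import numpy as np

        # Standard BFGS update of the inverse-Hessian estimate H:
        #   H <- (I - rho s y^T) H (I - rho y s^T) + rho s s^T,
        # with s = w_next - w, y = grad_next - grad, rho = 1 / (y^T s).
        def bfgs_update(H, s, y):
            ys = y @ s
            if ys <= 1e-12:            # skip when curvature condition fails
                return H
            rho = 1.0 / ys
            I = np.eye(len(s))
            V = I - rho * np.outer(s, y)
            return V @ H @ V.T + rho * np.outer(s, s)

        # Quasi-Newton iteration on an assumed quadratic error E(w) = w'Aw/2.
        A = np.array([[1.0, 0.2], [0.2, 0.5]])
        grad = lambda w: A @ w
        w, H = np.array([1.0, 1.0]), np.eye(2)
        for _ in range(5):
            w_next = w - H @ grad(w)                    # quasi-Newton step
            H = bfgs_update(H, w_next - w, grad(w_next) - grad(w))
            w = w_next
        print(w)        # approaches the minimiser at the origin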