
    Bidirectional Learning in Recurrent Neural Networks Using Equilibrium Propagation

    Neurobiologically plausible learning algorithms that let recurrent neural networks perform supervised learning are a neglected area of study. Equilibrium propagation is a recent synthesis of several ideas from biological and artificial neural network research that uses a continuous-time, energy-based neural model with a local learning rule. However, despite operating on recurrent networks, equilibrium propagation has only been applied to discriminative categorization tasks. This thesis generalizes equilibrium propagation to bidirectional learning with asymmetric weights. By simultaneously learning the discriminative and generative transformations between a set of data points and their corresponding category labels, bidirectional equilibrium propagation exploits recurrence and weight asymmetry to share related but non-identical representations within the network. Experiments on an artificial dataset demonstrate that both transformations can be learned, and that asymmetric-weight networks can generalize their discriminative training to the untrained generative task.
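    Below is a minimal, illustrative sketch of the standard two-phase equilibrium propagation update (free phase, weakly clamped phase, then a local contrastive weight change) on a tiny layered energy network. It assumes symmetric weights for simplicity; the thesis's bidirectional, asymmetric-weight variant is not reproduced here, and all sizes, rates, and function names are placeholders.

```python
import numpy as np

rho = lambda s: np.clip(s, 0.0, 1.0)  # hard-sigmoid activation

def relax(x, W1, W2, beta=0.0, target=None, steps=50, dt=0.1):
    """Run the continuous-time dynamics to a fixed point; beta > 0
    weakly clamps the output units toward the target."""
    h = np.zeros(W1.shape[1])
    o = np.zeros(W2.shape[1])
    for _ in range(steps):
        dh = -h + rho(x) @ W1 + rho(o) @ W2.T   # descend the energy in h
        do = -o + rho(h) @ W2                   # descend the energy in o
        if beta > 0:
            do += beta * (target - o)           # weak clamping force
        h, o = h + dt * dh, o + dt * do
    return h, o

def eqprop_step(x, target, W1, W2, beta=0.5, lr=0.1):
    h0, o0 = relax(x, W1, W2)                    # free phase
    h1, o1 = relax(x, W1, W2, beta, target)      # weakly clamped phase
    # Local contrastive rule: compare correlations at the two fixed points
    W1 += lr / beta * (np.outer(rho(x), rho(h1)) - np.outer(rho(x), rho(h0)))
    W2 += lr / beta * (np.outer(rho(h1), rho(o1)) - np.outer(rho(h0), rho(o0)))
    return W1, W2

# Toy usage: one update on random data (all sizes are placeholders)
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(3, 4))
W2 = rng.normal(scale=0.1, size=(4, 2))
W1, W2 = eqprop_step(rng.random(3), np.array([1.0, 0.0]), W1, W2)
```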

    Extreme Learning Machine for land cover classification

    This paper explores the potential of an extreme learning machine (ELM) based supervised classification algorithm for land cover classification. In comparison to a back-propagation neural network, which requires setting several user-defined parameters and may converge to local minima, an extreme learning machine requires setting only one parameter and produces a unique solution. An ETM+ multispectral data set (England) was used to judge the suitability of the extreme learning machine for remote sensing classification. A back-propagation neural network was used as a comparison in terms of classification accuracy and computational cost. Results suggest that, on this data set, the extreme learning machine performs as well as the back-propagation neural network in terms of classification accuracy, while its computational cost is very small in comparison. (Comment: 6 pages, mapindia 2008 conference)
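    As a hedged sketch of the technique described above: an ELM fixes random input weights and hidden biases, then solves the output weights in one closed-form step via the Moore-Penrose pseudoinverse, so the single user-set parameter is the number of hidden nodes. The toy data below merely stands in for the multispectral bands; all names and sizes are illustrative.

```python
import numpy as np

def elm_train(X, Y, n_hidden=50, seed=0):
    """X: (n_samples, n_features); Y: one-hot targets (n_samples, n_classes)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # hidden activations (sigmoid)
    beta = np.linalg.pinv(H) @ Y                  # output weights via pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.argmax(H @ beta, axis=1)

# Toy usage with random data standing in for the multispectral bands
X = np.random.default_rng(1).normal(size=(200, 6))   # e.g. 6 spectral bands
labels = (X[:, 0] + X[:, 1] > 0).astype(int)         # synthetic 2-class labels
Y = np.eye(2)[labels]
W, b, beta = elm_train(X, Y)
print("train accuracy:", np.mean(elm_predict(X, W, b, beta) == labels))
```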

    Learning enhancement of radial basis function network with particle swarm optimization

    The back-propagation (BP) algorithm is the most common technique for Artificial Neural Network (ANN) learning, including the Radial Basis Function (RBF) Network. However, BP has major disadvantages: its convergence rate is relatively slow and it tends to become trapped in local minima. To overcome these problems, Particle Swarm Optimization (PSO) has been implemented to enhance ANN learning and improve network performance in terms of convergence rate and accuracy. In a Back Propagation Radial Basis Function Network (BP-RBFN), many elements must be considered, including the number of input nodes, hidden nodes and output nodes, the learning rate, bias, minimum error, and activation/transfer functions; these elements affect the speed of RBF Network learning. In this study, PSO is incorporated into the RBF Network to enhance its learning performance. Two algorithms were developed for error optimization, Back Propagation of Radial Basis Function Network (BP-RBFN) and Particle Swarm Optimization of Radial Basis Function Network (PSO-RBFN), to seek and generate better network performance. The results show that PSO-RBFN gives promising outputs with a faster convergence rate and better classification compared to BP-RBFN.
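    A minimal sketch of the hybrid idea, assuming fixed RBF centers drawn from the data and a standard global-best PSO searching only the output weights; swarm size, inertia, and acceleration coefficients are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1).astype(float)   # toy labels

centers = X[rng.choice(len(X), 10, replace=False)]     # fixed RBF centers (assumption)
Phi = np.exp(-np.linalg.norm(X[:, None] - centers, axis=2) ** 2)  # RBF design matrix

def mse(w):
    return np.mean((Phi @ w - y) ** 2)

# Standard global-best PSO over the 10 output weights
n_particles, dim = 30, centers.shape[0]
pos = rng.normal(size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_err = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_err)]

for _ in range(200):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    err = np.array([mse(p) for p in pos])
    improved = err < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], err[improved]
    gbest = pbest[np.argmin(pbest_err)]

print("best training MSE:", mse(gbest))
```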

    WCBP: A new water cycle based back propagation algorithm for data classification

    The Water Cycle (WC) algorithm is a modern nature-inspired meta-heuristic that provides derivative-free solutions to complex optimization problems. The back-propagation neural network (BPNN) algorithm performs well on many complex data types, but it suffers from network stagnation and local minima. This paper therefore proposes combining the WC algorithm with the BPNN algorithm to solve the local-minima problem of the gradient-descent trajectory. The performance of the proposed Water Cycle based Back-Propagation (WCBP) algorithm is compared with the conventional BPNN, ABC-BP, and ABC-LM algorithms on selected benchmark classification problems from the UCI Machine Learning Repository. The simulation results show that the BPNN training process is greatly enhanced when combined with the WC algorithm.
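    As an illustrative sketch of the hybrid, assuming the hybridization amounts to seeding gradient-descent back-propagation with a population-based search: a heavily simplified water-cycle phase (keeping only the streams-flow-toward-the-sea mechanic, with evaporation and raining omitted) positions the weights of a tiny one-hidden-layer network, then plain BP refines them. All sizes, rates, and labels are assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)            # toy XOR-like labels

def unpack(w):                                       # flat vector -> (W1, b1, W2, b2)
    return w[:8].reshape(2, 4), w[8:12], w[12:16], w[16]

def forward(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))
    return h, p

def loss(w):
    _, p = forward(w)
    return np.mean((p - y) ** 2)

# --- simplified water-cycle phase: raindrops flow toward the sea (best) ---
pop = rng.normal(size=(25, 17))
for _ in range(100):
    sea = pop[np.argmin([loss(p) for p in pop])]
    pop += rng.random((25, 1)) * 2.0 * (sea - pop)   # stream-to-sea flow update
w = pop[np.argmin([loss(p) for p in pop])].copy()

# --- back-propagation phase: gradient descent refines the WC solution ---
for _ in range(500):
    h, p = forward(w)
    W1, b1, W2, b2 = unpack(w)
    d_out = 2 * (p - y) * p * (1 - p) / len(y)       # dMSE/d(pre-sigmoid)
    gW2, gb2 = h.T @ d_out, d_out.sum()
    d_h = np.outer(d_out, W2) * (1 - h ** 2)         # backprop through tanh layer
    gW1, gb1 = X.T @ d_h, d_h.sum(axis=0)
    w -= 0.5 * np.concatenate([gW1.ravel(), gb1, gW2, [gb2]])

print("final MSE:", loss(w))
```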