29 research outputs found

    A Comparison of PSO and Backpropagation for Training RBF Neural Networks for Identification of a Power System with STATCOM

    The backpropagation algorithm is the most commonly used algorithm for training artificial neural networks. While it is a straightforward procedure, it suffers from extensive computation, relatively slow convergence, and possible divergence under certain conditions. Its efficiency as the training algorithm of a radial basis function neural network (RBFN) is compared with that of particle swarm optimization for neural-network-based identification of a small power system with a static compensator. The comparison of the two methods is based on the convergence speed and robustness of each method.

    Enhancement of quantum particle swarm optimization in Elman recurrent network with bounded VMAX function

    BP networks suffer from several drawbacks, such as becoming trapped in local minima and stagnating in certain regions of the search space. To address these problems, Particle Swarm Optimization (PSO) has been applied to improve ANN performance. In this study, we optimize the errors of an Elman Recurrent Neural Network (ERNN) with an enhanced Particle Swarm Optimization method that adds a quantum approach, using a bounded Vmax function. The main role of the Vmax function is to control the global exploration of particles in PSO, while the quantum approach improves the searching ability of each individual particle. On the cancer dataset, Quantum Particle Swarm Optimization in the Elman Recurrent Neural Network (QPSOERN) achieved 96.26 with a bounded Vmax of hyperbolic tangent and 96.35 with a Vmax sigmoid function, both of which furnish promising outcomes and better classification accuracy and convergence rate than the bounded standard Vmax function, which reached only 90.98.
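The abstract above contrasts a hard Vmax clamp with smooth bounds built from the hyperbolic tangent and sigmoid functions. The paper's exact formulation is not reproduced here; the sketch below (illustrative only) shows the general idea of limiting a PSO velocity component in the three ways:

```python
import math

def clamp_standard(v, vmax):
    """Standard Vmax: hard-clip a velocity component to [-vmax, vmax]."""
    return max(-vmax, min(vmax, v))

def clamp_tanh(v, vmax):
    """Bounded Vmax via hyperbolic tangent: smooth squashing into (-vmax, vmax)."""
    return vmax * math.tanh(v / vmax)

def clamp_sigmoid(v, vmax):
    """Bounded Vmax via a sigmoid rescaled to (-vmax, vmax)."""
    return vmax * (2.0 / (1.0 + math.exp(-v / vmax)) - 1.0)

v, vmax = 7.5, 2.0
print(clamp_standard(v, vmax))  # 2.0: hard clip
print(clamp_tanh(v, vmax))      # smooth, strictly inside the bound
print(clamp_sigmoid(v, vmax))   # smooth, strictly inside the bound
```

Small velocities pass through the smooth bounds almost unchanged, while large ones are compressed instead of truncated, which is one way to keep exploration under control without losing gradient-like information in the velocity.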

    DISCRETE PARTICLE SWARM OPTIMIZATION FOR THE ORIENTEERING PROBLEM

    Discrete particle swarm optimization (DPSO) has been gaining popularity in combinatorial optimization due to its simplicity of coding and consistency of performance. A DPSO algorithm has been developed for the orienteering problem (OP), which has been shown to have many practical applications. It uses reduced variable neighborhood search as a local search tool. The DPSO algorithm was compared with ten heuristic models from the literature using benchmark problems. The results show that the DPSO algorithm is robust and can optimally solve the well-known OP test problems.
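For readers unfamiliar with the orienteering problem: a route must visit a subset of scored nodes between fixed start and end points without exceeding a travel budget, and the objective is to maximize the collected score. The following sketch evaluates candidate routes on a hypothetical toy instance (the coordinates, scores, and budget are invented for illustration, not taken from the paper's benchmarks):

```python
import math

# Hypothetical toy instance: node coordinates and prizes.
coords = {0: (0, 0), 1: (1, 2), 2: (3, 1), 3: (4, 4), 4: (5, 0)}
scores = {0: 0, 1: 10, 2: 15, 3: 25, 4: 0}  # start/end nodes carry no score
DMAX = 12.0  # travel budget

def dist(a, b):
    (x1, y1), (x2, y2) = coords[a], coords[b]
    return math.hypot(x2 - x1, y2 - y1)

def evaluate(route):
    """Total collected score, or -inf if the route exceeds the budget."""
    length = sum(dist(u, v) for u, v in zip(route, route[1:]))
    if length > DMAX:
        return float("-inf")
    return sum(scores[n] for n in route)

print(evaluate([0, 1, 3, 4]))  # 35: scores 10 + 25, route fits in the budget
```

A DPSO for the OP would move particles (routes) toward personal and global bests under this objective, with the reduced variable neighborhood search refining promising routes locally.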

    Learning enhancement of radial basis function network with particle swarm optimization

    The back-propagation (BP) algorithm is the most common technique in Artificial Neural Network (ANN) learning, including for the Radial Basis Function Network. However, its major disadvantages are a relatively slow convergence rate and a tendency to become trapped in local minima. To overcome these problems, Particle Swarm Optimization (PSO) has been implemented to enhance ANN learning and increase network performance in terms of convergence rate and accuracy. In a Back Propagation Radial Basis Function Network (BP-RBFN), many elements must be considered, including the number of input nodes, hidden nodes, and output nodes, the learning rate, bias, minimum error, and activation/transfer functions. These elements affect the speed of RBF Network learning. In this study, PSO is incorporated into the RBF Network to enhance its learning performance. Two algorithms have been developed for error optimization: Back Propagation of Radial Basis Function Network (BP-RBFN) and Particle Swarm Optimization of Radial Basis Function Network (PSO-RBFN). The results show that PSO-RBFN gives promising outputs with a faster convergence rate and better classification compared to BP-RBFN.
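The elements listed above (input, hidden, and output nodes, weights, and bias) come together in the RBF network's forward pass. A minimal sketch with the common Gaussian basis function, using illustrative centers, widths, and weights rather than values from the study:

```python
import math

def rbf_forward(x, centers, widths, weights, bias):
    """One output node: weighted sum of Gaussian hidden-unit activations."""
    hidden = [
        math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2 * s ** 2))
        for c, s in zip(centers, widths)
    ]
    return bias + sum(w * h for w, h in zip(weights, hidden))

# Illustrative parameters: two hidden units over 2-D inputs.
centers = [(0.0, 0.0), (1.0, 1.0)]
widths = [0.5, 0.5]
weights = [1.0, -1.0]
print(rbf_forward((0.0, 0.0), centers, widths, weights, bias=0.0))
```

Whether trained by BP or by PSO, it is these centers, widths, weights, and the bias that the learning algorithm adjusts; PSO simply treats them as one flat position vector per particle.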

    A Comparison of PSO and Backpropagation for Training RBF Neural Networks for Identification of a Power System with STATCOM

    The backpropagation algorithm is the most commonly used algorithm for training artificial neural networks. While it is a straightforward procedure, it suffers from extensive computation, relatively slow convergence, and possible divergence under certain conditions. Particle Swarm Optimization (PSO) can be a solution to this problem. It is a population-based stochastic optimization technique developed by J. Kennedy and R. Eberhart in 1995, modeling the cognitive as well as the social behavior of a flock of birds (solutions) flying over an area (the solution space) in search of food (the optimal solution). PSO has been applied to improve neural networks in various aspects, such as network connection weights, network architecture, and learning algorithms, and in recent years several papers have reported replacing the backpropagation algorithm with PSO for some neural network structures [5]-[7]. This paper investigates the efficiency of PSO and BP, in terms of convergence speed and robustness, for training a Radial Basis Function Neural Network (RBFN) on the identification of a small power system with a Static Compensator.
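The cognitive ("return toward my own best") and social ("follow the flock's best") behaviors described above correspond to the two attraction terms in the canonical PSO velocity update. A minimal sketch minimizing a toy sphere function follows; the inertia and acceleration coefficients are common textbook values, not necessarily those used in the paper:

```python
import random

def pso_minimize(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Canonical PSO (Kennedy & Eberhart, 1995) minimizing f over R^dim."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive term
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # social term
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, val = pso_minimize(lambda x: sum(xi ** 2 for xi in x))
print(val)  # close to 0 for the sphere function
```

For neural network training, `f` would be the network's training error and each particle's position the flattened vector of network weights, so no gradient computation is needed.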

    Optimization of ANN Structure Using Adaptive PSO & GA and Performance Analysis Based on Boolean Identities

    In this paper, a novel heuristic structure-optimization technique using Adaptive PSO & GA on Boolean identities is proposed to improve the performance of an Artificial Neural Network (ANN). The selection of the number of hidden layers and nodes has a significant impact on the performance of a neural network, yet it is usually decided in an ad hoc manner, and the optimization of a network's architecture and weights is a complex task. In this regard, evolutionary techniques based on Adaptive Particle Swarm Optimization (APSO) and an Adaptive Genetic Algorithm (AGA) are used to select an optimal number of hidden layers and nodes for the neural controller, yielding better performance and lower training errors on Boolean identities. The hidden nodes are adapted through the generations until they reach the optimal number. Boolean operators such as AND, OR, and XOR are used for the performance analysis of this technique.
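The idea of adapting the hidden-node count through the generations can be sketched with a GA whose chromosome is simply that count. The paper evaluates real networks on Boolean identities; the fitness below is a hypothetical stand-in (error falls with more nodes, then a size penalty dominates), so only the search mechanics are illustrated:

```python
import random

def stand_in_error(n_hidden):
    """Hypothetical fitness: fewer nodes underfit, more nodes add complexity."""
    return 1.0 / n_hidden + 0.05 * n_hidden

def evolve_hidden_nodes(pop_size=20, generations=30, n_max=20):
    """GA over the hidden-node count: elitism, blend crossover, +/-1 mutation."""
    pop = [random.randint(1, n_max) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=stand_in_error)
        parents = pop[: pop_size // 2]            # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) // 2                  # blend crossover
            if random.random() < 0.2:             # mutation: adjust node count
                child = max(1, min(n_max, child + random.choice((-1, 1))))
            children.append(child)
        pop = parents + children
    return min(pop, key=stand_in_error)

print(evolve_hidden_nodes())
```

In the paper's actual setting, evaluating a chromosome means training a network of that size on the Boolean truth tables and measuring its error, which is far more expensive than this stand-in.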

    Job Scheduling with Genetic Algorithm

    In this paper, we use a Genetic Algorithm (GA) approach to solve a Job Scheduling Problem (JSP) of placing 5000 jobs on 806 machines. The GA starts with a randomly generated population of 100 chromosomes, each of which represents a random placement of jobs on machines. The population then goes through reproduction, crossover, and mutation to create a new population for the next generation, until a predefined number of generations is reached. Since the performance of a GA depends on parameters such as population size, crossover rate, and mutation rate, a series of experiments was conducted to identify the parameter combination that achieves good solutions to the JSP by balancing makespan against running time. We found that a crossover rate of 0.3, a mutation rate of 0.15, and a population size of 100 yield the best results.
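The encoding described above can be sketched as follows, scaled down to 50 jobs on 5 machines (the paper uses 5000 and 806) with hypothetical job durations; the crossover rate, mutation rate, and population size are the reported best values:

```python
import random

N_JOBS, N_MACHINES = 50, 5   # scaled down from the paper's 5000 jobs, 806 machines
POP, CX_RATE, MUT_RATE, GENERATIONS = 100, 0.3, 0.15, 60
durations = [random.randint(1, 10) for _ in range(N_JOBS)]  # hypothetical durations

def makespan(chrom):
    """Chromosome = machine index per job; fitness = heaviest machine load."""
    load = [0] * N_MACHINES
    for job, m in enumerate(chrom):
        load[m] += durations[job]
    return max(load)

def evolve():
    pop = [[random.randrange(N_MACHINES) for _ in range(N_JOBS)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        pop.sort(key=makespan)
        nxt = pop[: POP // 2]                     # reproduction: keep better half
        while len(nxt) < POP:
            a, b = random.sample(pop[: POP // 2], 2)
            child = a[:]
            if random.random() < CX_RATE:         # one-point crossover
                cut = random.randrange(1, N_JOBS)
                child = a[:cut] + b[cut:]
            for j in range(N_JOBS):
                if random.random() < MUT_RATE:    # mutation: reassign a job
                    child[j] = random.randrange(N_MACHINES)
            nxt.append(child)
        pop = nxt
    return min(pop, key=makespan)

best = evolve()
print(makespan(best))  # approaches sum(durations) / N_MACHINES, the lower bound
```

The makespan of any schedule can never drop below the total work divided by the number of machines, so the printed value gives a quick sense of how close the GA gets to that bound.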