
    Genetic learning particle swarm optimization

    Social learning in particle swarm optimization (PSO) helps collective efficiency, whereas individual reproduction in genetic algorithms (GAs) facilitates global effectiveness. This observation has recently led to hybridizing PSO with GA for performance enhancement. However, existing work uses a mechanistic parallel superposition, and research has shown that constructing superior exemplars in PSO is more effective. Hence, this paper first develops a new framework for organically hybridizing PSO with another optimization technique for “learning.” This leads to a generalized “learning PSO” paradigm, the *L-PSO. The paradigm is composed of two cascading layers, the first for exemplar generation and the second for particle updates as in a normal PSO algorithm. Using genetic evolution to breed promising exemplars for PSO, a specific novel *L-PSO algorithm is proposed in the paper, termed genetic learning PSO (GL-PSO). In particular, genetic operators are used to generate exemplars from which particles learn and, in turn, the historical search information of the particles guides the evolution of the exemplars. By performing crossover, mutation, and selection on the historical information of the particles, the constructed exemplars are not only well diversified but also of high quality. Under such guidance, both the global search ability and the search efficiency of PSO are enhanced. The proposed GL-PSO is tested on 42 benchmark functions widely adopted in the literature. Experimental results verify the effectiveness, efficiency, robustness, and scalability of GL-PSO.
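    The abstract describes a two-layer design: a genetic layer breeds exemplars from the particles' historical bests, and a PSO layer moves particles toward those exemplars. The sketch below illustrates that idea under explicit assumptions: the crossover, mutation, and selection operators, the parameter values (w, c, pm), the search range, and the sphere objective are illustrative choices, not the paper's exact GL-PSO design.

```python
import numpy as np

def sphere(x):
    """Benchmark objective (assumption: minimization of the sphere function)."""
    return np.sum(x ** 2)

def gl_pso_sketch(f, dim=10, pop=30, iters=200, w=0.7, c=1.5, pm=0.01, seed=0):
    """Minimal sketch of the two-layer *L-PSO idea behind GL-PSO:
    a genetic layer breeds exemplars from particles' historical bests,
    and a standard PSO layer updates particles toward those exemplars.
    Operator details and parameter values are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (pop, dim))
    v = np.zeros((pop, dim))
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in pbest])
    gbest = pbest[pbest_f.argmin()].copy()
    exemplar = pbest.copy()
    exemplar_f = pbest_f.copy()

    for _ in range(iters):
        for i in range(pop):
            # Genetic layer: dimension-wise arithmetic crossover between this
            # particle's personal best and the global best breeds a candidate exemplar.
            r = rng.random(dim)
            child = r * pbest[i] + (1 - r) * gbest
            # Mutation: occasionally re-sample a dimension within the search range.
            mut = rng.random(dim) < pm
            child[mut] = rng.uniform(-5, 5, mut.sum())
            # Selection: keep the better of the old and the new exemplar.
            cf = f(child)
            if cf < exemplar_f[i]:
                exemplar[i], exemplar_f[i] = child, cf
        # PSO layer: particles learn from their exemplars.
        r = rng.random((pop, dim))
        v = w * v + c * r * (exemplar - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

best_x, best_f = gl_pso_sketch(sphere)
print(best_f)
```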

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners across multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it poses interesting research challenges for future work in the present information processing era.
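    As a concrete illustration of the metaheuristic route this review surveys, the sketch below encodes the weights of a one-hidden-layer FNN as a flat vector and trains it with a simple population-based search instead of gradient descent. The network layout, the (mu + lambda)-style evolution loop, and all parameter values are assumptions for illustration; any of the surveyed metaheuristics (GA, PSO, ABC, ...) could take the place of the evolution loop.

```python
import numpy as np

def mlp_forward(params, X, n_in, n_hidden, n_out):
    """Decode a flat weight vector into a one-hidden-layer FNN (tanh hidden,
    linear output). Layout assumption: [W1 | b1 | W2 | b2]."""
    i = 0
    W1 = params[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = params[i:i + n_hidden]; i += n_hidden
    W2 = params[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = params[i:i + n_out]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def evolve_fnn(X, y, n_hidden=8, pop=40, gens=300, sigma=0.1, seed=0):
    """Sketch of metaheuristic FNN training: a simple (mu + lambda)-style
    evolution over the flattened weight vector, minimizing mean squared error."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], y.shape[1]
    n_params = n_in * n_hidden + n_hidden + n_hidden * n_out + n_out
    popn = rng.normal(0, 0.5, (pop, n_params))
    def mse(p):
        return np.mean((mlp_forward(p, X, n_in, n_hidden, n_out) - y) ** 2)
    fit = np.array([mse(p) for p in popn])
    for _ in range(gens):
        parents = popn[np.argsort(fit)[:pop // 2]]                 # truncation selection
        children = parents + rng.normal(0, sigma, parents.shape)   # Gaussian mutation
        popn = np.vstack([parents, children])
        fit = np.array([mse(p) for p in popn])
    return popn[fit.argmin()], fit.min()

# Tiny usage example: fit y = sin(x) on a coarse grid (illustrative only).
X = np.linspace(-3, 3, 40).reshape(-1, 1)
y = np.sin(X)
best_w, best_err = evolve_fnn(X, y)
print(best_err)
```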

    A new neural network training algorithm based on artificial bee colony algorithm for nonlinear system identification

    Artificial neural networks (ANNs), one of the most important artificial intelligence techniques, are used extensively in modeling many types of problems. Creating effective models with an ANN requires a successful training process, and an effective training algorithm is essential to that process. In this study, a new neural network training algorithm called the hybrid artificial bee colony algorithm based on an effective scout bee stage (HABCES) is proposed. The HABCES algorithm includes four fundamental changes. Arithmetic crossover is used in the solution generation mechanisms of the employed bee and onlooker bee stages, exploiting knowledge of the global best solution; this solution generation mechanism also has an adaptive step size. The limit is an important control parameter: in the standard ABC algorithm it is constant throughout the optimization, whereas in the HABCES algorithm it is determined dynamically depending on the number of generations. Unlike the standard ABC algorithm, the HABCES algorithm uses a solution generation mechanism based on the global best solution in the scout bee stage. Through these features, the HABCES algorithm has strong local and global convergence ability. First, the performance of the HABCES algorithm was analyzed on global optimization problems. Then, applications to the training of ANNs were carried out: an ANN was trained with the HABCES algorithm for the identification of nonlinear static and dynamic systems. The performance of the HABCES algorithm was compared with the standard ABC, aABC, and ABCES algorithms. The results showed that the HABCES algorithm performed better in terms of solution quality and convergence speed. A performance increase of up to 69.57% was achieved by using the HABCES algorithm in the identification of static systems; this rate is 46.82% for the identification of dynamic systems.
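    The abstract lists the four modifications but not their equations. The sketch below shows one plausible reading of an ABC variant with those ingredients: gbest-guided arithmetic crossover in the employed and onlooker phases, a step size that shrinks over generations, a generation-dependent limit, and a gbest-guided scout phase. The concrete formulas and parameter schedules here are assumptions, not the published HABCES update rules.

```python
import numpy as np

def abc_gbest_sketch(f, dim=10, n_food=20, iters=300, lb=-5.0, ub=5.0, seed=0):
    """Sketch of an ABC variant with the ingredients described in the abstract.
    The update equations, step-size schedule, and limit schedule are assumed."""
    rng = np.random.default_rng(seed)
    foods = rng.uniform(lb, ub, (n_food, dim))
    fit = np.array([f(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)
    gbest = foods[fit.argmin()].copy()

    def try_move(i, step):
        # Arithmetic crossover with gbest plus a differential-style perturbation.
        k = rng.integers(n_food - 1)
        k = k if k < i else k + 1                       # partner food source, k != i
        r = rng.random(dim)
        cand = r * foods[i] + (1 - r) * gbest
        cand += step * rng.uniform(-1, 1, dim) * (foods[i] - foods[k])
        cand = np.clip(cand, lb, ub)
        cf = f(cand)
        if cf < fit[i]:
            foods[i], fit[i] = cand, cf
            trials[i] = 0
        else:
            trials[i] += 1

    for t in range(iters):
        step = 1.0 - t / iters                          # assumed adaptive step size
        limit = 10 + int(40 * t / iters)                # assumed generation-dependent limit
        for i in range(n_food):                         # employed bee stage
            try_move(i, step)
        probs = fit.max() - fit + 1e-12                 # fitness-proportional selection
        probs = probs / probs.sum()
        for i in rng.choice(n_food, n_food, p=probs):   # onlooker bee stage
            try_move(i, step)
        worst = trials.argmax()                         # scout bee: gbest-guided restart
        if trials[worst] > limit:
            foods[worst] = np.clip(gbest + step * rng.normal(0, 1, dim), lb, ub)
            fit[worst] = f(foods[worst])
            trials[worst] = 0
        gbest = foods[fit.argmin()].copy()
    return gbest, fit.min()

best, best_f = abc_gbest_sketch(lambda x: np.sum(x ** 2))
print(best_f)
```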

    Population-Based Optimization Algorithms for Solving the Travelling Salesman Problem

    [Extract] Population-based optimization algorithms are techniques belonging to the family of nature-inspired optimization algorithms. The creatures and natural systems that work and develop in nature are an interesting and valuable source of inspiration for designing and inventing new systems and algorithms in different fields of science and technology. Evolutionary Computation (Eiben & Smith, 2003), Neural Networks (Haykin, 1999), Time Adaptive Self-Organizing Maps (Shah-Hosseini, 2006), Ant Systems (Dorigo & Stutzle, 2004), Particle Swarm Optimization (Eberhart & Kennedy, 1995), Simulated Annealing (Kirkpatrick, 1984), Bee Colony Optimization (Teodorovic et al., 2006), and DNA Computing (Adleman, 1994) are among the problem-solving techniques inspired by observing nature. In this chapter, population-based optimization algorithms are introduced. Some of these algorithms were mentioned above; others are the Intelligent Water Drops (IWD) algorithm (Shah-Hosseini, 2007), Artificial Immune Systems (AIS) (Dasgupta, 1999), and Electromagnetism-like Mechanisms (EM) (Birbil & Fang, 2003). Each section of this chapter briefly introduces one of these population-based optimization algorithms and applies it to solving the TSP. We also note the important points of each algorithm and state every point we contribute to these algorithms. Section nine shows experimental results for the algorithms introduced in the previous sections, implemented to solve different TSP instances using well-known datasets.
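    As one minimal, self-contained example of the population-based approach to the TSP, the sketch below uses a genetic algorithm with permutation encoding, tournament selection, order crossover, and swap mutation. It is not drawn from the chapter; the chapter's other algorithms (ACO, IWD, PSO, EM, AIS, ...) differ in their operators but share the same population-of-tours structure.

```python
import numpy as np

def tour_length(tour, dist):
    """Total length of a closed tour given a distance matrix."""
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def ga_tsp_sketch(dist, pop=60, gens=400, pm=0.2, seed=0):
    """Minimal sketch of a population-based TSP solver: a GA over permutations."""
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    popn = [rng.permutation(n) for _ in range(pop)]
    for _ in range(gens):
        fit = np.array([tour_length(t, dist) for t in popn])
        new = [popn[fit.argmin()].copy()]                    # elitism: keep best tour
        while len(new) < pop:
            # Tournament selection of two parents (tournament size 3).
            a, b = (popn[min(rng.integers(pop, size=3), key=lambda i: fit[i])]
                    for _ in range(2))
            # Order crossover (OX): copy a slice from parent a, fill the rest from b.
            i, j = sorted(rng.integers(n, size=2))
            child = -np.ones(n, dtype=int)
            child[i:j + 1] = a[i:j + 1]
            rest = [c for c in b if c not in child]
            child[:i] = rest[:i]
            child[j + 1:] = rest[i:]
            # Swap mutation with probability pm.
            if rng.random() < pm:
                p, q = rng.integers(n, size=2)
                child[p], child[q] = child[q], child[p]
            new.append(child)
        popn = new
    fit = np.array([tour_length(t, dist) for t in popn])
    return popn[fit.argmin()], fit.min()

# Usage on random city coordinates (illustrative only).
pts = np.random.default_rng(1).random((15, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
tour, length = ga_tsp_sketch(dist)
print(length)
```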