
    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners across multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers have adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged from FNN optimization practices, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it outlines research challenges for future work to cope with the demands of the present information-processing era.
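    To make the weight-optimization viewpoint concrete, the sketch below evolves the weights of a tiny one-hidden-layer FNN with a (1+1) evolution strategy instead of backpropagation. It is a minimal illustration of the general idea surveyed in the article, not code from it; the network size, mutation step, and XOR toy task are all assumptions.

```python
# Hypothetical sketch: evolving the weights of a one-hidden-layer FNN
# with a simple (1+1) evolution strategy instead of backpropagation.
# All names and settings here are illustrative, not taken from the review.
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: XOR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

N_HIDDEN = 4
N_WEIGHTS = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1  # W1 + b1 + W2 + b2

def forward(w, x):
    """Decode the flat weight vector and run the network."""
    W1 = w[:2 * N_HIDDEN].reshape(2, N_HIDDEN)
    b1 = w[2 * N_HIDDEN:3 * N_HIDDEN]
    W2 = w[3 * N_HIDDEN:4 * N_HIDDEN]
    b2 = w[-1]
    h = np.tanh(x @ W1 + b1)                      # hidden activations
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output

def fitness(w):
    """Mean squared error on the training set (to be minimized)."""
    return np.mean((forward(w, X) - y) ** 2)

# (1+1)-ES: keep a single parent, mutate with Gaussian noise,
# accept the child only if it does not worsen the fitness.
parent = rng.normal(0, 1, N_WEIGHTS)
best = fitness(parent)
sigma = 0.5
for _ in range(5000):
    child = parent + rng.normal(0, sigma, N_WEIGHTS)
    f = fitness(child)
    if f <= best:
        parent, best = child, f

print("final MSE:", best)
print("predictions:", np.round(forward(parent, X), 2))
```

    Any other metaheuristic covered in the review (PSO, differential evolution, genetic algorithms, etc.) could be substituted for the mutate-and-select loop while keeping the same weight encoding and fitness function.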

    A framework of quantum-inspired multi-objective evolutionary algorithms and its convergence properties

    In this paper, a general framework of quantum-inspired multi-objective evolutionary algorithms is proposed, based on the basic principles of quantum computing and the general scheme of multi-objective evolutionary algorithms. A sufficient condition for convergence to the Pareto-optimal set is presented and proved using partially ordered set theory. Moreover, two algorithms that meet this convergence condition, each using an improved Q-gate, are given as examples, and their convergence properties are discussed. Additionally, a counterexample is given.
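    The sketch below illustrates the Q-bit representation and rotation Q-gate that such quantum-inspired algorithms build on, reduced to a single-objective toy problem. It is an assumed, minimal illustration and does not reproduce the paper's multi-objective framework, its improved Q-gates, or its convergence analysis; the rotation step, bit length, and OneMax objective are placeholders.

```python
# Illustrative sketch (not the paper's algorithm): the Q-bit encoding and
# rotation Q-gate that quantum-inspired evolutionary algorithms build on.
# A Q-bit is a pair of amplitudes (alpha, beta) with alpha^2 + beta^2 = 1;
# here each bit is stored as an angle theta so that alpha = cos(theta),
# beta = sin(theta). All parameter values are assumptions for the demo.
import numpy as np

rng = np.random.default_rng(1)
N_BITS = 16
DELTA = 0.05 * np.pi  # rotation angle step (illustrative choice)

def observe(theta):
    """Collapse each Q-bit to 0/1: P(bit = 1) = sin(theta)^2."""
    return (rng.random(theta.shape) < np.sin(theta) ** 2).astype(int)

def onemax(bits):
    """Toy single objective: number of ones (stand-in for a real objective)."""
    return bits.sum()

def rotate(theta, observed, best):
    """Q-gate: rotate each angle toward the corresponding bit of the
    best-so-far solution, biasing future observations toward it."""
    direction = np.where(best > observed, 1.0,
                         np.where(best < observed, -1.0, 0.0))
    return np.clip(theta + DELTA * direction, 0.0, np.pi / 2)

theta = np.full(N_BITS, np.pi / 4)  # start in uniform superposition
best_bits = observe(theta)
for _ in range(200):
    bits = observe(theta)
    if onemax(bits) > onemax(best_bits):
        best_bits = bits
    theta = rotate(theta, bits, best_bits)

print("best fitness:", onemax(best_bits), "of", N_BITS)
```

    A multi-objective variant along the lines of the proposed framework would replace the single best-so-far solution with an archive of non-dominated solutions and select the rotation target from that archive.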

    A particle swarm optimization based memetic algorithm for dynamic optimization problems

    Copyright @ Springer Science + Business Media B.V. 2010. Recently, there has been increasing interest in dynamic optimization problems within the evolutionary computation community, since many real-world optimization problems are dynamic. This paper investigates a particle swarm optimization (PSO) based memetic algorithm that hybridizes PSO with a local search technique for dynamic optimization problems. Within the framework of the proposed algorithm, a local version of PSO with a ring-shaped neighborhood topology is used as the global search operator, and a fuzzy cognition local search method is proposed as the local search technique. In addition, a self-organized random immigrants scheme is incorporated into the proposed algorithm to further enhance its ability to explore new peaks in the search space. An experimental study on the moving peaks benchmark problem shows that the proposed PSO-based memetic algorithm is robust and adaptable in dynamic environments. This work was supported by the National Nature Science Foundation of China (NSFC) under Grant No. 70431003 and Grant No. 70671020, the National Innovation Research Community Science Foundation of China under Grant No. 60521003, the National Support Plan of China under Grant No. 2006BAH02A09, the Ministry of Education, Science and Technology of Korea through the Second Phase of the Brain Korea 21 Project in 2009, the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant EP/E060722/01, and the Hong Kong Polytechnic University Research Grants under Grant G-YH60.
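    As a rough illustration of the global search operator described above, the sketch below implements a local-best (lbest) PSO with a ring-shaped neighborhood topology on a static placeholder objective. It is an assumption-laden sketch, not the authors' algorithm: the fuzzy cognition local search, the self-organized random immigrants scheme, and the moving peaks benchmark are omitted, and all constants are illustrative.

```python
# Minimal sketch (not the authors' code) of an lbest PSO: each particle's
# social attractor is the best position within a ring-shaped neighborhood
# rather than a single global best. sphere() and all constants below are
# placeholders; the memetic components from the paper are omitted.
import numpy as np

rng = np.random.default_rng(2)
N_PARTICLES, DIM, ITERS = 20, 10, 500
W, C1, C2 = 0.729, 1.49445, 1.49445  # commonly used PSO coefficients

def sphere(x):
    """Stand-in static objective; a real run would use the moving peaks benchmark."""
    return np.sum(x ** 2, axis=-1)

pos = rng.uniform(-5, 5, (N_PARTICLES, DIM))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = sphere(pbest)

for _ in range(ITERS):
    # Ring topology: the neighborhood of particle i is {i-1, i, i+1} (indices wrap).
    neigh = np.stack([np.roll(pbest_val, 1), pbest_val, np.roll(pbest_val, -1)])
    choice = np.argmin(neigh, axis=0)                 # which neighbor holds the best pbest
    idx = (np.arange(N_PARTICLES) + (choice - 1)) % N_PARTICLES
    lbest = pbest[idx]                                # neighborhood best per particle

    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (lbest - pos)
    pos = pos + vel

    val = sphere(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]

print("best value found:", pbest_val.min())
```

    In a dynamic setting, the personal-best memories would typically be re-evaluated or reset whenever the environment changes; that machinery, along with the local search and random immigrants, is left out of this sketch.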