
    Multi-Guide Particle Swarm Optimization for Large-Scale Multi-Objective Optimization Problems

    Multi-guide particle swarm optimization (MGPSO) is a novel metaheuristic for multi-objective optimization based on particle swarm optimization (PSO). MGPSO has been shown to be competitive with other state-of-the-art multi-objective optimization algorithms on low-dimensional problems. However, to the best of the author's knowledge, the suitability of MGPSO for high-dimensional multi-objective optimization problems has not been studied. One goal of this thesis is to provide a scalability study of MGPSO in order to evaluate its efficacy on high-dimensional multi-objective optimization problems. It is observed that, while MGPSO performs comparably to state-of-the-art multi-objective optimization algorithms, its performance drops as the problem dimensionality increases. The main contribution of this work is therefore a new scalable MGPSO-based algorithm, termed cooperative co-evolutionary multi-guide particle swarm optimization (CCMGPSO), which incorporates ideas from cooperative PSOs. A detailed empirical study on well-known benchmark problems compares the proposed approach with various state-of-the-art multi-objective optimization algorithms. Results show that the proposed CCMGPSO is highly competitive on high-dimensional problems.
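    The abstract does not give the update equations, but a minimal sketch of the kind of multi-guide velocity update MGPSO builds on is shown below, assuming the common three-guide formulation (personal best, social best, and a guide drawn from an archive of non-dominated solutions, balanced by an exploitation trade-off coefficient); all names and parameter values are illustrative, not taken from the thesis.

```python
# Minimal sketch of a multi-guide velocity/position update for one particle.
# Guide weights (c1, c2, c3), inertia w, and the exploitation trade-off
# coefficient lam are illustrative values, not those used in the thesis.
import numpy as np

def mgpso_step(x, v, pbest, nbest, archive_guide,
               w=0.7298, c1=1.4962, c2=1.4962, c3=1.4962, lam=0.5):
    """One update combining three guides: the particle's personal best,
    its social (neighbourhood) best, and a guide from an archive of
    non-dominated solutions."""
    r1, r2, r3 = (np.random.rand(x.size) for _ in range(3))
    v = (w * v
         + c1 * r1 * (pbest - x)
         + lam * c2 * r2 * (nbest - x)
         + (1.0 - lam) * c3 * r3 * (archive_guide - x))
    return x + v, v
```

    A cooperative co-evolutionary variant such as CCMGPSO would additionally split the decision vector across several sub-swarms, each optimizing only its assigned subset of dimensions, which is what makes the approach attractive for high-dimensional problems.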

    Resolution of the stochastic strategy spatial prisoner's dilemma by means of particle swarm optimization

    We study the evolution of cooperation among selfish individuals in the stochastic strategy spatial prisoner's dilemma game. We equip players with the particle swarm optimization technique, and find that it may lead to highly cooperative states even if the temptations to defect are strong. The concept of particle swarm optimization was originally introduced within a simple model of social dynamics that can describe the formation of a swarm, i.e., analogous to a swarm of bees searching for a food source. Essentially, particle swarm optimization prescribes changes in the velocity profile of each player, such that the best locations are targeted and eventually occupied. In our case, each player keeps track of the highest payoff attained within a local topological neighborhood and its individual highest payoff. Thus, players make use of their own memory, which keeps score of the most profitable strategy in previous actions, as well as of the knowledge gained by the swarm as a whole, to find the best available strategy for themselves and the society. Following extensive simulations of this setup, we find a significant increase in the level of cooperation for a wide range of parameters, and also a full resolution of the prisoner's dilemma. We also demonstrate extreme efficiency of the optimization algorithm when dealing with environments that strongly favor the proliferation of defection, which in turn suggests that swarming could be an important phenomenon by means of which cooperation can be sustained even under highly unfavorable conditions. We thus present an alternative way of understanding the evolution of cooperative behavior and its ubiquitous presence in nature, and we hope that this study will be inspirational for future efforts aimed in this direction.
    Comment: 12 pages, 4 figures; accepted for publication in PLoS ONE
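    As a rough illustration of the mechanism described above, the sketch below updates a single player's stochastic strategy (its cooperation probability) with a PSO-style velocity pulled toward the player's own best-remembered strategy and the best strategy observed in its local neighbourhood; variable names and coefficient values are illustrative assumptions, not taken from the paper.

```python
# Sketch of one PSO-style strategy update in the stochastic spatial
# prisoner's dilemma; coefficients w, c1, c2 are illustrative only.
import numpy as np

def update_strategy(p, v, p_personal_best, p_neighbour_best,
                    w=0.7, c1=1.5, c2=1.5):
    """Update one player's cooperation probability p. The player is pulled
    toward the strategy that earned its own highest payoff so far and toward
    the best strategy seen in its local neighbourhood; the result is clipped
    back into the valid range [0, 1]."""
    r1, r2 = np.random.rand(2)
    v = (w * v
         + c1 * r1 * (p_personal_best - p)
         + c2 * r2 * (p_neighbour_best - p))
    return float(np.clip(p + v, 0.0, 1.0)), v
```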

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners across multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a well-generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that emerged from FNN optimization practices, such as evolving neural networks (NNs), cooperative coevolutionary NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it outlines research challenges for future work to cope with the present information-processing era.
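    To make the weight-optimization perspective concrete, here is a minimal sketch of one metaheuristic approach of the kind the review covers: a basic global-best PSO searching the flattened weight vector of a tiny one-hidden-layer network. The network size, parameter values, and function names are illustrative assumptions, not taken from the article.

```python
# Sketch: gradient-free FNN weight training with a global-best PSO.
import numpy as np

def fnn_forward(x, flat_w, n_in=2, n_hidden=4, n_out=1):
    """Tiny one-hidden-layer FNN whose weights arrive as one flat vector,
    so a metaheuristic can treat them as a single search-space position."""
    i = 0
    W1 = flat_w[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = flat_w[i:i + n_hidden]; i += n_hidden
    W2 = flat_w[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = flat_w[i:i + n_out]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output

def mse(flat_w, X, y):
    """Fitness of one candidate weight vector: mean squared error on (X, y)."""
    return float(np.mean((fnn_forward(X, flat_w).ravel() - y) ** 2))

def pso_train(X, y, dim, n_particles=20, iters=200, w=0.72, c1=1.49, c2=1.49):
    """Global-best PSO minimising the network's MSE; no gradients needed."""
    rng = np.random.default_rng(0)
    pos = rng.uniform(-1.0, 1.0, (n_particles, dim))  # candidate weight vectors
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([mse(p, X, y) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([mse(p, X, y) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())
```

    With the defaults above the flat weight vector has 2*4 + 4 + 4*1 + 1 = 17 entries, so a call such as `pso_train(X, y, dim=17)` returns the best weights found and their mean squared error on `(X, y)`.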