
    A clustering particle swarm optimizer for locating and tracking multiple optima in dynamic environments

    This article is posted here with permission from the IEEE. Copyright @ 2010 IEEE. In the real world, many optimization problems are dynamic. This requires an optimization algorithm not only to find the global optimal solution under a specific environment but also to track the trajectory of the changing optima across dynamic environments. To address this requirement, this paper investigates a clustering particle swarm optimizer (PSO) for dynamic optimization problems. The algorithm employs a hierarchical clustering method to locate and track multiple peaks. A fast local search method is also introduced to search for optimal solutions in a promising subregion found by the clustering method. An experimental study is conducted on the moving peaks benchmark to compare the performance of the clustering PSO with several state-of-the-art algorithms from the literature. The results show the efficiency of the clustering PSO for locating and tracking multiple optima in dynamic environments in comparison with other PSO models based on the multi-swarm method. This work was supported by the Engineering and Physical Sciences Research Council of the U.K. under Grant EP/E060722/1.
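
    The abstract names the mechanism (hierarchical clustering to carve the swarm into sub-swarms, one per peak) without giving code. Below is a minimal sketch of that partitioning step, assuming single-linkage clustering and a distance cutoff `radius`; both choices are illustrative, not details taken from the paper.

```python
# Hedged sketch: split one swarm into sub-swarms with off-the-shelf
# hierarchical clustering. Each resulting cluster would then be searched
# by its own PSO, with a local search refining the most promising cluster.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def split_into_subswarms(positions, radius):
    """Group particle positions into sub-swarms by single-linkage clustering."""
    tree = linkage(positions, method="single")             # agglomerative tree
    labels = fcluster(tree, t=radius, criterion="distance")
    return [positions[labels == k] for k in np.unique(labels)]

# Example: 30 particles in a 2-D landscape, grouped with an assumed cutoff.
rng = np.random.default_rng(0)
subswarms = split_into_subswarms(rng.random((30, 2)), radius=0.3)
print([len(s) for s in subswarms])
```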

    A general framework of multi-population methods with clustering in undetectable dynamic environments

    Copyright @ 2011 IEEE. To solve dynamic optimization problems, multi-population methods are used to enhance population diversity, with the aim of maintaining multiple populations in different sub-areas of the fitness landscape. Many experimental studies have shown that locating and tracking multiple relatively good optima, rather than a single global optimum, is an effective approach in dynamic environments. However, several challenges must be addressed when multi-population methods are applied, e.g., how to create multiple populations, how to maintain them in different sub-areas, and how to deal with situations where changes cannot be detected or predicted. To address these issues, this paper investigates a hierarchical clustering method to locate and track multiple optima for dynamic optimization problems. To deal with undetectable dynamic environments, the paper applies the random immigrants method without change detection, based on a mechanism that automatically reduces redundant individuals in the search space throughout the run. These methods are implemented in several algorithms, including particle swarm optimization, genetic algorithms, and differential evolution. An experimental study is conducted on the moving peaks benchmark to compare the performance with several other algorithms from the literature. The results show the efficiency of the clustering method for locating and tracking multiple optima in comparison with other multi-population algorithms on the moving peaks benchmark.
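
    The two moving parts described here, random immigrants applied every generation (so no change detection is needed) and automatic removal of redundant individuals, can be sketched independently of the host algorithm. The replacement rate and distance threshold below are assumptions for illustration, not the paper's settings.

```python
# Hedged sketch: change-detection-free random immigrants plus pruning of
# near-duplicate individuals, so the immigrants do not pile up redundancy.
import numpy as np

def immigrate_and_prune(pop, lo, hi, rate=0.1, min_dist=1e-3, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    n, d = pop.shape
    k = max(1, int(rate * n))
    pop[-k:] = rng.uniform(lo, hi, size=(k, d))    # unconditional immigrants
    keep = []                                      # drop near-duplicates
    for i in range(n):
        if all(np.linalg.norm(pop[i] - pop[j]) >= min_dist for j in keep):
            keep.append(i)
    return pop[keep]
```

    Because the replacement runs unconditionally every generation, no change-detection test is ever required.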

    Step-Optimized Particle Swarm Optimization

    Particle swarm optimization (PSO) is widely used in industrial and academic research to solve optimization problems. Recent developments in PSO point towards adaptive PSO (APSO). APSO changes its behaviour during the optimization process based on information gathered at each iteration. It has been shown that APSO is able to solve a wide range of difficult optimization problems efficiently and effectively. In classical PSO, all parameters are fixed for the entire swarm; in particular, all particles share the same settings of their velocity weights. We propose four APSO variants in which every particle has its own velocity weights. We use PSO to optimize the settings of the velocity weights of every particle at every iteration, thereby creating a step-optimized PSO (SOPSO). We implement four known PSO variants (global best PSO, decreasing weight PSO, time-varying acceleration coefficients PSO, and guaranteed convergence PSO) and four proposed APSO variants (SOPSO, moving bounds SOPSO, repulsive SOPSO, and moving bound repulsive SOPSO) in a PSO software package, which is used to compare the performance of the PSO and APSO variants on 22 benchmark problems. Test results show that the proposed APSO variants outperform the known PSO variants on difficult optimization problems that require large numbers of function evaluations. This suggests that the SOPSO strategy of optimizing the settings of the velocity weights of every particle improves the robustness and performance of PSO.
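
    The key structural change relative to classical PSO is that the inertia and acceleration weights become per-particle quantities. A minimal sketch of such a velocity update follows; the inner PSO that SOPSO uses to optimize these weights at each step is omitted, and the array shapes are assumptions for illustration.

```python
# Hedged sketch: the standard PSO velocity update, vectorized, but with
# per-particle weights w, c1, c2 of shape (n, 1) instead of shared scalars.
import numpy as np

def velocity_update(v, x, pbest, gbest, w, c1, c2, rng):
    """v, x, pbest: (n, d) arrays; gbest: (d,); w, c1, c2: (n, 1) arrays."""
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```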

    A self-learning particle swarm optimizer for global optimization problems

    Copyright @ 2011 IEEE. All Rights Reserved. This article was made available through the Brunel Open Access Publishing Fund. Particle swarm optimization (PSO) has been shown to be an effective tool for solving global optimization problems. So far, most PSO algorithms use a single learning pattern for all particles, meaning that all particles in a swarm follow the same strategy. This monotonic learning pattern may limit the intelligence of an individual particle, leaving it unable to deal with different complex situations. This paper presents a novel algorithm, called the self-learning particle swarm optimizer (SLPSO), for global optimization problems. In SLPSO, each particle has a set of four strategies to cope with different situations in the search space. The cooperation of the four strategies is implemented by an adaptive learning framework at the individual level, which enables a particle to choose the optimal strategy according to its own local fitness landscape. An experimental study on a set of 45 test functions and two real-world problems shows that SLPSO achieves superior performance in comparison with several peer algorithms. This work was supported by the Engineering and Physical Sciences Research Council of the U.K. under Grants EP/E060722/1 and EP/E060722/2.
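
    The individual-level adaptive framework can be pictured as each particle carrying its own probability distribution over the four strategies and shifting mass toward whichever strategy has recently paid off. The sketch below uses a simple multiplicative reward rule with an assumed learning rate; the paper's actual update scheme may differ.

```python
# Hedged sketch: per-particle strategy selection in the spirit of SLPSO.
import numpy as np

class StrategySelector:
    def __init__(self, n_strategies=4, lr=0.1, rng=None):
        self.p = np.full(n_strategies, 1.0 / n_strategies)
        self.lr = lr
        self.rng = rng if rng is not None else np.random.default_rng()

    def choose(self):
        # Sample a strategy index according to the current probabilities.
        return self.rng.choice(len(self.p), p=self.p)

    def reward(self, k, improved):
        # Nudge strategy k up on fitness improvement, down otherwise,
        # then renormalize so the weights remain a distribution.
        self.p[k] *= (1 + self.lr) if improved else (1 - self.lr)
        self.p /= self.p.sum()
```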

    Simple and Adaptive Particle Swarms

    The substantial advances made to both the theoretical and practical aspects of particle swarm optimization over the past 10 years have taken it far beyond its original intent as a biological swarm simulation. This thesis details and explains these advances in the context of what has been achieved to this point, as well as what has yet to be understood or solidified within the research community. Taking into account the state of the modern field, a standardized PSO algorithm is defined for benchmarking and comparative purposes, both within the work and for the community as a whole. This standard is refined and simplified over several iterations into a form that does away with potentially undesirable properties of the standard algorithm while retaining equivalent or superior performance on the common set of benchmarks. This refinement, referred to as a discrete recombinant swarm (PSO-DRS), requires only a single user-defined parameter in the positional update equation and uses minimal additive stochasticity rather than the multiplicative stochasticity inherent in the standard PSO. After a mathematical analysis of the PSO-DRS algorithm, an adaptive framework is developed and rigorously tested, demonstrating the effects of the tunable particle- and swarm-level parameters. This adaptability shows practical benefit by broadening the range of problems which the PSO-DRS algorithm is well-suited to optimize.
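
    A discrete recombinant positional update can be sketched as: copy each coordinate of the new position from the personal best of a randomly chosen informant, then apply a small additive perturbation. Below, `sigma` stands in for the single user-defined parameter mentioned in the abstract; the thesis's exact update may differ from this reading.

```python
# Hedged sketch: discrete recombination over informants' personal bests,
# with additive (not multiplicative) stochasticity.
import numpy as np

def drs_position(pbests, informants, rng, sigma=0.05):
    """pbests: (n, d) personal bests; informants: indices informing a particle."""
    d = pbests.shape[1]
    donors = rng.choice(informants, size=d)        # one donor per dimension
    recombined = pbests[donors, np.arange(d)]
    return recombined + rng.normal(0.0, sigma, size=d)  # additive noise
```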

    Improving a Particle Swarm Optimization-based Clustering Method

    This thesis discusses clustering-related work with an emphasis on Particle Swarm Optimization (PSO) principles. Specifically, we review in detail the PSO clustering algorithm proposed by Van Der Merwe & Engelbrecht, the particle swarm clustering (PSC) algorithm proposed by Cohen & de Castro, Szabo's modified PSC (mPSC), and Georgieva & Engelbrecht's Cooperative Multi-Population PSO (CMPSO). In this thesis, an improvement over Van Der Merwe & Engelbrecht's PSO clustering is proposed and tested on standard datasets. The improvements observed in these experiments vary from slight to moderate, both in terms of minimizing the cost function and in terms of run time.
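
    In Van Der Merwe & Engelbrecht's formulation, a particle encodes a full set of K cluster centroids, and its fitness is usually described as the quantization error: the average, over non-empty clusters, of the mean distance from each data point to its nearest centroid. A minimal sketch of that fitness follows; the exact variant improved upon in the thesis may differ.

```python
# Hedged sketch: quantization-error fitness for PSO-based clustering,
# where one particle encodes the (K, d) matrix of centroids.
import numpy as np

def quantization_error(centroids, data):
    """centroids: (K, d); data: (m, d); lower is better."""
    dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)                 # nearest centroid per point
    errors = []
    for k in range(len(centroids)):
        members = dists[nearest == k, k]
        if members.size:                           # skip empty clusters
            errors.append(members.mean())
    return float(np.mean(errors))
```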