23,365 research outputs found

    Adaptive particle swarm optimization

    An adaptive particle swarm optimization (APSO) that features better search efficiency than classical particle swarm optimization (PSO) is presented. More importantly, it can perform a global search over the entire search space with faster convergence speed. The APSO consists of two main steps. First, by evaluating the population distribution and particle fitness, a real-time evolutionary state estimation procedure is performed to identify, in each generation, one of four defined evolutionary states: exploration, exploitation, convergence, and jumping out. This enables automatic control of the inertia weight, acceleration coefficients, and other algorithmic parameters at run time to improve search efficiency and convergence speed. Second, an elitist learning strategy is performed when the evolutionary state is classified as convergence. The strategy acts on the globally best particle to help it jump out of likely local optima. APSO has been comprehensively evaluated on 12 unimodal and multimodal benchmark functions, and the effects of parameter adaptation and elitist learning are studied. Results show that APSO substantially enhances the performance of the PSO paradigm in terms of convergence speed, global optimality, solution accuracy, and algorithm reliability. As APSO introduces only two new parameters to the PSO paradigm, it does not add design or implementation complexity.
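    As a rough illustration of the idea in this abstract, the sketch below estimates an evolutionary factor from the population distribution, maps it to an inertia weight, and Gaussian-perturbs the global best as an elitist learning step. It is a minimal sketch, not the paper's exact rules: the sigmoid constants, the perturbation scheme, and all parameter defaults are assumptions.

```python
import numpy as np

def evolutionary_factor(positions, gbest_idx):
    """Estimate the evolutionary state from the population distribution.

    positions: (N, D) array of particle positions; gbest_idx: index of the
    globally best particle. Returns f in [0, 1]: small when the swarm has
    contracted around the best particle (convergence), large when the best
    particle sits far from the crowd (exploration / jumping out).
    """
    dists = np.array([np.linalg.norm(positions - p, axis=1).mean() for p in positions])
    d_g, d_min, d_max = dists[gbest_idx], dists.min(), dists.max()
    return float((d_g - d_min) / (d_max - d_min + 1e-12))

def adaptive_inertia(f):
    """Map the evolutionary factor to an inertia weight in roughly [0.4, 0.9]
    (sigmoid form; the constants here are illustrative)."""
    return 1.0 / (1.0 + 1.5 * np.exp(-2.6 * f))

def elitist_learning(gbest, bounds, sigma=0.1):
    """Gaussian-perturb one randomly chosen dimension of the global best so it
    can escape a likely local optimum; the caller keeps the candidate only if
    it improves the fitness. bounds: sequence of (low, high) pairs, one per
    dimension; sigma is an illustrative default."""
    candidate = gbest.copy()
    d = np.random.randint(len(gbest))
    lo, hi = bounds[d]
    candidate[d] = np.clip(candidate[d] + (hi - lo) * np.random.normal(0.0, sigma), lo, hi)
    return candidate
```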

    Orthogonal learning particle swarm optimization

    Particle swarm optimization (PSO) relies on its learning strategy to guide its search direction. Traditionally, each particle utilizes its historical best experience and its neighborhood’s best experience through linear summation. Such a learning strategy is easy to use but is inefficient when searching in complex problem spaces. Hence, designing learning strategies that can utilize previous search information (experience) more efficiently has become one of the most salient and active PSO research topics. In this paper, we propose an orthogonal learning (OL) strategy for PSO to discover more useful information that lies in the above two experiences via orthogonal experimental design. We name this PSO orthogonal learning particle swarm optimization (OLPSO). The OL strategy can guide particles to fly in better directions by constructing a more promising and efficient exemplar. The OL strategy can be applied to PSO with any topological structure. In this paper, it is applied to both the global and local versions of PSO, yielding the OLPSO-G and OLPSO-L algorithms, respectively. The new learning strategy and the new algorithms are tested on a set of 16 benchmark functions and compared with other PSO algorithms and some state-of-the-art evolutionary algorithms. The experimental results illustrate the effectiveness and efficiency of the proposed learning strategy and algorithms. The comparisons show that OLPSO significantly improves the performance of PSO, offering faster global convergence, higher solution quality, and stronger robustness.
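    To make the orthogonal learning idea concrete, the sketch below builds an exemplar by choosing, dimension by dimension, between a particle's personal best and its neighbourhood best using a two-level orthogonal array and a simple factor analysis. This is a minimal illustration of orthogonal experimental design applied to exemplar construction, not the paper's full OLPSO procedure; the function names and the Walsh-style array construction are assumptions.

```python
import numpy as np

def two_level_oa(num_factors):
    """Two-level orthogonal array via the Walsh/Hadamard construction:
    entry (i, j) = popcount(i & j) mod 2, for rows i = 0..M-1 and the first
    num_factors non-zero columns, where M is the smallest power of two whose
    array offers at least num_factors columns."""
    M = 2
    while M - 1 < num_factors:
        M *= 2
    cols = range(1, num_factors + 1)
    return np.array([[bin(i & j).count("1") % 2 for j in cols] for i in range(M)])

def orthogonal_exemplar(pbest, gbest, fitness):
    """Combine pbest and gbest dimension-wise into a guidance exemplar using
    an orthogonal experimental design (minimisation assumed)."""
    D = len(pbest)
    oa = two_level_oa(D)                          # (M, D) array of levels 0/1
    trials = np.where(oa == 1, gbest, pbest)      # level 0 -> pbest dim, 1 -> gbest dim
    scores = np.array([fitness(t) for t in trials])
    # factor analysis: per dimension, keep the level whose trials scored better on average
    levels = np.array([0 if scores[oa[:, d] == 0].mean() <= scores[oa[:, d] == 1].mean() else 1
                       for d in range(D)])
    predicted = np.where(levels == 1, gbest, pbest)
    best_trial = trials[int(np.argmin(scores))]
    # return whichever of the predicted and best tested combinations is fitter
    return predicted if fitness(predicted) <= fitness(best_trial) else best_trial
```

    The resulting exemplar would then stand in for the separate personal-best and neighbourhood-best terms in the velocity update, which is the role the OL strategy plays in the abstract.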

    A clustering particle swarm optimizer for locating and tracking multiple optima in dynamic environments

    This article is posted here with permission from the IEEE - Copyright @ 2010 IEEE. In the real world, many optimization problems are dynamic. This requires an optimization algorithm not only to find the global optimal solution under a specific environment but also to track the trajectory of the changing optima over dynamic environments. To address this requirement, this paper investigates a clustering particle swarm optimizer (PSO) for dynamic optimization problems. The algorithm employs a hierarchical clustering method to locate and track multiple peaks. A fast local search method is also introduced to search for optimal solutions in a promising subregion found by the clustering method. An experimental study is conducted on the moving peaks benchmark to test the performance of the clustering PSO in comparison with several state-of-the-art algorithms from the literature. The experimental results show the efficiency of the clustering PSO for locating and tracking multiple optima in dynamic environments in comparison with other particle swarm optimization models based on the multiswarm method. This work was supported by the Engineering and Physical Sciences Research Council of the U.K. under Grant EP/E060722/1.
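    As a small sketch of the clustering step described above, the function below uses single-linkage hierarchical clustering (via SciPy) to partition the swarm into sub-swarms; each sub-swarm would then run its own PSO updates and a local search around its best member to track one peak. The distance threshold and the choice of linkage are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def split_into_subswarms(positions, radius):
    """Partition particle positions (N, D) into sub-swarms with single-linkage
    hierarchical clustering; chains of particles closer than `radius` end up in
    the same cluster. Returns a dict mapping cluster label -> particle indices."""
    links = linkage(positions, method="single")
    labels = fcluster(links, t=radius, criterion="distance")
    return {int(k): np.where(labels == k)[0] for k in np.unique(labels)}
```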

    A particle swarm optimization based memetic algorithm for dynamic optimization problems

    Copyright @ Springer Science + Business Media B.V. 2010. Recently, there has been growing interest from the evolutionary computation community in dynamic optimization problems, since many real-world optimization problems are dynamic. This paper investigates a particle swarm optimization (PSO) based memetic algorithm that hybridizes PSO with a local search technique for dynamic optimization problems. Within the framework of the proposed algorithm, a local version of PSO with a ring-shaped topology is used as the global search operator, and a fuzzy cognition local search method is proposed as the local search technique. In addition, a self-organized random immigrants scheme is incorporated into the proposed algorithm to further enhance its capacity to explore new peaks in the search space. An experimental study on the moving peaks benchmark problem shows that the proposed PSO-based memetic algorithm is robust and adaptable in dynamic environments. This work was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 70431003 and Grant No. 70671020, the National Innovation Research Community Science Foundation of China under Grant No. 60521003, the National Support Plan of China under Grant No. 2006BAH02A09, the Ministry of Education, Science, and Technology in Korea through the Second Phase of the Brain Korea 21 Project in 2009, the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant EP/E060722/01, and the Hong Kong Polytechnic University Research Grants under Grant G-YH60.
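    Two ingredients mentioned in this abstract can be sketched compactly: the ring-topology neighbourhood used by the local PSO variant, and a random-immigrants step that replaces the worst particles to preserve diversity when the environment changes. The fuzzy cognition local search and the self-organised trigger are not reproduced here; the replacement ratio and helper names are assumptions.

```python
import numpy as np

def ring_neighbourhood_best(pbest_pos, pbest_val):
    """For each particle, return the personal-best position of the best particle
    among itself and its two immediate neighbours on a ring (minimisation)."""
    n = len(pbest_val)
    idx = np.arange(n)
    neigh = np.stack([(idx - 1) % n, idx, (idx + 1) % n], axis=1)   # (n, 3)
    best = neigh[idx, np.argmin(pbest_val[neigh], axis=1)]
    return pbest_pos[best]

def random_immigrants(positions, fitness_vals, bounds, ratio=0.1):
    """Replace the worst `ratio` of the swarm with uniformly random immigrants
    inside `bounds` ((D, 2) array of [low, high]) to help discover new peaks.
    The 10% ratio is an illustrative default, not the paper's setting."""
    n, d = positions.shape
    k = max(1, int(ratio * n))
    worst = np.argsort(fitness_vals)[-k:]          # largest values = worst (minimisation)
    lo, hi = bounds[:, 0], bounds[:, 1]
    positions[worst] = lo + np.random.rand(k, d) * (hi - lo)
    return positions
```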

    Application of a new multi-agent Hybrid Co-evolution based Particle Swarm Optimisation methodology in ship design

    In this paper, a multiple objective 'Hybrid Co-evolution based Particle Swarm Optimisation' methodology (HCPSO) is proposed. This methodology is able to handle multiple objective optimisation problems in the area of ship design, where the simultaneous optimisation of several conflicting objectives is considered. The proposed method is a hybrid technique that merges the features of co-evolution and Nash equilibrium with an ε-disturbance technique to eliminate stagnation. The method also offers a way to identify an efficient set of Pareto (conflicting) designs and to select a preferred solution amongst these designs. The combination of the co-evolution approach and Nash optima contributes to HCPSO by providing faster search and evolution characteristics. The design search is performed within a multi-agent design framework to facilitate distributed synchronous cooperation. The most widely used test functions from the literature on multiple objective optimisation are utilised to test HCPSO. In addition, a real case study, the internal subdivision problem of a ROPAX vessel, is provided to exemplify the applicability of the developed method.
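    Since the abstract centres on identifying an efficient set of Pareto (non-dominated) designs, the sketch below shows a plain non-dominated filter for a minimisation problem. It illustrates only the Pareto bookkeeping; the co-evolution, Nash-equilibrium, and ε-disturbance parts of HCPSO are not reproduced.

```python
import numpy as np

def pareto_front(objectives):
    """Indices of non-dominated rows of an (N, M) objective matrix (all
    objectives minimised). Row i dominates row j if it is no worse in every
    objective and strictly better in at least one."""
    n = len(objectives)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        dominated = (np.all(objectives[i] <= objectives, axis=1) &
                     np.any(objectives[i] < objectives, axis=1))
        keep &= ~dominated
    return np.where(keep)[0]
```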