13,605 research outputs found

    Adaptive particle swarm optimization

    An adaptive particle swarm optimization (APSO) that offers better search efficiency than classical particle swarm optimization (PSO) is presented. More importantly, it can perform a global search over the entire search space with faster convergence speed. The APSO consists of two main steps. First, by evaluating the population distribution and particle fitness, a real-time evolutionary state estimation procedure is performed to identify, in each generation, one of four defined evolutionary states: exploration, exploitation, convergence, and jumping out. This enables automatic control of the inertia weight, acceleration coefficients, and other algorithmic parameters at run time, improving search efficiency and convergence speed. Then, an elitist learning strategy is performed when the evolutionary state is classified as convergence. The strategy acts on the globally best particle to help it jump out of likely local optima. The APSO has been comprehensively evaluated on 12 unimodal and multimodal benchmark functions, and the effects of parameter adaptation and elitist learning are studied. Results show that APSO substantially enhances the performance of the PSO paradigm in terms of convergence speed, global optimality, solution accuracy, and algorithm reliability. As APSO introduces only two new parameters to the PSO paradigm, it does not add design or implementation complexity.
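To make the mechanism concrete, the canonical PSO update that APSO modulates can be sketched as follows. This is a minimal sketch in Python; the state-dependent rules for adapting w, c1, and c2, as well as the elitist perturbation of the global best, are specific to the paper and only hinted at in the comments.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.9, c1=2.0, c2=2.0, bounds=(-100.0, 100.0)):
    """One canonical PSO velocity/position update for a single particle.

    In APSO, w, c1, and c2 are not fixed: they would be re-tuned every
    generation according to the estimated evolutionary state (exploration,
    exploitation, convergence, or jumping out). The defaults here are
    generic placeholders, not the paper's adaptation rules.
    """
    r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
    # Inertia term plus cognitive (pbest) and social (gbest) attraction.
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    # Keep the particle inside the search bounds.
    x = np.clip(x + v, bounds[0], bounds[1])
    return x, v
```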

    A self-learning particle swarm optimizer for global optimization problems

    Particle swarm optimization (PSO) has been shown to be an effective tool for solving global optimization problems. So far, most PSO algorithms use a single learning pattern for all particles, meaning that every particle in a swarm follows the same strategy. This uniform learning pattern can leave an individual particle without the flexibility to deal with different complex situations. This paper presents a novel algorithm, called the self-learning particle swarm optimizer (SLPSO), for global optimization problems. In SLPSO, each particle has a set of four strategies to cope with different situations in the search space. The cooperation of the four strategies is implemented by an adaptive learning framework at the individual level, which enables a particle to choose the optimal strategy according to its own local fitness landscape. An experimental study on a set of 45 test functions and two real-world problems shows that SLPSO performs better than several other peer algorithms. This work was supported by the Engineering and Physical Sciences Research Council of U.K. under Grants EP/E060722/1 and EP/E060722/2.
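The individual-level adaptive framework can be illustrated with a generic strategy selector. The sketch below is an assumed stand-in: the four concrete learning strategies and SLPSO's exact probability-update rule come from the paper and are not reproduced here; only the general idea of rewarding strategies that improve a particle's fitness is shown.

```python
import random

class StrategySelector:
    """Adaptive choice among several learning strategies for one particle.

    Illustrative only: strategies that more often improve the particle's
    fitness gradually receive higher selection probability.
    """
    def __init__(self, n_strategies=4, learning_rate=0.1):
        self.probs = [1.0 / n_strategies] * n_strategies
        self.success = [0.0] * n_strategies
        self.trials = [1e-9] * n_strategies  # avoid division by zero
        self.lr = learning_rate

    def pick(self):
        # Roulette-wheel selection over the current probabilities.
        return random.choices(range(len(self.probs)), weights=self.probs)[0]

    def feedback(self, idx, improved):
        # Update the success statistics of the strategy just used.
        self.trials[idx] += 1
        self.success[idx] += 1 if improved else 0
        rates = [s / t for s, t in zip(self.success, self.trials)]
        total = sum(rates) or 1.0
        # Blend old probabilities with normalized success rates, then renormalize.
        self.probs = [(1 - self.lr) * p + self.lr * r / total
                      for p, r in zip(self.probs, rates)]
        norm = sum(self.probs)
        self.probs = [p / norm for p in self.probs]
```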

    Efficiency Analysis of Swarm Intelligence and Randomization Techniques

    Swarm intelligence has become a powerful technique for solving design and scheduling tasks. Metaheuristic algorithms are an integral part of this paradigm, and particle swarm optimization is often viewed as an important landmark. The outstanding performance and efficiency of swarm-based algorithms have inspired many new developments, though the mathematical understanding of metaheuristics remains partly a mystery. In contrast to classic deterministic algorithms, metaheuristics such as PSO always use some form of randomness, and such randomization now employs various techniques. This paper reviews and analyzes some of the convergence and efficiency results associated with metaheuristics and randomization techniques such as the firefly algorithm, random walks, and Lévy flights. We discuss how these techniques are used and their implications for further research. Comment: 10 pages. arXiv admin note: substantial text overlap with arXiv:1212.0220, arXiv:1208.0527, arXiv:1003.146
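Of the randomization techniques mentioned, Lévy flights are commonly generated with Mantegna's algorithm; the following is a minimal sketch of that standard construction, not of any specific algorithm analyzed in the paper.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(beta=1.5, size=1):
    """Draw heavy-tailed Lévy-flight step lengths via Mantegna's algorithm.

    beta is the stability index (typically 1 < beta <= 2); smaller values
    produce heavier tails and therefore more frequent long jumps.
    """
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma_u, size)
    v = np.random.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)
```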

    Genetic learning particle swarm optimization

    Social learning in particle swarm optimization (PSO) helps collective efficiency, whereas individual reproduction in genetic algorithms (GA) facilitates global effectiveness. This observation has recently led to hybridizing PSO with GA for performance enhancement. However, existing work mostly uses a mechanistic parallel superposition of the two algorithms, while research has shown that constructing superior exemplars for PSO is more effective. Hence, this paper first develops a new framework to organically hybridize PSO with another optimization technique for “learning.” This leads to a generalized “learning PSO” paradigm, the *L-PSO. The paradigm is composed of two cascading layers: the first for exemplar generation and the second for particle updates as in a normal PSO algorithm. Using genetic evolution to breed promising exemplars for PSO, a specific novel *L-PSO algorithm, termed genetic learning PSO (GL-PSO), is proposed in the paper. In particular, genetic operators are used to generate exemplars from which particles learn, and, in turn, the historical search information of the particles guides the evolution of the exemplars. By performing crossover, mutation, and selection on the historical information of particles, the constructed exemplars are not only well diversified but also of high quality. Under such guidance, both the global search ability and the search efficiency of PSO are enhanced. The proposed GL-PSO is tested on 42 benchmark functions widely adopted in the literature. Experimental results verify the effectiveness, efficiency, robustness, and scalability of GL-PSO.
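The exemplar-generation layer can be sketched schematically as follows. The arithmetic crossover, uniform mutation, and fixed mutation rate are illustrative assumptions; the actual genetic operators, parameter settings, and the fitness-based selection of exemplars are those defined in the paper. A particle would then learn from its exemplar in the velocity update of the second layer.

```python
import random

def build_exemplar(pbest_i, gbest, lo, hi, pm=0.01):
    """Construct a learning exemplar for one particle (illustrative sketch).

    Crossover mixes the particle's personal best with the global best
    dimension by dimension; mutation occasionally resets a dimension to a
    random value within the search bounds [lo, hi].
    """
    exemplar = []
    for pb, gb in zip(pbest_i, gbest):
        r = random.random()
        exemplar.append(r * pb + (1 - r) * gb)   # arithmetic crossover
    for d in range(len(exemplar)):               # uniform mutation
        if random.random() < pm:
            exemplar[d] = random.uniform(lo, hi)
    return exemplar
```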

    Application of a new multi-agent Hybrid Co-evolution based Particle Swarm Optimisation methodology in ship design

    In this paper, a multiple objective 'Hybrid Co-evolution based Particle Swarm Optimisation' methodology (HCPSO) is proposed. This methodology is able to handle multiple-objective optimisation problems in ship design, where several conflicting objectives must be optimised simultaneously. The proposed method is a hybrid technique that merges the features of co-evolution and Nash equilibrium with an ε-disturbance technique to eliminate stagnation. The method also offers a way to identify an efficient set of Pareto (conflicting) designs and to select a preferred solution amongst these designs. The combination of the co-evolution approach and Nash optima gives HCPSO faster search and evolution characteristics. The design search is performed within a multi-agent design framework to facilitate distributed synchronous cooperation. The most widely used test functions from the multiple-objective optimisation literature are utilised to test HCPSO. In addition, a real case study, the internal subdivision problem of a ROPAX vessel, is provided to exemplify the applicability of the developed method.
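Identifying the efficient Pareto set mentioned above rests on a dominance test; a minimal sketch, assuming all objectives are to be minimised, is given below. This is generic multi-objective machinery, not the co-evolution or Nash-equilibrium logic of HCPSO itself.

```python
def dominates(a, b):
    """Return True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Filter a list of objective vectors down to the non-dominated set."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```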

    information

    In this study, an improved particle swarm optimization (PSO) algorithm, including four new velocity-updating formulae (each of which is equivalent to the traditional PSO update), was introduced. This algorithm was called the reverse direction supported particle swarm optimization (RDS-PSO) algorithm. The RDS-PSO algorithm has the potential to extend the diversity and generalization of traditional PSO by adaptively regulating reverse-direction information. To implement this extension, two new constants were added to the velocity update equation of the traditional PSO, and these constants were regulated through two alternative procedures, i.e. max-min-based and cosine amplitude-based diversity-evaluating procedures. The four most commonly used benchmark functions were used to test the general optimization performance of the RDS-PSO algorithm with three different velocity updates, of RDS-PSO without a regulating procedure, and of the traditional PSO with linearly increasing/decreasing inertia weight. All PSO algorithms were also implemented in four modes, and their experimental results were compared. According to the experimental results, RDS-PSO 3 showed the best optimization performance.
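One plausible way to picture the role of the two extra constants is a velocity update with added reverse-direction (repulsive) terms. The sketch below is an assumption for illustration only: the names pworst/gworst, the repulsion form, and the constants c3/c4 are hypothetical, and the paper's actual formulae and diversity-based regulation of the constants differ in detail.

```python
import numpy as np

def rds_velocity(x, v, pbest, gbest, pworst, gworst,
                 w=0.7, c1=1.5, c2=1.5, c3=0.5, c4=0.5):
    """Illustrative velocity update with reverse-direction terms.

    c3 and c4 stand in for the two extra constants added to the classical
    update; here they weight repulsion from personal/global worst positions.
    In the paper they are regulated adaptively from a swarm-diversity measure.
    """
    r = [np.random.rand(*x.shape) for _ in range(4)]
    v = (w * v
         + c1 * r[0] * (pbest - x) + c2 * r[1] * (gbest - x)     # attraction
         + c3 * r[2] * (x - pworst) + c4 * r[3] * (x - gworst))  # repulsion
    return x + v, v
```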