    Adaptive particle swarm optimization

    An adaptive particle swarm optimization (APSO) with better search efficiency than classical particle swarm optimization (PSO) is presented. More importantly, it can perform a global search over the entire search space with faster convergence. The APSO consists of two main steps. First, by evaluating the population distribution and particle fitness, a real-time evolutionary state estimation procedure identifies, in each generation, one of four defined evolutionary states: exploration, exploitation, convergence, and jumping out. This enables automatic run-time control of the inertia weight, acceleration coefficients, and other algorithmic parameters to improve search efficiency and convergence speed. Second, an elitist learning strategy is performed whenever the evolutionary state is classified as convergence; the strategy acts on the globally best particle to help it jump out of likely local optima. The APSO has been comprehensively evaluated on 12 unimodal and multimodal benchmark functions, and the effects of parameter adaptation and elitist learning are studied. Results show that APSO substantially enhances the performance of the PSO paradigm in terms of convergence speed, global optimality, solution accuracy, and algorithm reliability. As APSO adds only two new parameters to the PSO paradigm, it introduces no additional design or implementation complexity.
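    As a rough illustration of the state-estimation step, the sketch below is a minimal simplification, not the paper's exact procedure: it computes an evolutionary factor from mean pairwise particle distances, classifies it with crisp thresholds where APSO uses fuzzy membership functions, and maps it to the inertia weight through a sigmoid. The thresholds and helper names are illustrative assumptions.

```python
import numpy as np

def evolutionary_factor(positions, gbest_index):
    """Evolutionary factor in [0, 1] from mean pairwise particle distances."""
    # mean distance from each particle to all others (positions is n x dim)
    d = np.array([np.mean(np.linalg.norm(positions - p, axis=1)) for p in positions])
    d_g = d[gbest_index]
    return (d_g - d.min()) / (d.max() - d.min() + 1e-12)

def classify_state(f):
    # crisp stand-in for the paper's fuzzy classification of the four states
    if f >= 0.75:
        return "jumping-out"
    if f >= 0.50:
        return "exploration"
    if f >= 0.25:
        return "exploitation"
    return "convergence"

def adapt_inertia(f):
    # sigmoid mapping of the evolutionary factor to the inertia weight
    return 1.0 / (1.0 + 1.5 * np.exp(-2.6 * f))
```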

    Genetic learning particle swarm optimization

    Social learning in particle swarm optimization (PSO) aids collective efficiency, whereas individual reproduction in the genetic algorithm (GA) facilitates global effectiveness. This observation has recently led to hybridizing PSO with GA for performance enhancement. However, existing work uses a mechanistic parallel superposition of the two, and research has shown that constructing superior exemplars in PSO is more effective. Hence, this paper first develops a new framework for organically hybridizing PSO with another optimization technique for “learning.” This leads to a generalized “learning PSO” paradigm, *L-PSO. The paradigm is composed of two cascading layers: the first generates exemplars, and the second updates particles as in a normal PSO algorithm. Using genetic evolution to breed promising exemplars for PSO, a specific novel *L-PSO algorithm, termed genetic learning PSO (GL-PSO), is proposed. In particular, genetic operators are used to generate exemplars from which particles learn and, in turn, the historical search information of the particles guides the evolution of the exemplars. By performing crossover, mutation, and selection on the particles' historical information, the constructed exemplars are not only well diversified but also of high quality. Under such guidance, both the global search ability and the search efficiency of PSO are enhanced. The proposed GL-PSO is tested on 42 benchmark functions widely adopted in the literature. Experimental results verify the effectiveness, efficiency, robustness, and scalability of GL-PSO.
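    The exemplar-breeding layer can be sketched as below, under stated assumptions: crossover blends a particle's personal best with the global best dimension by dimension, mutation resets random dimensions within the search bounds, and selection keeps the offspring only if it evaluates better than the current exemplar. The function name, the blend rule, and the mutation rate pm are illustrative, not the paper's exact genetic operators; the particle then learns from the returned exemplar in its velocity update.

```python
import numpy as np

rng = np.random.default_rng(0)

def breed_exemplar(pbest_i, gbest, exemplar, f, lo, hi, pm=0.01):
    # crossover: per-dimension convex combination of the personal and global bests
    r = rng.random(pbest_i.shape)
    child = r * pbest_i + (1.0 - r) * gbest
    # mutation: reset a few dimensions uniformly at random within [lo, hi]
    mask = rng.random(pbest_i.shape) < pm
    child[mask] = rng.uniform(lo, hi, size=pbest_i.shape)[mask]
    # selection: the offspring replaces the exemplar only if it is fitter
    return child if f(child) < f(exemplar) else exemplar
```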

    An adaptive learning particle swarm optimizer for function optimization

    This article is posted here with permission of the IEEE. Copyright @ 2009 IEEE.
    Traditional particle swarm optimization (PSO) suffers from premature convergence, which usually results in PSO being trapped in local optima. This paper presents an adaptive learning PSO (ALPSO) based on a variant PSO learning strategy. In ALPSO, the learning mechanism of each particle is separated into three parts: its own historical best position, its closest neighbor, and the global best. Using this individual-level adaptive technique, a particle can better guide its exploration and exploitation behavior. A set of 21 test functions, including unrotated, rotated, and composition functions, was used to test the performance of ALPSO. Comparison results against several PSO variants show that ALPSO performs outstandingly on most test functions, particularly in its fast convergence.
    This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) of the United Kingdom under Grant EP/E060722/1.
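    One plausible reading of the three-part learning mechanism is sketched below. The paper adapts, per particle, how the three sources are used; this simplification just combines them with fixed, assumed coefficients w and c, and picks the closest neighbor by distance between personal best positions.

```python
import numpy as np

rng = np.random.default_rng(1)

def alpso_velocity(v, x, i, pbests, gbest, w=0.7, c=1.5):
    # closest neighbor in decision space (excluding the particle itself)
    dists = np.linalg.norm(pbests - pbests[i], axis=1)
    dists[i] = np.inf
    nbest = pbests[np.argmin(dists)]
    r1, r2, r3 = rng.random(3)
    return (w * v
            + c * r1 * (pbests[i] - x)   # own historical best
            + c * r2 * (nbest - x)       # closest neighbor's best
            + c * r3 * (gbest - x))      # global best
```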

    Feedback learning particle swarm optimization

    This is the author's version of a work that was accepted for publication in Applied Soft Computing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published and is available at the link below. Copyright @ Elsevier 2011.
    In this paper, a feedback learning particle swarm optimization algorithm with quadratic inertia weight (FLPSO-QIW) is developed to solve optimization problems. The proposed FLPSO-QIW consists of four steps. First, the inertia weight is calculated by a designed quadratic function instead of the conventional linearly decreasing function. Second, the acceleration coefficients are determined not only by the generation number but also by the search environment, described by each particle's historical best fitness information. Third, the feedback fitness information of each particle is used to automatically set the learning probabilities. Fourth, an elite stochastic learning (ELS) method is used to refine the solution. FLPSO-QIW has been comprehensively evaluated on 18 unimodal, multimodal, and composite benchmark functions, with and without rotation. Compared with various state-of-the-art PSO algorithms, the performance of FLPSO-QIW is promising and competitive. The effects of parameter adaptation, parameter sensitivity, and the proposed mechanisms are discussed in detail.
    This research was partially supported by the National Natural Science Foundation of PR China (Grant No. 60874113), the Research Fund for the Doctoral Program of Higher Education (Grant No. 200802550007), the Key Creative Project of Shanghai Education Community (Grant No. 09ZZ66), the Key Foundation Project of Shanghai (Grant No. 09JC1400700), the International Science and Technology Cooperation Project of China under Grant 2009DFA32050, and the Alexander von Humboldt Foundation of Germany.
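    The abstract does not state the quadratic function itself, so the sketch below shows one plausible schedule of that kind: it decreases from w_start to w_end over T generations and falls faster early on than a linear ramp. The constants are assumptions, not the exact FLPSO-QIW formula.

```python
def quadratic_inertia(t, T, w_start=0.9, w_end=0.4):
    # decreases from w_start at t = 0 to w_end at t = T,
    # with a steeper drop early on than a linear schedule
    s = t / T
    return (w_start - w_end) * (1.0 - s) ** 2 + w_end
```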

    Particle swarm variants: standardized convergence analysis

    This paper presents an objective function specially designed for the convergence analysis of a number of particle swarm optimization (PSO) variants. It was found that using a specially designed objective function is both a simple and valid method for performing assumption-free convergence analysis. It was also found that the canonical particle swarm's topology did not have an impact on the parameter region needed to ensure convergence. The parameter region needed to ensure convergent particle behavior was empirically obtained for the fully informed PSO, the bare bones PSO, and the standard PSO 2011 algorithm. In the case of the bare bones PSO and the standard PSO 2011, the region needed to ensure convergent particle behavior differs from previous theoretical work. The difference in the obtained regions for the bare bones PSO is a direct result of the previous theoretical work relying on simplifying assumptions, specifically the stagnation assumption. A number of possible causes for the discrepancy in the obtained convergent region for the standard PSO 2011 are given.
    http://link.springer.com/journal/11721
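    For reference, the classical deterministic parameter region that such empirical results are usually compared against (derived under the very stagnation assumption the paper critiques) can be checked as follows; the helper name is ours.

```python
def in_deterministic_region(w, c1, c2):
    # classical criterion under the stagnation assumption:
    # |w| < 1 and 0 < c1 + c2 < 2(1 + w)
    phi = c1 + c2
    return abs(w) < 1.0 and 0.0 < phi < 2.0 * (1.0 + w)

# the popular setting w = 0.7298, c1 = c2 = 1.4962 lies inside the region
print(in_deterministic_region(0.7298, 1.4962, 1.4962))  # True
```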

    A self-learning particle swarm optimizer for global optimization problems

    Copyright @ 2011 IEEE. All Rights Reserved. This article was made available through the Brunel Open Access Publishing Fund.
    Particle swarm optimization (PSO) has been shown to be an effective tool for solving global optimization problems. So far, most PSO algorithms use a single learning pattern for all particles, meaning that every particle in a swarm follows the same strategy. This monotonic learning pattern can leave a particle without the flexibility to cope with different complex situations. This paper presents a novel algorithm, the self-learning particle swarm optimizer (SLPSO), for global optimization problems. In SLPSO, each particle has a set of four strategies for coping with different situations in the search space. The cooperation of the four strategies is implemented by an adaptive learning framework at the individual level, which enables a particle to choose the optimal strategy according to its own local fitness landscape. An experimental study on a set of 45 test functions and two real-world problems shows that SLPSO performs superiorly in comparison with several peer algorithms.
    This work was supported by the Engineering and Physical Sciences Research Council of the U.K. under Grants EP/E060722/1 and EP/E060722/2.
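    The individual-level adaptive framework can be sketched as a per-particle selector that roulette-picks one of the four strategies and shifts probability mass toward strategies that improve the particle. The update rule, learning rate alpha, and probability floor p_min below are generic probability-matching assumptions, not SLPSO's exact mechanism.

```python
import numpy as np

rng = np.random.default_rng(2)

class StrategySelector:
    """Per-particle adaptive selection over four candidate strategies."""

    def __init__(self, n_strategies=4, alpha=0.1, p_min=0.05):
        self.p = np.full(n_strategies, 1.0 / n_strategies)
        self.alpha, self.p_min = alpha, p_min

    def pick(self):
        # roulette-wheel choice of the strategy to apply this generation
        return rng.choice(len(self.p), p=self.p)

    def update(self, k, improved):
        # shift probability mass toward strategies that improved the particle
        target = np.zeros_like(self.p)
        if improved:
            target[k] = 1.0
        else:
            target[:] = 1.0 / (len(self.p) - 1)
            target[k] = 0.0
        self.p = (1 - self.alpha) * self.p + self.alpha * target
        self.p = np.maximum(self.p, self.p_min)   # keep every strategy alive
        self.p /= self.p.sum()
```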

    Parameters identification of unknown delayed genetic regulatory networks by a switching particle swarm optimization algorithm

    The official published version can be found at the link below.
    This paper presents a novel particle swarm optimization (PSO) algorithm based on Markov chains and a competitive penalized method. The algorithm is developed to solve global optimization problems, with applications in identifying unknown parameters of a class of genetic regulatory networks (GRNs). Using an evolutionary factor, a new switching PSO (SPSO) algorithm is first proposed and analyzed, in which the velocity updating equation jumps from one mode to another according to a Markov chain, and the acceleration coefficients depend on the mode switching. Furthermore, a leader competitive penalized multi-learning approach (LCPMLA) is introduced to improve the global search ability and refine the convergent solutions. The LCPMLA automatically chooses a search strategy using a learning and penalizing mechanism. The presented SPSO algorithm is compared with several well-known PSO algorithms in the experiments. It is shown that the SPSO algorithm has faster local convergence, higher accuracy, and better algorithm reliability, striking a better balance between the global and local search of the algorithm and thus delivering good performance. Finally, the presented SPSO algorithm is used to identify not only the unknown parameters but also the coupling topology and time delay of a class of GRNs.
    This research was partially supported by the National Natural Science Foundation of PR China (Grant No. 60874113), the Research Fund for the Doctoral Program of Higher Education (Grant No. 200802550007), the Key Creative Project of Shanghai Education Community (Grant No. 09ZZ66), the Key Foundation Project of Shanghai (Grant No. 09JC1400700), the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant No. GR/S27658/01, the International Science and Technology Cooperation Project of China under Grant No. 2009DFA32050, an International Joint Project sponsored by the Royal Society of the UK, and the Alexander von Humboldt Foundation of Germany.
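    The Markov-chain mode-switching idea can be sketched as follows: the velocity update's acceleration coefficients depend on a mode that jumps between states according to a transition matrix. The number of modes, the matrix P, and the per-mode coefficients below are illustrative assumptions, not the values identified in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# transition probabilities between three illustrative modes
P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
COEFFS = {0: (2.0, 1.0),   # explorative: favor the personal best
          1: (1.5, 1.5),   # balanced
          2: (1.0, 2.0)}   # exploitative: favor the global best

def next_mode(mode):
    # Markov jump: sample the next mode from the current mode's row of P
    return rng.choice(len(P), p=P[mode])

def velocity(v, x, pbest, gbest, mode, w=0.7):
    c1, c2 = COEFFS[mode]
    r1, r2 = rng.random(2)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```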