3,015 research outputs found

    Adaptive particle swarm optimization

    An adaptive particle swarm optimization (APSO) with better search efficiency than classical particle swarm optimization (PSO) is presented. More importantly, it can perform a global search over the entire search space with faster convergence. APSO consists of two main steps. First, by evaluating the population distribution and particle fitness, a real-time evolutionary state estimation procedure identifies, in each generation, one of four defined evolutionary states: exploration, exploitation, convergence, and jumping out. This enables automatic control of the inertia weight, acceleration coefficients, and other algorithmic parameters at run time, improving search efficiency and convergence speed. Second, an elitist learning strategy is performed when the evolutionary state is classified as convergence; it acts on the globally best particle to help it jump out of likely local optima. APSO has been comprehensively evaluated on 12 unimodal and multimodal benchmark functions, and the effects of parameter adaptation and elitist learning are studied. Results show that APSO substantially enhances the performance of the PSO paradigm in terms of convergence speed, global optimality, solution accuracy, and algorithm reliability. As APSO introduces only two new parameters to the PSO paradigm, it adds no significant design or implementation complexity.
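    The state-dependent parameter control described above can be pictured with the minimal sketch below; the adjustment directions, step size, and bounds are illustrative assumptions, not the paper's exact rules.

```python
def adapt_parameters(state, w, c1, c2, delta=0.05):
    """Adjust PSO parameters according to the estimated evolutionary state.

    Illustrative sketch only: the step size and bounds are assumptions,
    not the formulas used in the APSO paper.
    """
    if state == "exploration":       # favour wide search
        w, c1, c2 = w + delta, c1 + delta, c2 - delta
    elif state == "exploitation":    # refine around personal bests
        c1, c2 = c1 + 0.5 * delta, c2 - 0.5 * delta
    elif state == "convergence":     # pull the swarm toward the global best
        w, c1, c2 = w - delta, c1 + 0.5 * delta, c2 + 0.5 * delta
    elif state == "jumping_out":     # escape a likely local optimum
        w, c1, c2 = w + delta, c1 - delta, c2 + delta
    # clamp to commonly used ranges
    w = min(max(w, 0.4), 0.9)
    c1 = min(max(c1, 1.5), 2.5)
    c2 = min(max(c2, 1.5), 2.5)
    return w, c1, c2
```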

    Free Search and Particle Swarm Optimisation applied to Non-constrained Test

    This article presents an evaluation of Particle Swarm Optimisation (PSO) with variable inertia weight and Free Search (FS) with variable neighbour space applied to non-constrained numerical tests. The objectives are to assess how high convergence speed affects adaptation to various test problems and to identify a possible balance between convergence speed and adaptation that allows the algorithms to complete the search successfully on heterogeneous tasks with limited computational resources, within a reasonable finite time, and with precision acceptable for engineering purposes. The modification strategies of both algorithms are compared in terms of their ability to explore the search space. Five numerical tests are explored, and the experimental results achieved are presented and analysed.
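    As a point of reference, a common variable-inertia-weight PSO velocity update looks like the sketch below; the linear schedule shown here is a typical choice and may differ from the exact strategy evaluated in the article.

```python
import random

def velocity_update(v, x, pbest, gbest, t, t_max,
                    w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    """One PSO velocity update with a linearly decreasing inertia weight."""
    w = w_max - (w_max - w_min) * t / t_max   # inertia shrinks as the run progresses
    r1, r2 = random.random(), random.random()
    return [w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)
            for vi, xi, pb, gb in zip(v, x, pbest, gbest)]
```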

    information

    In this study, an improved particle swarm optimization (PSO) algorithm, called the reverse direction supported particle swarm optimization (RDS-PSO) algorithm, is introduced, including 4 types of new velocity-updating formulae (each equivalent to the traditional PSO update). The RDS-PSO algorithm has the potential to extend the diversity and generalization of traditional PSO by adaptively regulating reverse-direction information. To implement this extension, 2 new constants are added to the velocity update equation of traditional PSO, and these constants are regulated through 2 alternative procedures, i.e. max-min-based and cosine-amplitude-based diversity-evaluating procedures. The 4 most commonly used benchmark functions were used to test the general optimization performance of the RDS-PSO algorithm with 3 different velocity updates, RDS-PSO without a regulating procedure, and traditional PSO with linearly increasing/decreasing inertia weight. All PSO algorithms were also implemented in 4 modes, and their experimental results were compared. According to the experimental results, RDS-PSO 3 showed the best optimization performance.
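    A rough sketch of how reverse-direction terms and the two extra constants (here c3 and c4) might enter the velocity update is shown below; the exact form used by RDS-PSO is not reproduced here, and setting both constants to zero recovers the traditional update.

```python
import random

def rds_velocity_update(v, x, pbest, gbest, w=0.7, c1=2.0, c2=2.0, c3=0.5, c4=0.5):
    """Velocity update extended with reverse-direction information (illustrative).

    c3 and c4 stand in for the two new constants mentioned in the abstract;
    the reverse-direction terms below are an assumed form, not the paper's
    equation. With c3 = c4 = 0 this reduces to the traditional PSO update.
    """
    r1, r2, r3, r4 = (random.random() for _ in range(4))
    return [w * vi
            + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)   # usual attraction terms
            - c3 * r3 * (pb - xi) - c4 * r4 * (gb - xi)   # reverse-direction terms
            for vi, xi, pb, gb in zip(v, x, pbest, gbest)]
```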

    Application of Particle Swarm Optimization to Formative E-Assessment in Project Management

    The current paper describes the application of the Particle Swarm Optimization algorithm to the formative e-assessment problem in project management. The proposed approach addresses the issue of personalization by taking into account, when selecting the test items for an e-assessment, the following elements: the ability level of the user, the targeted difficulty of the test, and the learning objectives, represented by the project management concepts that have to be checked. The e-assessment tool in which the Particle Swarm Optimization algorithm is integrated is also presented. Experimental results and a comparison with other algorithms used for test-item selection demonstrate the suitability of the proposed approach to the formative e-assessment domain. The study is presented in the context of other evolutionary and genetic algorithms applied in e-education.
    Keywords: Particle Swarm Optimization, Genetic Algorithms, Evolutionary Algorithms, Formative E-assessment, E-education
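    As an illustration of how such a selection problem can be cast for PSO, the hypothetical fitness function below penalises a candidate item set's distance from the learner's ability and the targeted difficulty, plus the number of uncovered concepts; the field names and weighting are assumptions, not the paper's model.

```python
def assessment_fitness(items, ability, target_difficulty, target_concepts):
    """Score a candidate set of test items; lower is better (hypothetical model)."""
    avg_difficulty = sum(i["difficulty"] for i in items) / len(items)
    covered = set().union(*(i["concepts"] for i in items))
    uncovered = len(set(target_concepts) - covered)
    # distance from targeted difficulty and learner ability, plus a coverage penalty
    return abs(avg_difficulty - target_difficulty) + abs(avg_difficulty - ability) + uncovered
```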

    Feedback learning particle swarm optimization

    This is the author's version of a work that was accepted for publication in Applied Soft Computing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published and is available at the link below. Copyright @ Elsevier 2011. In this paper, a feedback learning particle swarm optimization algorithm with quadratic inertia weight (FLPSO-QIW) is developed to solve optimization problems. The proposed FLPSO-QIW consists of four steps. Firstly, the inertia weight is calculated by a designed quadratic function instead of the conventional linearly decreasing function. Secondly, acceleration coefficients are determined not only by the generation number but also by the search environment described by each particle's history best fitness information. Thirdly, the feedback fitness information of each particle is used to automatically design the learning probabilities. Fourthly, an elite stochastic learning (ELS) method is used to refine the solution. The FLPSO-QIW has been comprehensively evaluated on 18 unimodal, multimodal and composite benchmark functions with or without rotation. Compared with various state-of-the-art PSO algorithms, the performance of FLPSO-QIW is promising and competitive. The effects of parameter adaptation, parameter sensitivity and the proposed mechanism are discussed in detail. This research was partially supported by the National Natural Science Foundation of PR China (Grant No 60874113), the Research Fund for the Doctoral Program of Higher Education (Grant No 200802550007), the Key Creative Project of Shanghai Education Community (Grant No 09ZZ66), the Key Foundation Project of Shanghai (Grant No 09JC1400700), the International Science and Technology Cooperation Project of China under Grant 2009DFA32050, and the Alexander von Humboldt Foundation of Germany.
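    For instance, a quadratic inertia-weight schedule of the kind referred to above could look like this minimal sketch; the actual quadratic function and its coefficients in FLPSO-QIW are not reproduced here.

```python
def quadratic_inertia_weight(t, t_max, w_start=0.9, w_end=0.4):
    """Quadratic inertia-weight schedule (illustrative coefficients).

    Decays from w_start at t = 0 to w_end at t = t_max along a quadratic
    curve instead of the conventional straight line.
    """
    progress = t / t_max
    return (w_start - w_end) * (1.0 - progress) ** 2 + w_end
```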

    Fuzzy Adaptive Tuning of a Particle Swarm Optimization Algorithm for Variable-Strength Combinatorial Test Suite Generation

    Combinatorial interaction testing is an important software testing technique that has attracted considerable recent interest. It can reduce the number of test cases needed by considering interactions between combinations of input parameters. Empirical evidence shows that it effectively detects faults, in particular for highly configurable software systems. In real-world software testing, the input variables may vary in how strongly they interact; variable strength combinatorial interaction testing (VS-CIT) can exploit this for higher effectiveness. The generation of variable strength test suites is a non-deterministic polynomial-time (NP) hard computational problem \cite{BestounKamalFuzzy2017}. Research has shown that stochastic population-based algorithms such as particle swarm optimization (PSO) can be efficient compared to alternatives for VS-CIT problems. Nevertheless, they require detailed control of the exploitation and exploration trade-off to avoid premature convergence (i.e. being trapped in local optima) and to enhance solution diversity. Here, we present a new variant of PSO based on a Mamdani fuzzy inference system \cite{Camastra2015,TSAKIRIDIS2017257,KHOSRAVANIAN2016280} to permit adaptive selection of its global and local search operations. We detail the design of this combined algorithm and evaluate it through experiments on multiple synthetic and benchmark problems. We conclude that fuzzy adaptive selection of global and local search operations is, at least, feasible, as it performs only second-best to a discrete variant of PSO called DPSO. Concerning the best mean test suite size, the fuzzy adaptation even outperforms DPSO occasionally. We discuss the reasons behind this performance and outline relevant areas of future work. Comment: 21 pages
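    A much-simplified sketch of the fuzzy-adaptation idea is shown below: triangular memberships over a normalised swarm-diversity measure feed three rules whose outputs are combined by a weighted average (a Sugeno-style simplification of the Mamdani system used in the paper); all membership functions, rules, and constants here are assumptions.

```python
def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def global_search_weight(diversity):
    """Map normalised swarm diversity in [0, 1] to a preference for the global
    (exploration) operation; a higher return value means more global search."""
    low = triangular(diversity, -0.5, 0.0, 0.5)
    med = triangular(diversity, 0.0, 0.5, 1.0)
    high = triangular(diversity, 0.5, 1.0, 1.5)
    # Rules: low diversity -> mostly global search (0.9); medium -> balanced (0.5);
    # high diversity -> mostly local search (0.1). Weighted-average defuzzification.
    total = low + med + high
    return (0.9 * low + 0.5 * med + 0.1 * high) / total if total else 0.5
```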

    Biogeography-based learning particle swarm optimization


    Compound particle swarm optimization in dynamic environments

    Copyright @ Springer-Verlag Berlin Heidelberg 2008. Adaptation to dynamic optimization problems is currently receiving growing interest as one of the most important applications of evolutionary algorithms. In this paper, compound particle swarm optimization (CPSO) is proposed as a new variant of particle swarm optimization to enhance its performance in dynamic environments. Within CPSO, compound particles are constructed as a novel type of particle in the search space, and their motions are integrated into the swarm. A special reflection scheme is introduced in order to explore the search space more comprehensively. Furthermore, information-preserving and anti-convergence strategies are also developed to improve the performance of CPSO in a new environment. An experimental study shows the efficiency of CPSO in dynamic environments. This work was supported by the Key Program of the National Natural Science Foundation (NNSF) of China under Grant No. 70431003 and Grant No. 70671020, the Science Fund for Creative Research Group of NNSF of China under Grant No. 60521003, the National Science and Technology Support Plan of China under Grant No. 2006BAH02A09, and the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant No. EP/E060722/1.
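    One way to picture a reflection step of the kind mentioned above is the sketch below, which reflects the worst member of a compound particle through the centroid of its remaining members; this is an illustrative interpretation, and the paper's actual scheme may differ in detail.

```python
def reflect_worst_member(members, fitness):
    """Reflect the worst member of a compound particle through the centroid of
    the other members (illustrative; lower fitness is assumed to be better)."""
    worst = max(range(len(members)), key=lambda i: fitness[i])
    others = [m for i, m in enumerate(members) if i != worst]
    centroid = [sum(coords) / len(others) for coords in zip(*others)]
    reflected = [2.0 * ci - xi for ci, xi in zip(centroid, members[worst])]
    return worst, reflected
```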