
    A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Particle swarm optimization (PSO) is a heuristic global optimization method, originally proposed by Kennedy and Eberhart in 1995, and is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On one hand, we review advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridization (with genetic algorithms, simulated annealing, Tabu search, artificial immune systems, ant colony optimization, artificial bee colony, differential evolution, harmony search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementation (on multicore, multiprocessor, GPU, and cloud computing platforms). On the other hand, we survey applications of PSO in the following fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.
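
    The canonical velocity and position update that all of these variants build on is compact enough to show directly. Below is a minimal sketch of global-best PSO in Python; the inertia weight, acceleration coefficients, swarm size, and sphere objective are common illustrative choices, not values prescribed by the survey.

```python
import numpy as np

def pso(objective, dim=2, n_particles=30, iters=200,
        w=0.72, c1=1.49, c2=1.49, bounds=(-5.0, 5.0)):
    """Minimal global-best PSO sketch; parameter values are common defaults."""
    lo, hi = bounds
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest = x.copy()                              # personal best positions
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()    # global best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # canonical update: inertia + cognitive (pbest) + social (gbest) terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())

# usage: minimize the 2-D sphere function
best, best_val = pso(lambda p: float(np.sum(p ** 2)))
print(best, best_val)
```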

    Adaptive dynamic disturbance strategy for differential evolution algorithm

    To overcome slow convergence, premature convergence to local optima, and parameter constraints when solving high-dimensional multi-modal optimization problems, an adaptive dynamic disturbance strategy for the differential evolution algorithm (ADDSDE) is proposed. Firstly, a chaos mapping strategy is used to initialize the population and increase population diversity; secondly, a new weighted mutation operator is designed to weight and combine the mutation strategies of standard differential evolution (DE). The scaling factor and crossover probability are adaptively adjusted to dynamically balance global search ability and local exploration ability. Finally, a Gaussian perturbation operator is introduced to generate random disturbances that help premature individuals jump out of local optima. The algorithm was run independently on five benchmark functions 20 times, and the results show that ADDSDE has better global search ability, faster convergence, and higher accuracy and stability than the other optimization algorithms compared, which can assist in solving high-dimensional and complex problems in engineering and information science.
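
    A rough sketch of the ingredients described above (logistic-map initialization, per-generation adaptation of the scaling factor F and crossover probability CR, and a Gaussian perturbation applied to trial vectors) is shown below; the schedules, constants, and perturbation rate are illustrative assumptions, not the authors' exact ADDSDE settings.

```python
import numpy as np

def logistic_init(n, dim, lo, hi, seed=0.37, mu=4.0):
    """Chaotic (logistic-map) initialization to spread the initial population."""
    z = np.empty((n, dim))
    c = seed
    for i in range(n):
        for j in range(dim):
            c = mu * c * (1.0 - c)          # logistic map in (0, 1)
            z[i, j] = c
    return lo + z * (hi - lo)

def addsde_like(objective, dim=10, n=40, gens=300, lo=-30.0, hi=30.0):
    rng = np.random.default_rng(1)
    pop = logistic_init(n, dim, lo, hi)
    fit = np.apply_along_axis(objective, 1, pop)
    for g in range(gens):
        F = 0.9 - 0.5 * g / gens            # scaling factor shrinks: explore -> exploit (assumed schedule)
        CR = 0.5 + 0.3 * g / gens           # crossover probability grows (assumed schedule)
        for i in range(n):
            a, b, c = rng.choice([k for k in range(n) if k != i], 3, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c])     # DE/rand/1 mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            if rng.random() < 0.05:
                # Gaussian perturbation to help stagnant individuals escape local optima
                trial = trial + rng.normal(0.0, 0.1 * (hi - lo), dim)
            trial = np.clip(trial, lo, hi)
            f_trial = objective(trial)
            if f_trial < fit[i]:
                pop[i], fit[i] = trial, f_trial
    return pop[np.argmin(fit)], float(fit.min())

best, best_val = addsde_like(lambda p: float(np.sum(p ** 2)))
```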

    A Novel Evolutionary Algorithm for Capacitor Placement in Distribution Systems

    This paper applies an effective method, the CODEQ method, with integer programming to solve capacitor placement problems in distribution systems. Unlike the original differential evolution (DE), the CODEQ method uses concepts from chaotic search, opposition-based learning, and quantum mechanics to overcome the drawback of having to select the crossover factor and scaling factor required by the original DE method. One benchmark function and one 9-bus system from the literature are used to compare the performance of the CODEQ method with DE and simulated annealing (SA). Numerical results show that the CODEQ method performs better than the other methods. Moreover, applied to the 9-bus system, the CODEQ method is superior to several other methods in terms of solution power loss and cost.
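
    The opposition-based learning idea that CODEQ borrows can be illustrated in a few lines: for a candidate x in [lo, hi], its opposite is lo + hi - x, and the better of the pair is kept. The sketch below shows only that generic OBL step, not the full CODEQ update; the population, bounds, and sphere objective are assumptions for illustration.

```python
import numpy as np

def opposition_step(pop, fit, objective, lo, hi):
    """Generic opposition-based learning: evaluate the 'opposite' population
    lo + hi - x and keep the better of each pair (sketch, not the full CODEQ update)."""
    opposite = lo + hi - pop
    opp_fit = np.apply_along_axis(objective, 1, opposite)
    better = opp_fit < fit
    pop = np.where(better[:, None], opposite, pop)
    fit = np.where(better, opp_fit, fit)
    return pop, fit

# illustrative usage on a random population and the sphere function
rng = np.random.default_rng(2)
lo, hi = -10.0, 10.0
pop = rng.uniform(lo, hi, (20, 5))
sphere = lambda p: float(np.sum(p ** 2))
fit = np.apply_along_axis(sphere, 1, pop)
pop, fit = opposition_step(pop, fit, sphere, lo, hi)
```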

    A Hybrid Chimp Optimization Algorithm and Generalized Normal Distribution Algorithm with Opposition-Based Learning Strategy for Solving Data Clustering Problems

    This paper is concerned with data clustering, i.e., separating clusters based on the connectivity principle so that similar and dissimilar data are categorized into different groups. Although classical clustering algorithms such as K-means are efficient techniques, they often become trapped in local optima and converge slowly on high-dimensional problems. To address these issues, many successful meta-heuristic optimization algorithms and intelligence-based methods have been introduced to attain the optimal solution in a reasonable time; they are designed to escape local optima by allowing flexible movements or random behaviors. In this study, we conceptualize a powerful approach using three main components: the Chimp Optimization Algorithm (ChOA), the Generalized Normal Distribution Algorithm (GNDA), and the Opposition-Based Learning (OBL) method. Firstly, two versions of ChOA with two different independent-group strategies and seven chaotic maps, called ChOA(I) and ChOA(II), are presented to achieve the best possible result for data clustering purposes. Secondly, a novel combination of the ChOA and GNDA algorithms with the OBL strategy is devised to address the major shortcomings of the original algorithms. Lastly, the proposed ChOAGNDA method is a Selective Opposition (SO) algorithm based on ChOA and GNDA, which can be used to tackle large and complex real-world optimization problems, particularly data clustering applications. The results are evaluated against seven popular meta-heuristic optimization algorithms and eight recent state-of-the-art clustering techniques. Experimental results illustrate that the proposed work significantly outperforms existing methods in minimizing the Sum of Intra-Cluster Distances (SICD), obtaining the lowest Error Rate (ER), accelerating convergence, and finding the optimal cluster centers.
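
    To make the clustering objective concrete: each candidate solution encodes a set of cluster centers, and its fitness is the Sum of Intra-Cluster Distances (SICD), i.e., the total distance from each point to its nearest center. The following is a small sketch of such a fitness function; the Euclidean metric, array shapes, and toy data are assumptions, not details taken from the paper.

```python
import numpy as np

def sicd(centers, data):
    """Sum of Intra-Cluster Distances: assign each point to its nearest
    center and sum the Euclidean distances (sketch of a clustering fitness)."""
    # centers: (k, d) candidate cluster centers; data: (n, d) points
    dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)  # (n, k)
    return float(dists.min(axis=1).sum())

# illustrative usage: score a random 3-center candidate on toy 2-D data
rng = np.random.default_rng(3)
data = rng.normal(size=(100, 2))
candidate = rng.normal(size=(3, 2))
print(sicd(candidate, data))
```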

    Chaos embedded opposition based learning for gravitational search algorithm

    Due to its robust search mechanism, the gravitational search algorithm (GSA) has gained considerable popularity across different research communities. However, stagnation reduces its ability to search toward the global optimum on rigid and complex multi-modal problems. This paper proposes a GSA variant that incorporates chaos-embedded opposition-based learning into the basic GSA for stagnation-free search. Additionally, a sine-cosine based chaotic gravitational constant is introduced to balance the trade-off between exploration and exploitation more effectively. The proposed variant is tested on 23 classical benchmark problems, 15 test problems of the CEC 2015 test suite, and 15 test problems of the CEC 2014 test suite. Graphical as well as empirical analyses reveal the superiority of the proposed algorithm over conventional meta-heuristics and recent GSA variants.
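
    The idea of a chaotically modulated gravitational constant can be sketched briefly: standard GSA decays the constant as G(t) = G0 * exp(-alpha * t / T), and this variant perturbs that schedule with a chaotic map. The snippet below uses a simple sine map as the chaotic source; the specific map, blending, and constants are illustrative assumptions, not the paper's exact formulation.

```python
import math

def sine_map(x, mu=2.3):
    """One step of a sine chaotic map in (0, 1) (illustrative choice of map)."""
    return abs(mu / 4.0 * math.sin(math.pi * x))

def gsa_gravitational_schedule(T, G0=100.0, alpha=20.0, x0=0.7):
    """Yield a chaotically perturbed gravitational constant per iteration.
    The exponential decay G0*exp(-alpha*t/T) is the standard GSA schedule;
    blending in a chaotic term sketches the paper's idea, not its exact formula."""
    x = x0
    for t in range(T):
        x = sine_map(x)
        base = G0 * math.exp(-alpha * t / T)
        yield base * (0.5 + x)   # chaotic modulation keeps G positive and varying

# illustrative usage
for t, G in enumerate(gsa_gravitational_schedule(5)):
    print(t, round(G, 3))
```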