
    Intelligent Leukaemia Diagnosis with Bare-Bones PSO based Feature Optimization

    In this research, we propose an intelligent decision support system for acute lymphoblastic leukaemia (ALL) diagnosis using microscopic images. Two Bare-Bones Particle Swarm Optimization (BBPSO) algorithms are proposed to identify the most significant discriminative characteristics of healthy and blast cells to enable efficient ALL classification. The first BBPSO variant incorporates accelerated chaotic search mechanisms of food chasing and enemy avoidance to diversify the search and mitigate the premature convergence of the original BBPSO algorithm. The second BBPSO variant embeds both of the above search mechanisms in a subswarm-based search. Evaluated on the ALL-IDB2 database, the two proposed algorithms achieve superior geometric mean performances of 94.94% and 96.25%, respectively, and significantly outperform other metaheuristic search and related methods for ALL classification.
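
    The abstract does not include algorithmic details, but the canonical bare-bones PSO update that both variants build on can be sketched as follows. The chaotic food-chasing and enemy-avoidance mechanisms and the subswarm scheme are specific to the paper and are not reproduced here; the function and parameter names below (fitness_fn, bounds, n_particles) are illustrative assumptions, not the authors' implementation.

        # Minimal sketch of the canonical bare-bones PSO (BBPSO) position update,
        # assuming minimisation. All names and defaults are illustrative.
        import numpy as np

        def bbpso(fitness_fn, dim, n_particles=30, n_iters=200, bounds=(-5.0, 5.0), seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            pos = rng.uniform(lo, hi, size=(n_particles, dim))
            pbest = pos.copy()
            pbest_fit = np.array([fitness_fn(p) for p in pos])
            gbest = pbest[np.argmin(pbest_fit)].copy()
            for _ in range(n_iters):
                # BBPSO has no velocity term: each new position is a Gaussian sample
                # centred on the midpoint of pbest and gbest, with their distance as std.
                mean = (pbest + gbest) / 2.0
                std = np.abs(pbest - gbest) + 1e-12   # avoid zero std once converged
                pos = np.clip(rng.normal(mean, std), lo, hi)
                fit = np.array([fitness_fn(p) for p in pos])
                improved = fit < pbest_fit
                pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
                gbest = pbest[np.argmin(pbest_fit)].copy()
            return gbest, pbest_fit.min()

        # Example: minimise the 10-dimensional sphere function.
        best_x, best_f = bbpso(lambda x: float(np.sum(x * x)), dim=10)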

    An Improved Bare-Bones Particle Swarm Algorithm for Multi-Objective Optimization with Application to the Engineering Structures

    In this paper, an improved bare-bones multi-objective particle swarm algorithm is proposed to solve multi-objective size optimization problems with non-linearity and constraints in structural design and optimization. First, the particle position update rule is modified to strengthen the guidance of individual particles and increase the randomness of the gravity factor. Then, a combination of spatial grid density and crowding distance ranking is used to maintain the external archive, which is divided into a feasible solution set and an infeasible solution set. Next, the global best positions are selected using a time-varying probability allocation strategy. The algorithmic complexity is given, and solution quality, convergence, and constraint handling are analyzed on standard test functions and compared with other algorithms. Finally, as a case study, a support frame of a triangle track wheel is optimized by BB-MOPSO and the improved BB-MOPSO. The results show that the improved algorithm enhances the cross-region exploration, optimal solution distribution, and convergence of the bare-bones particle swarm optimization algorithm and can effectively solve multi-objective size optimization problems with non-linearity and constraints.
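
    As a hedged illustration of one building block named in the abstract, the sketch below prunes an external archive by crowding distance. The spatial grid density measure, the feasible/infeasible split, and the time-varying probability allocation are not shown; all names here are assumptions made for illustration rather than the authors' code.

        # Crowding-distance pruning of an external archive (illustrative only).
        import numpy as np

        def crowding_distance(objectives):
            # objectives: (n_solutions, n_objectives) array of minimisation objectives.
            n, m = objectives.shape
            dist = np.zeros(n)
            for j in range(m):
                order = np.argsort(objectives[:, j])
                f = objectives[order, j]
                dist[order[0]] = dist[order[-1]] = np.inf   # always keep boundary solutions
                span = f[-1] - f[0]
                if span > 0:
                    dist[order[1:-1]] += (f[2:] - f[:-2]) / span
            return dist

        def prune_archive(archive, objectives, max_size):
            # Drop the most crowded members until the archive fits its size limit.
            if len(archive) <= max_size:
                return archive, objectives
            keep = np.argsort(-crowding_distance(objectives))[:max_size]
            return [archive[i] for i in keep], objectives[keep]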

    A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Particle swarm optimization (PSO) is a heuristic global optimization method, originally proposed by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On one hand, we review advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, Tabu search, artificial immune systems, ant colony optimization, artificial bee colony, differential evolution, harmony search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (multicore, multiprocessor, GPU, and cloud computing). On the other hand, we survey applications of PSO in the following fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.
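
    For reference, the canonical global-best PSO update that the surveyed modifications start from can be sketched as below. The inertia weight and acceleration coefficients are common textbook defaults, not values prescribed by the survey, and all names are illustrative.

        # Canonical (gbest / fully connected topology) PSO for minimisation.
        import numpy as np

        def pso(fitness_fn, dim, n_particles=30, n_iters=200, bounds=(-5.0, 5.0),
                w=0.729, c1=1.49445, c2=1.49445, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            pos = rng.uniform(lo, hi, size=(n_particles, dim))
            vel = np.zeros((n_particles, dim))
            pbest, pbest_fit = pos.copy(), np.array([fitness_fn(p) for p in pos])
            gbest = pbest[np.argmin(pbest_fit)].copy()
            for _ in range(n_iters):
                r1, r2 = rng.random((2, n_particles, dim))
                # Velocity: inertia + cognitive pull towards pbest + social pull towards gbest.
                vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
                pos = np.clip(pos + vel, lo, hi)
                fit = np.array([fitness_fn(p) for p in pos])
                improved = fit < pbest_fit
                pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
                gbest = pbest[np.argmin(pbest_fit)].copy()
            return gbest, pbest_fit.min()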

    Random drift particle swarm optimization algorithm: convergence analysis and parameter selection


    Differential evolution with an evolution path: a DEEP evolutionary algorithm

    Utilizing cumulative correlation information already present in the evolutionary process, this paper proposes a predictive approach to the reproduction of new individuals in differential evolution (DE) algorithms. DE uses a distributed model (DM) to generate new individuals, which is relatively explorative, whilst evolution strategy (ES) uses a centralized model (CM) to generate offspring, which retains a convergence momentum through adaptation. This paper adopts a key feature of the CM of covariance matrix adaptation ES, the cumulatively learned evolution path (EP), to formulate a new evolutionary algorithm (EA) framework, termed DEEP, standing for DE with an EP. Rather than mechanistically combining a CM-based and a DM-based algorithm, the DEEP framework offers the advantages of both a DM and a CM and hence substantially enhances performance. Under this architecture, a self-adaptation mechanism can be built into a DEEP algorithm, easing the task of predetermining algorithm control parameters. Two DEEP variants are developed and illustrated in the paper. Experiments on the CEC'13 test suites and two practical problems demonstrate that the DEEP algorithms offer promising results compared with the original DEs and other relevant state-of-the-art EAs.
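
    The sketch below illustrates the idea only: a plain DE/rand/1/bin reproduction step biased by a cumulatively learned evolution path, taken here as an exponentially smoothed shift of the population mean. It is a conceptual approximation under stated assumptions, not the authors' DEEP algorithm; F, CR, c_path, and all other names are illustrative.

        # DE/rand/1/bin with an evolution-path-style directional bias (illustrative).
        import numpy as np

        def de_with_evolution_path(fitness_fn, dim, n_pop=50, n_gens=300,
                                   bounds=(-5.0, 5.0), F=0.5, CR=0.9, c_path=0.3, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            pop = rng.uniform(lo, hi, size=(n_pop, dim))
            fit = np.array([fitness_fn(x) for x in pop])
            path = np.zeros(dim)                  # cumulatively learned evolution path
            prev_mean = pop.mean(axis=0)
            for _ in range(n_gens):
                for i in range(n_pop):
                    a, b, c = rng.choice(np.delete(np.arange(n_pop), i), size=3, replace=False)
                    # DE/rand/1 mutation plus a pull along the evolution path.
                    mutant = pop[a] + F * (pop[b] - pop[c]) + F * path
                    cross = rng.random(dim) < CR
                    cross[rng.integers(dim)] = True          # binomial crossover, >= 1 gene
                    trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
                    f_trial = fitness_fn(trial)
                    if f_trial <= fit[i]:                    # greedy one-to-one selection
                        pop[i], fit[i] = trial, f_trial
                mean = pop.mean(axis=0)
                path = (1 - c_path) * path + c_path * (mean - prev_mean)
                prev_mean = mean
            best = int(np.argmin(fit))
            return pop[best], fit[best]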

    Genetic learning particle swarm optimization

    Social learning in particle swarm optimization (PSO) helps collective efficiency, whereas individual reproduction in genetic algorithms (GA) facilitates global effectiveness. This observation has recently led to hybridizing PSO with GA for performance enhancement. However, existing work uses a mechanistic parallel superposition, and research has shown that constructing superior exemplars for PSO is more effective. Hence, this paper first develops a new framework to organically hybridize PSO with another optimization technique for “learning.” This leads to a generalized “learning PSO” paradigm, the *L-PSO. The paradigm is composed of two cascading layers: the first for exemplar generation and the second for particle updates as in a normal PSO algorithm. Using genetic evolution to breed promising exemplars for PSO, a specific novel *L-PSO algorithm is proposed in the paper, termed genetic learning PSO (GL-PSO). In particular, genetic operators are used to generate exemplars from which particles learn, and, in turn, the historical search information of particles guides the evolution of the exemplars. By performing crossover, mutation, and selection on the historical information of particles, the constructed exemplars are not only well diversified but also of high quality. Under such guidance, both the global search ability and the search efficiency of PSO are enhanced. The proposed GL-PSO is tested on 42 benchmark functions widely adopted in the literature. Experimental results verify the effectiveness, efficiency, robustness, and scalability of GL-PSO.
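
    A simplified sketch of the two cascading layers follows: a genetic layer breeds an exemplar for each particle from historical bests, and a learning layer updates the particle towards its exemplar. It approximates the *L-PSO structure rather than reproducing the exact GL-PSO operators (the selection step that retains an exemplar only while it keeps improving its particle is omitted); pm and the other names are assumptions made for illustration.

        # Two-layer *L-PSO sketch: genetic exemplar construction + exemplar-guided update.
        import numpy as np

        def build_exemplar(pbest_i, gbest, rng, bounds=(-5.0, 5.0), pm=0.1):
            lo, hi = bounds
            # Crossover: blend the particle's own pbest with the global best per dimension.
            mask = rng.random(pbest_i.shape) < 0.5
            r = rng.random(pbest_i.shape)
            exemplar = np.where(mask, r * pbest_i + (1.0 - r) * gbest, gbest)
            # Mutation: reinitialise a few dimensions uniformly at random.
            mutate = rng.random(pbest_i.shape) < pm
            exemplar[mutate] = rng.uniform(lo, hi, size=int(mutate.sum()))
            return exemplar

        def learning_step(pos, vel, exemplars, rng, w=0.7, c=1.5, bounds=(-5.0, 5.0)):
            # Learning layer: each particle is attracted to its own exemplar only.
            r = rng.random(pos.shape)
            vel = w * vel + c * r * (exemplars - pos)
            pos = np.clip(pos + vel, *bounds)
            return pos, vel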