Genetic learning particle swarm optimization
Social learning in particle swarm optimization (PSO) helps collective efficiency, whereas individual reproduction in genetic algorithms (GAs) facilitates global effectiveness. This observation has recently led to hybridizing PSO with GA for performance enhancement. However, existing work uses a mechanistic parallel superposition, and research has shown that constructing superior exemplars in PSO is more effective. Hence, this paper first develops a new framework that organically hybridizes PSO with another optimization technique for "learning." This leads to a generalized "learning PSO" paradigm, the *L-PSO. The paradigm is composed of two cascading layers, the first for exemplar generation and the second for particle updates as in a normal PSO algorithm. Using genetic evolution to breed promising exemplars for PSO, a specific novel *L-PSO algorithm is proposed, termed genetic learning PSO (GL-PSO). In particular, genetic operators are used to generate exemplars from which particles learn and, in turn, the historical search information of particles guides the evolution of the exemplars. By performing crossover, mutation, and selection on the historical information of particles, the constructed exemplars are not only well diversified but also of high quality. Under such guidance, both the global search ability and the search efficiency of PSO are enhanced. The proposed GL-PSO is tested on 42 benchmark functions widely adopted in the literature. Experimental results verify the effectiveness, efficiency, robustness, and scalability of GL-PSO
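The two-layer idea can be made concrete in a minimal sketch: a first layer breeds exemplars by crossover (between a particle's personal best and the global best), mutation, and greedy selection, and a second layer runs a standard PSO update toward the surviving exemplar. The constants (inertia 0.7, acceleration 1.5, mutation scale 0.1) and the sphere test function are illustrative assumptions, not the paper's settings.

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def glpso_sketch(dim=5, swarm=20, iters=200, seed=0):
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    pbest = [x[:] for x in X]
    pfit = [sphere(x) for x in X]
    g = min(range(swarm), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]
    E = [x[:] for x in pbest]     # layer 1: exemplars seeded from pbests
    efit = pfit[:]
    for _ in range(iters):
        for i in range(swarm):
            # crossover on historical information: mix pbest with gbest
            cand = [pbest[i][d] if rng.random() < 0.5 else gbest[d]
                    for d in range(dim)]
            # mutation: perturb one randomly chosen dimension
            d = rng.randrange(dim)
            cand[d] += rng.gauss(0.0, 0.1)
            # selection: keep the new exemplar only if it improves
            cf = sphere(cand)
            if cf < efit[i]:
                E[i], efit[i] = cand, cf
            # layer 2: normal PSO update, learning from the exemplar
            for d in range(dim):
                V[i][d] = 0.7 * V[i][d] + 1.5 * rng.random() * (E[i][d] - X[i][d])
                X[i][d] += V[i][d]
            f = sphere(X[i])
            if f < pfit[i]:
                pbest[i], pfit[i] = X[i][:], f
                if f < gfit:
                    gbest, gfit = X[i][:], f
    return gfit
```

The greedy selection step is what distinguishes this from a plain superposition: exemplar quality is monotonically non-decreasing, so particles are never guided by a worse exemplar than before.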
Dual population-based incremental learning for problem optimization in dynamic environments
Copyright @ 2003 Asia Pacific Symposium on Intelligent and Evolutionary Systems
In recent years there has been growing interest in research on evolutionary algorithms for dynamic optimization problems, since real-world problems are usually dynamic, which presents serious challenges to traditional evolutionary algorithms. In this paper, we investigate the application of Population-Based Incremental Learning (PBIL) algorithms, a class of evolutionary algorithms, to problem optimization under dynamic environments. Inspired by the complementarity mechanism in nature, we propose a Dual PBIL that operates on two probability vectors that are dual to each other with respect to the central point of the search space. Using a dynamic-problem generating technique, we generate a series of dynamic knapsack problems from a randomly generated stationary knapsack problem and carry out an experimental study comparing the performance of the investigated PBILs and one traditional genetic algorithm. Experimental results show that introducing dualism into PBIL improves its adaptability under dynamic environments, especially when the environment is subject to significant changes in the genotype space
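The dual probability vector is simply the reflection of the main vector through the centre (0.5 in each component) of binary probability space, so when the environment flips many genotype bits, the dual is already near the new optimum. A hedged sketch, with an assumed OneMax-style fitness, learning rate, and sample count for illustration:

```python
import random

def sample(p, rng):
    # draw a binary solution from a probability vector
    return [1 if rng.random() < pi else 0 for pi in p]

def dual(p):
    # dual vector: reflection through the central point 0.5
    return [1.0 - pi for pi in p]

def pbil_dual_step(p, fitness, rng, lr=0.1, samples=10):
    # sample from both the main vector and its dual, then learn the
    # main vector toward the best solution found in either pool
    pool = [sample(p, rng) for _ in range(samples)]
    pool += [sample(dual(p), rng) for _ in range(samples)]
    best = max(pool, key=fitness)
    return [(1 - lr) * pi + lr * b for pi, b in zip(p, best)], best
```

Because the dual is maintained implicitly as `1 - p`, a severe change that inverts the fitness landscape costs nothing: the dual pool immediately supplies near-optimal samples for the new environment.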
A comparative study of immune system based genetic algorithms in dynamic environments
Copyright @ 2006 ACM
Diversity and memory are two major mechanisms used in biology to maintain the adaptability of organisms in the ever-changing natural environment. These mechanisms can be integrated into genetic algorithms (GAs) to enhance their performance on optimization problems in dynamic environments. This paper investigates several GAs inspired by the biological immune system and by transformation schemes for dynamic optimization problems. An aligned transformation operator is proposed and combined with an immune system based genetic algorithm to deal with dynamic environments. Using a series of systematically constructed dynamic test problems, experiments are carried out to compare several immune system based genetic algorithms, including the proposed one, and two standard genetic algorithms enhanced with memory and random immigrants, respectively. The experimental results validate the efficiency of the proposed aligned transformation and the corresponding immune system based genetic algorithm in dynamic environments
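The abstract does not detail the aligned transformation operator itself. One plausible reading, sketched below with assumed helper names `harvest` and `aligned_transform`, is that gene segments taken from good individuals carry their original locus with them and are re-inserted at that same locus, so that transferred genes stay aligned across chromosomes rather than landing at a random position:

```python
import random

def harvest(chrom, rng, seg_len=3):
    # take a gene segment from a good individual, remembering its locus
    start = rng.randrange(len(chrom) - seg_len + 1)
    return (start, chrom[start:start + seg_len])

def aligned_transform(chrom, pool, rng):
    # aligned transformation: the foreign segment is spliced back in at
    # the locus it was harvested from, keeping positions consistent
    start, seg = rng.choice(pool)
    return chrom[:start] + seg + chrom[start + len(seg):]
```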
Paired Comparisons-based Interactive Differential Evolution
We propose Interactive Differential Evolution (IDE) based on paired comparisons for reducing user fatigue, and evaluate its convergence speed in comparison with Interactive Genetic Algorithms (IGA) and tournament IGA. User interface and convergence performance are the two big keys to reducing Interactive Evolutionary Computation (IEC) user fatigue. Unlike IGA and conventional IDE, users of the proposed IDE and of tournament IGA do not need to compare all individuals with each other but only pairs of individuals, which largely decreases user fatigue. In this paper, we design a pseudo-IEC user, evaluate the other factor, IEC convergence performance, using IEC simulators, and show that our proposed IDE converges significantly faster than IGA and tournament IGA; i.e., the proposed method is superior from both the user-interface and convergence-performance points of view
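Paired comparison fits differential evolution naturally, because DE's survivor selection is already a one-on-one contest between each trial vector and its target. A minimal sketch of one generation of DE/rand/1/bin where the only fitness signal is a pairwise preference function (here, a pseudo-user; the population size, `F`, and `CR` values are illustrative assumptions):

```python
import random

def de_paired_step(pop, prefer, rng, F=0.5, CR=0.9):
    # one generation of DE/rand/1/bin; selection asks the (pseudo-)user
    # only to compare the trial vector against its target, one pair at a time
    n, dim = len(pop), len(pop[0])
    out = []
    for i, target in enumerate(pop):
        a, b, c = rng.sample([j for j in range(n) if j != i], 3)
        jr = rng.randrange(dim)       # guaranteed crossover position
        trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                 if (rng.random() < CR or d == jr) else target[d]
                 for d in range(dim)]
        out.append(trial if prefer(trial, target) else target)
    return out
```

A pseudo-IEC user in the paper's sense can then be simulated by any preference function; with a numeric target (e.g. "prefer the candidate closer to the optimum"), the comparison count per generation is exactly the population size, versus ranking the whole population in conventional IEC.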
Ergonomic Chair Design by Fusing Qualitative and Quantitative Criteria using Interactive Genetic Algorithms
This paper emphasizes the necessity of formally bringing qualitative and quantitative criteria of ergonomic design together, and provides a novel complementary design framework with this aim. Within this framework, different design criteria are viewed as optimization objectives, and design solutions are iteratively improved through the cooperative efforts of computer and user. The framework is rooted in multi-objective optimization, genetic algorithms, and interactive user evaluation. Three different algorithms based on the framework are developed and tested on an ergonomic chair design problem. The parallel and multi-objective approaches show promising results on fitness convergence, design diversity, and user satisfaction metrics
A novel two-archive strategy for evolutionary many-objective optimization algorithm based on reference points
Current evolutionary many-objective optimization algorithms face two challenges: one is to ensure population diversity for searching the entire solution space; the other is to ensure quick convergence to the optimal solution set. In this paper, we propose a novel two-archive strategy for evolutionary many-objective optimization algorithms. The uniform archive strategy, based on reference points, is used to keep population diversity during the evolutionary process and to ensure that the evolutionary algorithm is able to search the entire solution space. The single elite archive strategy is used to ensure that individuals with the best value on a single objective evolve into the next generation and have more opportunities to generate offspring; this strategy aims to improve the convergence rate. The novel two-archive strategy is then applied to improve the Non-dominated Sorting Genetic Algorithm III (NSGA-III). Simulation experiments are conducted on benchmark test sets, and the results show that our proposed algorithm with the two-archive strategy performs better than other state-of-the-art algorithms
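The two archives can be sketched compactly: the uniform archive keeps at most one solution per reference point (the one nearest to it in objective space), while the elite archive keeps the best solution on each individual objective. This is a hedged illustration only; the assumed names `update_archives` and `objv`, and the use of plain Euclidean distance for association, are simplifications of the reference-point association used in NSGA-III-style algorithms.

```python
def update_archives(pop, objv, refs):
    """pop: candidate solutions; objv(x): objective vector (minimised);
    refs: reference points spread uniformly over objective space."""
    m = len(refs[0])
    # single elite archive: the best solution on each separate objective
    elite = [min(pop, key=lambda x: objv(x)[k]) for k in range(m)]

    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    # uniform archive: at most one solution per reference point,
    # keeping whichever solution lies nearest to that point
    uniform = {}
    for x in pop:
        r = min(range(len(refs)), key=lambda j: dist(objv(x), refs[j]))
        if r not in uniform or dist(objv(x), refs[r]) < dist(objv(uniform[r]), refs[r]):
            uniform[r] = x
    return elite, list(uniform.values())
```

The cap of one solution per reference point is what enforces diversity (the uniform archive cannot collapse into a crowded region), while the per-objective elites are injected regardless of crowding, pulling the population toward the extremes of the Pareto front.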
- …