
    Evolutionary computation in dynamic and uncertain environments

    Copyright @ 2007 Springer-Verlag.

    Genetic algorithms with memory- and elitism-based immigrants in dynamic environments

    Copyright @ 2008 by the Massachusetts Institute of Technology.
    In recent years the genetic algorithm community has shown a growing interest in studying dynamic optimization problems, and several approaches have been devised; the random immigrants and memory schemes are two major ones. The random immigrants scheme addresses dynamic environments by maintaining population diversity, while the memory scheme aims to adapt genetic algorithms quickly to new environments by reusing historical information. This paper investigates a hybrid memory and random immigrants scheme, called memory-based immigrants, and a hybrid elitism and random immigrants scheme, called elitism-based immigrants, for genetic algorithms in dynamic environments. In these schemes, the best individual from the memory or the elite from the previous generation is used as the base from which immigrants are created by mutation and inserted into the population. In this way diversity is not only maintained but also directed toward the current environment, so the genetic algorithm adapts more efficiently. Based on a series of systematically constructed dynamic problems, experiments are carried out to compare genetic algorithms with the memory-based and elitism-based immigrants schemes against genetic algorithms with the traditional memory and random immigrants schemes and against a hybrid memory and multi-population scheme. A sensitivity analysis of some key parameters is also carried out. The experimental results show that the memory-based and elitism-based immigrants schemes efficiently improve the performance of genetic algorithms in dynamic environments.
    This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) of the United Kingdom under Grant EP/E060722/01.
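
    A minimal Python sketch of the elitism-based immigrants idea described above: each generation, a fraction of the population is replaced by mutated copies of the elite individual (in the memory-based variant, the base individual would instead be retrieved from an explicit memory). The function names, the immigrant ratio of 0.2, and the bit-flip probability are illustrative assumptions, not the paper's exact settings.

        import random

        def mutate(bits, p):
            """Flip each bit independently with probability p."""
            return [b ^ (random.random() < p) for b in bits]

        def elitism_based_immigrants(population, fitness, ratio=0.2, p_im=0.01):
            """Replace the worst `ratio` of the population with immigrants
            created by mutating the elite (best) individual."""
            elite = max(population, key=fitness)
            n_im = int(ratio * len(population))
            immigrants = [mutate(elite, p_im) for _ in range(n_im)]
            # Keep the best individuals and swap the rest for the immigrants.
            survivors = sorted(population, key=fitness, reverse=True)[:len(population) - n_im]
            return survivors + immigrants

    Called once per generation after the usual selection, crossover, and mutation step, this keeps the population diverse while biasing the new material toward the region the elite currently occupies.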

    A memetic algorithm with adaptive hill climbing strategy for dynamic optimization problems

    Copyright @ Springer-Verlag 2008.
    Dynamic optimization problems pose a serious challenge to traditional evolutionary algorithms, which, once converged, cannot adapt quickly to environmental changes. This paper investigates the application of memetic algorithms, a class of hybrid evolutionary algorithms, to dynamic optimization problems. An adaptive hill climbing method, which combines the features of greedy crossover-based hill climbing and steepest mutation-based hill climbing, is proposed as the local search technique within the memetic algorithm framework. To address the convergence problem, two diversity-maintaining methods, called adaptive dual mapping and triggered random immigrants, are also introduced into the proposed memetic algorithm for dynamic optimization problems. Based on a series of dynamic problems generated from several stationary benchmark problems, experiments are carried out to compare the performance of the proposed memetic algorithm with that of several peer evolutionary algorithms. The experimental results show the efficiency of the proposed memetic algorithm in dynamic environments.
    This work was supported by the National Natural Science Foundation of China (NSFC) under Grant Nos. 70431003 and 70671020, the National Innovation Research Community Science Foundation of China under Grant No. 60521003, the National Support Plan of China under Grant No. 2006BAH02A09, and the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant EP/E060722/01.
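
    A rough Python sketch of how an adaptive hill climber of this kind might be arranged, assuming a binary encoding: a greedy crossover-based climber copies bits from an elite individual when doing so does not hurt fitness, a steepest mutation-based climber takes the best single-bit flip, and a selection probability is nudged toward whichever climber last produced an improvement. The fixed ±0.1 adjustment, its [0.1, 0.9] bounds, and the step budget are illustrative assumptions, not the paper's exact mechanism.

        import random

        def steepest_mutation_hc(x, fitness):
            """One step of steepest-ascent hill climbing: try every single-bit
            flip and keep the best improving one."""
            best, best_f = x, fitness(x)
            for i in range(len(x)):
                y = x[:]
                y[i] ^= 1
                fy = fitness(y)
                if fy > best_f:
                    best, best_f = y, fy
            return best

        def greedy_crossover_hc(x, elite, fitness):
            """One step of greedy crossover-based hill climbing: copy bits from
            an elite individual one position at a time, keeping each copy only
            if it does not reduce fitness."""
            y, fy = x[:], fitness(x)
            for i in random.sample(range(len(x)), len(x)):
                if y[i] != elite[i]:
                    z = y[:]
                    z[i] = elite[i]
                    fz = fitness(z)
                    if fz >= fy:
                        y, fy = z, fz
            return y

        def adaptive_hill_climbing(x, elite, fitness, p_greedy=0.5, steps=10):
            """Interleave the two climbers, shifting the selection probability
            toward whichever climber improved fitness most recently."""
            for _ in range(steps):
                fx = fitness(x)
                if random.random() < p_greedy:
                    x_new = greedy_crossover_hc(x, elite, fitness)
                    improved = fitness(x_new) > fx
                    p_greedy = min(0.9, p_greedy + 0.1) if improved else max(0.1, p_greedy - 0.1)
                else:
                    x_new = steepest_mutation_hc(x, fitness)
                    improved = fitness(x_new) > fx
                    p_greedy = max(0.1, p_greedy - 0.1) if improved else min(0.9, p_greedy + 0.1)
                x = x_new
            return x

    In a memetic algorithm this local search would be applied to selected individuals after the usual evolutionary operators; the diversity-maintaining methods mentioned in the abstract are separate mechanisms and are not sketched here.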

    Population-based incremental learning with memory scheme for changing environments

    Copyright @ 2005 ACM.
    In recent years there has been a growing interest in studying evolutionary algorithms for dynamic optimization problems because of their importance in real-world applications, and several approaches, such as the memory scheme, have been developed. This paper investigates the application of the memory scheme to population-based incremental learning (PBIL) algorithms, a class of evolutionary algorithms, for dynamic optimization problems. A PBIL-specific memory scheme is proposed to improve its adaptability in dynamic environments. In this memory scheme, the working probability vector is stored in the memory together with the best sample it creates and is used to reactivate old environments when a change occurs. An experimental study based on a series of dynamic environments shows the efficiency of the memory scheme for PBILs in dynamic environments. The relationship between the memory scheme and the multi-population scheme for PBILs in dynamic environments is also investigated. The experimental results indicate that the multi-population scheme interacts negatively with the memory scheme for PBILs in the dynamic test environments.
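
    A compact Python sketch of PBIL with such a memory, under assumed settings (population size 100, learning rate 0.05, a memory of five entries refreshed every ten generations, and a user-supplied change detector): the working probability vector is stored together with the best sample it generated, and when a change is detected the stored vector whose sample re-evaluates best replaces the working vector.

        import random

        def sample(p):
            """Draw one binary solution from the probability vector p."""
            return [1 if random.random() < pi else 0 for pi in p]

        def pbil_generation(p, fitness, pop_size=100, lr=0.05):
            """One PBIL generation: sample a population from p and shift p
            toward the best sample found."""
            pop = [sample(p) for _ in range(pop_size)]
            best = max(pop, key=fitness)
            p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, best)]
            return p, best

        def pbil_with_memory(fitness, n, generations, change_detected,
                             memory_size=5, update_every=10):
            """PBIL with an explicit memory: (probability vector, best sample)
            pairs are stored periodically; when a change is detected, the stored
            vector whose sample re-evaluates best replaces the working vector."""
            p = [0.5] * n
            memory = []
            for t in range(generations):
                p, best = pbil_generation(p, fitness)
                if t % update_every == 0:
                    memory.append((p[:], best))
                    memory = memory[-memory_size:]   # keep a bounded memory
                if change_detected(t) and memory:
                    p = max(memory, key=lambda m: fitness(m[1]))[0][:]
            return p

    A real implementation would also need a concrete change detector and a memory update/replacement policy; the fixed-interval sliding window and the change_detected callback here are placeholders.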

    How Crossover Speeds Up Building-Block Assembly in Genetic Algorithms

    We re-investigate a fundamental question: how effective is crossover in genetic algorithms at combining building blocks of good solutions? Although this has been discussed controversially for decades, a rigorous and intuitive answer is still lacking. We provide such answers for royal road functions and OneMax, where every bit is a building block. For the latter we show that using crossover makes every (μ+λ) genetic algorithm at least twice as fast as the fastest evolutionary algorithm using only standard bit mutation, up to small-order terms and for moderate μ and λ. Crossover is beneficial because it effectively turns fitness-neutral mutations into improvements by combining the right building blocks at a later stage. Compared to mutation-based evolutionary algorithms, this makes multi-bit mutations more useful. Introducing crossover changes the optimal mutation rate on OneMax from 1/n to (1+√5)/2 · 1/n ≈ 1.618/n. This holds both for uniform crossover and k-point crossover. Experiments and statistical tests confirm that our findings apply to a broad class of building-block functions.
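
    As a concrete illustration, here is a small Python sketch of a (μ+λ) GA with uniform crossover on OneMax that uses the crossover-adjusted mutation rate quoted above, (1+√5)/2 · 1/n ≈ 1.618/n. The population sizes, the crossover probability of 0.5, and the stopping criterion are arbitrary choices for the demo, not the parameters analysed in the paper.

        import random

        def onemax(x):
            """OneMax: the number of ones; every bit is a building block."""
            return sum(x)

        def uniform_crossover(a, b):
            """Take each bit from either parent with probability 1/2."""
            return [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]

        def mutate(x, rate):
            """Standard bit mutation: flip each bit independently with probability rate."""
            return [b ^ (random.random() < rate) for b in x]

        def mu_plus_lambda_ga(n=100, mu=5, lam=5, crossover_prob=0.5, rate=None):
            """A (mu+lambda) GA with uniform crossover on OneMax, using the
            crossover-adjusted mutation rate (1 + sqrt(5)) / 2 / n by default.
            Returns the number of fitness evaluations until the optimum is found."""
            rate = rate if rate is not None else (1 + 5 ** 0.5) / 2 / n
            pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(mu)]
            evals = mu
            while max(map(onemax, pop)) < n:
                offspring = []
                for _ in range(lam):
                    if random.random() < crossover_prob:
                        a, b = random.sample(pop, 2)   # two distinct parents
                        child = uniform_crossover(a, b)
                    else:
                        child = random.choice(pop)[:]
                    offspring.append(mutate(child, rate))
                evals += lam
                # Plus selection: keep the mu best of parents and offspring.
                pop = sorted(pop + offspring, key=onemax, reverse=True)[:mu]
            return evals

    Averaging the returned evaluation counts over repeated runs, with crossover_prob set to 0.0 versus 0.5, gives a quick empirical feel for the kind of speed-up the paper proves.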

    Modelling Genetic Algorithms and Evolving Populations

    A formalism for modelling the dynamics of genetic algorithms using methods from statistical physics, originally due to Prügel-Bennett and Shapiro, is extended to ranking selection, a form of selection commonly used in the genetic algorithm community. The extension allows a reduction in the number of macroscopic variables required to model the mean behaviour of the genetic algorithm. This reduction allows a more qualitative understanding of the dynamics to be developed without sacrificing quantitative accuracy. The work is then extended beyond modelling the dynamics of the genetic algorithm. A caricature of an optimisation problem with many local minima is considered — the basin with a barrier problem. The first passage time — the time required to escape the local minima and reach the global minimum — is calculated, giving insight into how the genetic algorithm searches the landscape. The interaction of the various genetic algorithm operators, and how these interactions give rise to optimal parameter values, is also studied.
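
    The thesis derives the first passage time analytically from the macroscopic dynamics; as a rough complement, the Python sketch below estimates the same quantity by Monte Carlo simulation, using a hypothetical stand-in for the basin-with-a-barrier landscape (OneMax with a penalised band of bit counts just below the optimum) and a simple truncation-selection, mutation-only GA. Both the landscape and the algorithm settings are illustrative assumptions, not the thesis's model.

        import random

        def barrier_fitness(x, barrier=2):
            """A toy 'basin with a barrier' landscape (an illustrative stand-in,
            not the thesis's exact problem): fitness counts ones, except that a
            band of bit counts just below all-ones is penalised, so the search
            must cross a region of lower fitness to reach the global optimum."""
            n, k = len(x), sum(x)
            if n - barrier <= k < n:
                return n - barrier - 2    # the barrier: worse than the local optimum
            return k

        def first_passage_time(n=10, pop_size=20, p_mut=None, runs=20):
            """Estimate the mean first passage time (generations until the global
            optimum is first sampled) of a mutation-only GA with truncation
            selection, averaged over repeated runs."""
            p_mut = p_mut if p_mut is not None else 1.0 / n
            total = 0
            for _ in range(runs):
                pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
                t = 0
                while max(sum(x) for x in pop) < n:
                    t += 1
                    # Keep the better half, then refill with bit-flip-mutated offspring.
                    parents = sorted(pop, key=barrier_fitness, reverse=True)[:pop_size // 2]
                    offspring = [[b ^ (random.random() < p_mut) for b in random.choice(parents)]
                                 for _ in range(pop_size - len(parents))]
                    pop = parents + offspring
                total += t
            return total / runs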