
    A memetic algorithm with adaptive hill climbing strategy for dynamic optimization problems

    Copyright @ Springer-Verlag 2008. Dynamic optimization problems seriously challenge traditional evolutionary algorithms because, once converged, they cannot adapt quickly to environmental changes. This paper investigates the application of memetic algorithms, a class of hybrid evolutionary algorithms, to dynamic optimization problems. An adaptive hill climbing method is proposed as the local search technique within the memetic algorithm framework; it combines the features of greedy crossover-based hill climbing and steepest mutation-based hill climbing. To address the convergence problem, two diversity maintaining methods, adaptive dual mapping and triggered random immigrants, are also introduced into the proposed memetic algorithm for dynamic optimization problems. Based on a series of dynamic problems generated from several stationary benchmark problems, experiments are carried out to compare the performance of the proposed memetic algorithm with several peer evolutionary algorithms. The experimental results show the efficiency of the proposed memetic algorithm in dynamic environments. This work was supported by the National Natural Science Foundation of China (NSFC) under Grant Nos. 70431003 and 70671020, the National Innovation Research Community Science Foundation of China under Grant No. 60521003, the National Support Plan of China under Grant No. 2006BAH02A09, and the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant EP/E060722/01.
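
    The adaptive hill climbing idea can be illustrated with a minimal sketch in Python (not the authors' implementation; the operator details and the adaptation rule below are simplifying assumptions): the algorithm keeps a greedy crossover-based and a steepest mutation-based local search operator, and shifts the probability of applying each toward whichever one has recently improved fitness.

        import random

        def adaptive_hill_climbing(individual, fitness, elite, steps=10):
            # Greedy crossover-based HC: mix genes with an elite individual,
            # keep the child only if it is fitter (illustrative placeholder).
            def crossover_hc(ind):
                child = [g if random.random() < 0.5 else e for g, e in zip(ind, elite)]
                return child if fitness(child) > fitness(ind) else ind

            # Steepest mutation-based HC: flip the single best-improving bit.
            def mutation_hc(ind):
                best = ind
                for i in range(len(ind)):
                    cand = ind[:]
                    cand[i] = 1 - cand[i]
                    if fitness(cand) > fitness(best):
                        best = cand
                return best

            p_xhc = 0.5  # probability of choosing the crossover-based operator
            for _ in range(steps):
                before = fitness(individual)
                use_xhc = random.random() < p_xhc
                individual = crossover_hc(individual) if use_xhc else mutation_hc(individual)
                # Adapt the choice probability toward the operator that just improved fitness.
                if fitness(individual) > before:
                    p_xhc = min(0.9, p_xhc + 0.1) if use_xhc else max(0.1, p_xhc - 0.1)
            return individual

        # Toy usage with a OneMax-style fitness on binary lists:
        improved = adaptive_hill_climbing([0] * 10, fitness=sum, elite=[1] * 10)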

    PDGA: The primal-dual genetic algorithm

    Copyright @ 2003 IOS Press. Genetic algorithms (GAs) are a class of search algorithms based on principles of natural evolution; hence, incorporating mechanisms used in nature may improve their performance. In this paper, inspired by the mechanisms of complementarity and dominance that exist broadly in nature, we present a new genetic algorithm, the Primal-Dual Genetic Algorithm (PDGA). PDGA operates on pairs of chromosomes that are primal-dual to each other through a primal-dual mapping, which maps a chromosome to the one at maximum distance from it in a given genotype distance space. The primal-dual mapping improves the exploration capacity of PDGA and thus its search efficiency. To test the performance of PDGA, experiments were carried out comparing PDGA with a traditional simple GA (SGA) and a peer GA, the Dual Genetic Algorithm (DGA), on a typical set of test problems. The experimental results demonstrate that PDGA outperforms both SGA and DGA on the test set, showing that PDGA is a good candidate genetic algorithm.
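
    For binary-encoded chromosomes with Hamming distance as the genotype metric, the primal-dual mapping described above has a particularly simple form: the dual of a chromosome is its bitwise complement, since that is the string at maximum Hamming distance. A minimal Python sketch (illustrative, not the paper's code):

        def primal_dual_mapping(chromosome):
            # In Hamming space the dual of a binary chromosome is its bitwise
            # complement: every allele differs, so the distance is maximal.
            return [1 - gene for gene in chromosome]

        # Example: the dual of [1, 0, 1, 1] is [0, 1, 0, 0].
        assert primal_dual_mapping([1, 0, 1, 1]) == [0, 1, 0, 0]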

    Comparing and Combining Lexicase Selection and Novelty Search

    Lexicase selection and novelty search, two parent selection methods used in evolutionary computation, emphasize exploring the search space more widely than traditional methods such as tournament selection. However, lexicase selection is not explicitly driven to select for novelty in the population, and novelty search suffers from a lack of direction toward a goal, especially in unconstrained, high-dimensional spaces. We combine the strengths of lexicase selection and novelty search by creating a novelty score for each test case and adding those novelty scores to the normal error values used in lexicase selection. We use this new novelty-lexicase selection to solve automatic program synthesis problems and find that it significantly outperforms both novelty search and lexicase selection. Additionally, we find that novelty search has very little success in the problem domain of program synthesis. We explore the effects of each of these methods on population diversity and long-term problem-solving performance, and give evidence to support the hypothesis that novelty-lexicase selection resists converging to local optima better than lexicase selection.
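
    One way to realise the combination described above is to add a per-test-case novelty term to each error value and then run standard lexicase selection on the combined scores. A simplified Python sketch (the sign convention and the weighting of novelty against error are assumptions, not the paper's exact formulation):

        import random

        def novelty_lexicase_select(population, errors, novelty, weight=1.0):
            # errors[i][t]: error of individual i on test case t (lower is better)
            # novelty[i][t]: per-test-case novelty of individual i, converted here
            #                so that lower values mean "more novel"
            n_cases = len(errors[0])
            combined = [[errors[i][t] + weight * novelty[i][t] for t in range(n_cases)]
                        for i in range(len(population))]

            candidates = list(range(len(population)))
            cases = list(range(n_cases))
            random.shuffle(cases)  # lexicase uses a fresh random case ordering per selection
            for t in cases:
                best = min(combined[i][t] for i in candidates)
                candidates = [i for i in candidates if combined[i][t] == best]
                if len(candidates) == 1:
                    break
            return population[random.choice(candidates)]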

    Adaptive primal-dual genetic algorithms in dynamic environments

    This article is placed here with permission of IEEE - Copyright @ 2010 IEEE. Recently, there has been an increasing interest in applying genetic algorithms (GAs) in dynamic environments. Inspired by the complementary and dominance mechanisms in nature, a primal-dual GA (PDGA) has been proposed for dynamic optimization problems (DOPs). In this paper, an important operator in PDGA, the primal-dual mapping (PDM) scheme, is further investigated to improve the robustness and adaptability of PDGA in dynamic environments. In the improved scheme, two different probability-based PDM operators, in which the mapping probability of each allele in the chromosome string is calculated from statistical information on the distribution of alleles at the corresponding gene locus over the population, are effectively combined according to an adaptive Lamarckian learning mechanism. In addition, an adaptive dominant replacement scheme, which can probabilistically accept inferior chromosomes, is introduced into the proposed algorithm to enhance the diversity level of the population. Experimental results on a series of dynamic problems generated from several stationary benchmark problems show that the proposed algorithm is a good optimizer for DOPs. This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant 70431003 and Grant 70671020, by the National Innovation Research Community Science Foundation of China under Grant 60521003, by the National Support Plan of China under Grant 2006BAH02A09, by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant EP/E060722/1, and by the Hong Kong Polytechnic University Research Grants under Grant G-YH60.
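
    A probability-based PDM operator of the kind sketched above might look as follows in Python (illustrative only; the way allele-frequency statistics are turned into mapping probabilities, and the Lamarckian combination of the two operators, are simplified here):

        import random

        def allele_frequencies(population):
            # Fraction of '1' alleles at each gene locus over the population.
            n = len(population)
            return [sum(ind[locus] for ind in population) / n
                    for locus in range(len(population[0]))]

        def probabilistic_pdm(chromosome, freqs):
            # Map each allele to its complement with a probability derived from
            # how common its current value is at that locus: alleles that dominate
            # the population are more likely to be flipped.
            mapped = []
            for gene, f in zip(chromosome, freqs):
                p_flip = f if gene == 1 else 1.0 - f
                mapped.append(1 - gene if random.random() < p_flip else gene)
            return mapped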

    A multi-agent based evolutionary algorithm in non-stationary environments

    This article is posted here with permission of IEEE - Copyright @ 2008 IEEE. In this paper, a multi-agent based evolutionary algorithm (MAEA) is introduced to solve dynamic optimization problems. The agents simulate features of living organisms and co-evolve to find the optimum. All agents live in a lattice-like environment, where each agent is fixed at a lattice point. To increase their energy, agents can compete with their neighbors and can also acquire knowledge based on statistical information. To maintain the diversity of the population, random immigrants and adaptive primal-dual mapping schemes are used. Simulation experiments on a set of dynamic benchmark problems show that MAEA obtains better performance in non-stationary environments than several peer genetic algorithms. This work was supported by the Key Program of the National Natural Science Foundation of China under Grant No. 70431003, the Science Fund for Creative Research Groups of the National Natural Science Foundation of China under Grant No. 60521003, the National Science and Technology Support Plan of China under Grant No. 2006BAH02A09, and the Engineering and Physical Sciences Research Council of the United Kingdom under Grant No. EP/E060722/1.
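
    The lattice-based competition can be sketched roughly as follows (a toy Python illustration, not the authors' MAEA; the torus neighbourhood and the energy function are placeholder assumptions): each agent sits on a fixed grid cell, compares its energy with its four neighbours, and is replaced by a copy of a better neighbour if it loses.

        def compete_with_neighbors(grid, energy):
            # grid[r][c] is an agent (e.g. a list of bits); energy(agent) is its fitness.
            rows, cols = len(grid), len(grid[0])
            new_grid = [row[:] for row in grid]
            for r in range(rows):
                for c in range(cols):
                    # von Neumann neighbourhood with wrap-around (torus lattice)
                    neighbors = [grid[(r - 1) % rows][c], grid[(r + 1) % rows][c],
                                 grid[r][(c - 1) % cols], grid[r][(c + 1) % cols]]
                    best = max(neighbors, key=energy)
                    if energy(best) > energy(grid[r][c]):
                        new_grid[r][c] = best[:]  # loser is replaced by a copy of the winner
            return new_grid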

    Experimental study on population-based incremental learning algorithms for dynamic optimization problems

    Copyright @ Springer-Verlag 2005. Evolutionary algorithms have been widely used for stationary optimization problems. However, the environments of real-world problems are often dynamic, which seriously challenges traditional evolutionary algorithms. In this paper, the application of population-based incremental learning (PBIL) algorithms, a class of evolutionary algorithms, to dynamic problems is investigated. Inspired by the complementarity mechanism in nature, a Dual PBIL is proposed, which operates on two probability vectors that are dual to each other with respect to the central point in the genotype space. A diversity maintaining technique that incorporates the central probability vector into PBIL is also proposed to improve PBIL's adaptability in dynamic environments. In this paper, a new dynamic problem generator that can create required dynamics from any binary-encoded stationary problem is also formalized. Using this generator, a series of dynamic problems were systematically constructed from several benchmark stationary problems, and an experimental study was carried out to compare the performance of several PBIL algorithms and two variants of the standard genetic algorithm. Based on the experimental results, we analysed the weaknesses and strengths of the studied PBIL algorithms and identified several potential improvements to PBIL for dynamic optimization problems. This work was supported by UK EPSRC under Grant GR/S79718/01.
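
    For binary encodings the dual probability vector has a compact form: if the primal PBIL vector is p, its dual with respect to the central point is 1 - p elementwise, and the central probability vector itself is 0.5 in every position. A minimal Python sketch of this idea (the learning step is standard PBIL, not the paper's exact update rule):

        import random

        def sample(prob_vector):
            # Draw one binary solution from a PBIL probability vector.
            return [1 if random.random() < p else 0 for p in prob_vector]

        def pbil_step(prob_vector, fitness, pop_size=50, rate=0.05):
            # Standard PBIL learning step: shift the vector toward the best sample.
            best = max((sample(prob_vector) for _ in range(pop_size)), key=fitness)
            return [(1 - rate) * p + rate * b for p, b in zip(prob_vector, best)]

        length = 20
        primal = pbil_step([0.5] * length, fitness=sum)  # toy fitness: count of ones
        dual = [1.0 - p for p in primal]                 # dual vector w.r.t. the central point
        central = [0.5] * length                         # central vector used for diversity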