218 research outputs found

    Uncertainty And Evolutionary Optimization: A Novel Approach

    Evolutionary algorithms (EAs) have been widely accepted as efficient solvers for complex real-world optimization problems, including engineering optimization. However, real-world optimization problems often involve uncertain environments, including noisy and/or dynamic ones, which pose major challenges to EA-based optimization. The presence of noise interferes with the evaluation and selection processes of an EA and thus adversely affects its performance. Moreover, because noise makes exact evaluation of the fitness function difficult, the fitness may need to be estimated rather than evaluated directly. Several existing approaches attempt to address this problem, such as the introduction of diversity (hypermutation, random immigrants, special operators) or the incorporation of memory of the past (diploidy, case-based memory), but they fail to adequately address it. In this paper we propose a Distributed Population Switching Evolutionary Algorithm (DPSEA) that addresses the optimization of functions with noisy fitness using a distributed population switching architecture to simulate a distributed, self-adaptive memory of the solution space. Local regression is used in the pseudo-populations to estimate the fitness. Successful applications to benchmark test problems demonstrate the proposed method's superior performance in terms of both robustness and accuracy. Comment: In Proceedings of the 9th IEEE Conference on Industrial Electronics and Applications (ICIEA 2014), IEEE Press, pp. 988-983, 201
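
    The abstract describes DPSEA only at a high level. As a loose illustration of the fitness-estimation idea (not the authors' implementation), the sketch below smooths a noisy fitness value at a query point by locally weighted linear regression over an archive of previously evaluated solutions; the function name, the Gaussian kernel and the bandwidth are assumptions made purely for illustration.

```python
# Hedged sketch (assumption, not the DPSEA code): estimate a noisy fitness
# value at a query point by locally weighted linear regression over an
# archive of previously evaluated (solution, noisy fitness) pairs.
import numpy as np

def local_regression_fitness(query, archive_x, archive_f, bandwidth=0.5):
    """Return a smoothed fitness estimate at `query`."""
    X = np.asarray(archive_x, dtype=float)
    f = np.asarray(archive_f, dtype=float)
    q = np.asarray(query, dtype=float)

    # Gaussian kernel weights based on distance to the query point.
    d2 = np.sum((X - q) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))

    # Weighted least-squares fit of a local linear model f ~ a + b.x
    A = np.hstack([np.ones((X.shape[0], 1)), X])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], f * sw, rcond=None)
    return float(np.concatenate(([1.0], q)) @ coef)

# Usage: smooth a noisy 2-D sphere function from 200 random samples.
rng = np.random.default_rng(0)
xs = rng.uniform(-2, 2, size=(200, 2))
fs = np.sum(xs ** 2, axis=1) + rng.normal(0, 0.3, size=200)   # noisy evaluations
print(local_regression_fitness([0.5, -0.5], xs, fs))          # smoothed estimate
```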

    Compound particle swarm optimization in dynamic environments

    Adaptation to dynamic optimization problems is currently receiving growing interest as one of the most important applications of evolutionary algorithms. In this paper, compound particle swarm optimization (CPSO) is proposed as a new variant of particle swarm optimization to enhance its performance in dynamic environments. Within CPSO, compound particles are constructed as a novel type of particle in the search space and their motions are integrated into the swarm. A special reflection scheme is introduced in order to explore the search space more comprehensively. Furthermore, information-preserving and anti-convergence strategies are also developed to improve the performance of CPSO in a new environment. An experimental study shows the efficiency of CPSO in dynamic environments. This work was supported by the Key Program of the National Natural Science Foundation (NNSF) of China under Grant No. 70431003 and Grant No. 70671020, the Science Fund for Creative Research Group of NNSF of China under Grant No. 60521003, the National Science and Technology Support Plan of China under Grant No. 2006BAH02A09, and the Engineering and Physical Sciences Research Council (EPSRC) of UK under Grant No. EP/E060722/1
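
    The paper's reflection scheme is not spelled out in this abstract; the snippet below is a hypothetical sketch of one way a reflection move could work within a small compound of particles, mirroring the worst member through the best one. The compound size, the reflection formula and the function name are assumptions, not the CPSO definition from the paper.

```python
# Hedged sketch (assumption, not the paper's CPSO): one possible "reflection"
# move for a small compound of particles, where the worst member is reflected
# through the best member to probe the opposite side of the search region.
import numpy as np

def reflect_worst(compound, fitness):
    """Reflect the worst particle of a compound through the best one.

    compound : (k, d) array of particle positions forming one compound
    fitness  : callable, lower is better (minimisation assumed)
    """
    vals = np.array([fitness(p) for p in compound])
    best, worst = compound[vals.argmin()], compound[vals.argmax()]
    candidate = best + (best - worst)            # mirror the worst across the best
    # Keep the reflection only if it improves on the worst member.
    if fitness(candidate) < vals.max():
        compound = compound.copy()
        compound[vals.argmax()] = candidate
    return compound

# Usage on a 2-D sphere function with a compound of three particles.
sphere = lambda x: float(np.sum(x ** 2))
rng = np.random.default_rng(1)
print(reflect_worst(rng.uniform(-5, 5, size=(3, 2)), sphere))
```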

    Learning in abstract memory schemes for dynamic optimization

    We investigate an abstraction-based memory scheme for evolutionary algorithms in dynamic environments. In this scheme, the abstraction of good solutions (i.e., their approximate location in the search space) is stored in the memory instead of the good solutions themselves and is employed to improve future problem solving. In particular, this paper shows how learning takes place in the abstract memory scheme and how problem-solving performance changes over time for different kinds of dynamics in the fitness landscape. The experiments show that the abstract memory enables learning processes and efficiently improves the performance of evolutionary algorithms in dynamic environments.
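
    One way to picture the abstraction described here (storing approximate locations of good solutions rather than the solutions themselves) is as a coarse grid over the search space whose cells count how often good solutions fell into them. The grid representation, resolution and class name below are illustrative assumptions, not the published scheme.

```python
# Hedged sketch (illustrative assumption): an "abstract memory" that records
# the approximate location of good solutions as counts over a coarse grid
# instead of storing the solutions themselves.
import numpy as np

class AbstractMemory:
    def __init__(self, lower, upper, cells_per_dim=10):
        self.lower = np.asarray(lower, dtype=float)
        self.upper = np.asarray(upper, dtype=float)
        self.counts = np.zeros((cells_per_dim,) * len(self.lower))

    def _cell(self, x):
        # Map a solution to the index of the grid cell containing it.
        frac = (np.asarray(x, dtype=float) - self.lower) / (self.upper - self.lower)
        idx = np.clip((frac * self.counts.shape[0]).astype(int),
                      0, self.counts.shape[0] - 1)
        return tuple(idx)

    def store(self, good_solution):
        # Remember only the coarse location of the good solution.
        self.counts[self._cell(good_solution)] += 1

# Usage: abstract two good solutions of a 2-D problem on [-5, 5]^2.
mem = AbstractMemory(lower=[-5, -5], upper=[5, 5])
mem.store([1.2, -3.4])
mem.store([1.3, -3.1])
print(int(mem.counts.sum()))   # 2 abstracted entries
```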

    On the performance of a hybrid genetic algorithm in dynamic environments

    The ability to track the optimum in dynamic environments is important in many practical applications. In this paper, the capability of a hybrid genetic algorithm (HGA) to track the optimum in dynamic environments is investigated for different functional dimensions, update frequencies, and displacement strengths in different types of dynamic environments. Experimental results are reported using the HGA and several other evolutionary algorithms from the literature. The results show that the HGA has a better capability to track the dynamic optimum than the other algorithms considered. Comment: This paper was submitted to Applied Mathematics and Computation on May 22, 2012; a revised version was submitted to Applied Mathematics and Computation on March 1, 201
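
    The experiments vary update frequencies and displacement strengths. As a minimal sketch of what such a benchmark could look like (an assumption, not the test environments used in the paper), the moving sphere below shifts its optimum by a fixed displacement strength after a fixed number of evaluations.

```python
# Hedged sketch (assumption): a dynamic test function whose optimum is shifted
# by `displacement_strength` in a random direction after every
# `update_frequency` fitness evaluations.
import numpy as np

class MovingSphere:
    def __init__(self, dim=2, update_frequency=1000,
                 displacement_strength=0.5, seed=0):
        self.rng = np.random.default_rng(seed)
        self.optimum = self.rng.uniform(-5, 5, size=dim)
        self.update_frequency = update_frequency
        self.strength = displacement_strength
        self.evaluations = 0

    def __call__(self, x):
        self.evaluations += 1
        if self.evaluations % self.update_frequency == 0:
            step = self.rng.normal(size=self.optimum.shape)
            self.optimum += self.strength * step / np.linalg.norm(step)
        return float(np.sum((np.asarray(x) - self.optimum) ** 2))

# Usage: the same point changes value after each environment update.
f = MovingSphere(update_frequency=3, displacement_strength=1.0)
print([round(f([0.0, 0.0]), 3) for _ in range(6)])
```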

    Memory based on abstraction for dynamic fitness functions

    This paper proposes a memory scheme based on abstraction for evolutionary algorithms to address dynamic optimization problems. In this memory scheme, the memory does not store good solutions themselves but their abstraction, i.e., their approximate location in the search space. When the environment changes, the stored abstraction information is extracted to generate new individuals for the population. Experiments are carried out to validate the abstraction-based memory scheme. The results show the efficiency of the abstraction-based memory scheme for evolutionary algorithms in dynamic environments. This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) of UK under Grant No. EP/E060722/1
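
    Complementing the storage sketch shown earlier in this listing, the hypothetical snippet below illustrates the retrieval side: when the environment changes, new individuals are sampled from grid cells in proportion to how often good solutions were previously recorded there. The sampling rule and the function name are assumptions, not the paper's procedure.

```python
# Hedged sketch (assumption, continuing the grid-based memory above): after an
# environment change, sample new individuals from grid cells in proportion to
# the counts of good solutions previously recorded there.
import numpy as np

def generate_from_memory(counts, lower, upper, n_individuals, rng=None):
    """Sample new individuals from a count grid over the box [lower, upper]."""
    rng = rng or np.random.default_rng()
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    cells = counts.shape[0]
    flat = counts.flatten()
    probs = flat / flat.sum()

    individuals = []
    for _ in range(n_individuals):
        cell = np.unravel_index(rng.choice(flat.size, p=probs), counts.shape)
        # Draw a point uniformly inside the chosen cell.
        low = lower + np.array(cell) * (upper - lower) / cells
        high = low + (upper - lower) / cells
        individuals.append(rng.uniform(low, high))
    return np.array(individuals)

# Usage with a 10x10 count grid that favours one region of [-5, 5]^2.
counts = np.zeros((10, 10))
counts[6, 1] = 5.0
print(generate_from_memory(counts, [-5, -5], [5, 5], n_individuals=3))
```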

    Ant colony optimization with immigrants schemes in dynamic environments

    In recent years, there has been a growing interest in addressing dynamic optimization problems (DOPs) using evolutionary algorithms (EAs). Several approaches have been developed for EAs to increase the diversity of the population and enhance their performance on DOPs. Among these approaches, immigrants schemes have been found beneficial for EAs on DOPs. In this paper, random, elitism-based, and hybrid immigrants schemes are applied to ant colony optimization (ACO) for the dynamic travelling salesman problem (DTSP). The experimental results show that random immigrants are beneficial for ACO in fast-changing environments, whereas elitism-based immigrants are beneficial for ACO in slowly changing environments. The ACO algorithm with the hybrid immigrants scheme combines the merits of the random and elitism-based immigrants schemes. Moreover, the results show that the proposed algorithms outperform the compared approaches in almost all dynamic test cases and that immigrants schemes efficiently improve the performance of ACO algorithms on the DTSP. This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) of UK under Grant EP/E060722/1
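
    As a hedged sketch of the immigrants idea for a permutation-encoded tour (not the paper's ACO implementation), random immigrants below are fresh random permutations, elitism-based immigrants mutate the best tour from the previous environment, and the hybrid scheme mixes the two; the swap mutation and the mixing ratio are assumptions.

```python
# Hedged sketch (assumption, not the paper's ACO code): generating random,
# elitism-based, and hybrid immigrant tours for a dynamic TSP.
import random

def random_immigrant(n_cities):
    tour = list(range(n_cities))
    random.shuffle(tour)
    return tour

def elitism_based_immigrant(elite_tour, mutation_rate=0.1):
    # Mutated copy of the best tour from the previous environment.
    tour = list(elite_tour)
    for i in range(len(tour)):
        if random.random() < mutation_rate:
            j = random.randrange(len(tour))
            tour[i], tour[j] = tour[j], tour[i]   # swap mutation
    return tour

def hybrid_immigrants(elite_tour, n_cities, n_immigrants, ratio=0.5):
    # Mix of both schemes, loosely mirroring the hybrid immigrants idea.
    n_random = int(n_immigrants * ratio)
    return ([random_immigrant(n_cities) for _ in range(n_random)] +
            [elitism_based_immigrant(elite_tour)
             for _ in range(n_immigrants - n_random)])

# Usage: four immigrants for a 10-city instance, seeded with a known elite tour.
random.seed(0)
for tour in hybrid_immigrants(list(range(10)), n_cities=10, n_immigrants=4):
    print(tour)
```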

    Memory-based immigrants for ant colony optimization in changing environments

    Ant colony optimization (ACO) algorithms have been shown to adapt to dynamic optimization problems (DOPs) when they are enhanced to maintain diversity. DOPs are important due to their similarities to many real-world applications. Several approaches have been integrated with ACO to improve its performance on DOPs, where memory-based approaches and immigrants schemes have shown good results on different variations of the dynamic travelling salesman problem (DTSP). In this paper, we consider a novel variation of the DTSP in which traffic jams occur in a cyclic pattern, which means that old environments will reappear in the future. A hybrid method that combines memory and immigrants schemes is integrated into ACO to address this kind of DTSP. The memory-based approach is useful for moving the population directly to promising areas of the new environment by using solutions stored in the memory, while the immigrants scheme is useful for maintaining diversity within the population. The experimental results on different test cases of the DTSP show that the memory-based immigrants scheme enhances the performance of ACO in cyclic dynamic environments. This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) of UK under Grant EP/E060722/2
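
    As a rough, assumed illustration of the memory-based immigrants idea for cyclic environments (not the paper's algorithm), the sketch below stores the best tours seen so far, re-evaluates them when the environment changes, and mutates the tour that scores best under the new distance matrix to seed immigrants. The memory capacity, mutation rate and class name are illustrative choices.

```python
# Hedged sketch (assumption): a small memory of best tours from previously seen
# environments. When a cyclic traffic pattern brings an old environment back,
# the stored tour that scores best under the new distances seeds immigrants.
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

class TourMemory:
    def __init__(self, capacity=5):
        self.capacity = capacity
        self.tours = []

    def store(self, tour):
        self.tours.append(list(tour))
        self.tours = self.tours[-self.capacity:]      # keep the most recent tours

    def immigrants(self, dist, n_immigrants, mutation_rate=0.1):
        # Re-evaluate stored tours under the new environment, mutate the best.
        best = min(self.tours, key=lambda t: tour_length(t, dist))
        out = []
        for _ in range(n_immigrants):
            tour = list(best)
            for i in range(len(tour)):
                if random.random() < mutation_rate:
                    j = random.randrange(len(tour))
                    tour[i], tour[j] = tour[j], tour[i]
            out.append(tour)
        return out

# Usage with a toy 4-city symmetric distance matrix.
random.seed(0)
dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
mem = TourMemory()
mem.store([0, 1, 3, 2])
print(mem.immigrants(dist, n_immigrants=2))
```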

    Learning behavior in abstract memory schemes for dynamic optimization problems

    Integrating memory into evolutionary algorithms is one major approach to enhancing their performance in dynamic environments. An abstract memory scheme has recently been developed for evolutionary algorithms in dynamic environments, where the abstraction of good solutions is stored in the memory instead of the good solutions themselves to improve future problem solving. This paper further investigates this abstract memory with a focus on understanding the relationship between learning and memory, which is an important but poorly studied issue for evolutionary algorithms in dynamic environments. The experimental study shows that the abstract memory scheme enables learning processes and hence efficiently improves the performance of evolutionary algorithms in dynamic environments. The work by S. Yang was supported by the Engineering and Physical Sciences Research Council (EPSRC) of UK under Grant EP/E060722/1

    Metaheuristics for multiobjective optimisation: Cooperative approaches, uncertainty handling and application in logistics

    This is a summary of the author's PhD thesis, supervised by Laetitia Jourdan and El-Ghazali Talbi and defended on 8 December 2009 at the Université Lille 1. The thesis is written in French and is available from http://sites.google.com/site/arnaudliefooghe/. This work deals with the design, implementation and experimental analysis of metaheuristics for solving multiobjective optimisation problems, with a particular interest in hard, large combinatorial problems from the field of logistics. After focusing on a unified view of multiobjective metaheuristics, we propose new cooperative, adaptive and parallel approaches. The performance of these methods is evaluated on a scheduling problem and a routing problem involving two or three objective functions. We finally discuss how to adapt such metaheuristics during the search process in order to handle uncertainty that may arise from many different sources.