
    Theoretical Study of Optimizing Rugged Landscapes with the cGA

    Estimation of distribution algorithms (EDAs) provide a distribution-based approach to optimization which adapts its probability distribution during the run of the algorithm. We contribute to the theoretical understanding of EDAs and point out that their distribution-based approach makes them more suitable for dealing with rugged fitness landscapes than classical local search algorithms. Concretely, we make the OneMax function rugged by adding noise to each fitness value. The cGA can nevertheless find solutions with n(1 - \epsilon) many 1s, even when the noise has high variance. In contrast, RLS and the (1+1) EA, with high probability, only find solutions with n(1/2 + o(1)) many 1s, even for noise with small variance.
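
    As a concrete illustration of the setting above, the following minimal Python sketch runs a cGA on OneMax with additive Gaussian noise. The population-size parameter K, the noise level sigma, and the iteration budget are illustrative assumptions, not values from the paper.

    import random

    def noisy_onemax(x, sigma):
        # OneMax (number of 1-bits) with fresh additive Gaussian noise per evaluation
        return sum(x) + random.gauss(0.0, sigma)

    def cga(n=100, K=1000, sigma=1.0, max_iters=200_000):
        # K is an illustrative population-size parameter: frequencies move in steps of 1/K
        p = [0.5] * n  # one frequency per bit position
        for _ in range(max_iters):
            x = [int(random.random() < pi) for pi in p]
            y = [int(random.random() < pi) for pi in p]
            if noisy_onemax(x, sigma) < noisy_onemax(y, sigma):
                x, y = y, x  # x is now the (noisy) winner
            for i in range(n):
                if x[i] != y[i]:
                    step = 1.0 / K if x[i] == 1 else -1.0 / K
                    # keep frequencies inside the usual borders [1/n, 1 - 1/n]
                    p[i] = min(1 - 1.0 / n, max(1.0 / n, p[i] + step))
        return [int(pi >= 0.5) for pi in p]  # round frequencies to a final solution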

    Analysis of Noisy Evolutionary Optimization When Sampling Fails

    In noisy evolutionary optimization, sampling is a common strategy to deal with noise. Under the sampling strategy, the fitness of a solution is evaluated multiple times (the number of evaluations is called the \emph{sample size}) independently, and its true fitness is then approximated by the average of these evaluations. Previous studies on sampling are mainly empirical. In this paper, we first investigate the effect of the sample size from a theoretical perspective. By analyzing the (1+1)-EA on the noisy LeadingOnes problem, we show that as the sample size increases, the running time can drop from exponential to polynomial, but then return to exponential. This suggests that choosing a proper sample size is crucial in practice. We then investigate which strategies can work when sampling with any fixed sample size fails. By two illustrative examples, we prove that using parent or offspring populations can be better. Finally, we construct an artificial noisy example to show that, when neither sampling nor populations are effective, adaptive sampling (i.e., sampling with an adaptive sample size) can work. This provides, for the first time, theoretical support for the use of adaptive sampling.
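
    The sampling strategy is easy to make concrete. The Python sketch below runs a (1+1)-EA on LeadingOnes where each evaluation is perturbed by additive Gaussian noise (an illustrative stand-in for the paper's noise model), and the fitness of a solution is the average of m independent noisy evaluations; all parameter values are assumptions for illustration.

    import random

    def leading_ones(x):
        # length of the longest prefix of 1-bits
        count = 0
        for bit in x:
            if bit == 0:
                break
            count += 1
        return count

    def sampled_fitness(x, m, sigma):
        # sampling strategy: average m independent noisy evaluations
        return sum(leading_ones(x) + random.gauss(0.0, sigma) for _ in range(m)) / m

    def one_plus_one_ea(n=50, m=10, sigma=1.0, budget=100_000):
        x = [random.randint(0, 1) for _ in range(n)]
        for _ in range(budget):
            y = [b ^ (random.random() < 1.0 / n) for b in x]  # standard bit mutation
            # re-evaluate parent and offspring each round so stale noisy values do not persist
            if sampled_fitness(y, m, sigma) >= sampled_fitness(x, m, sigma):
                x = y
            if leading_ones(x) == n:
                break  # true optimum reached
        return x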

    The benefits and limitations of voting mechanisms in evolutionary optimisation


    Running Time Analysis of the (1+1)-EA for Robust Linear Optimization

    Evolutionary algorithms (EAs) have found many successful real-world applications, where the optimization problems are often subject to a wide range of uncertainties. To understand the practical behaviors of EAs theoretically, a series of efforts has been devoted to analyzing the running time of EAs for optimization under uncertainty. Existing studies mainly focus on noisy and dynamic optimization, while another common type of uncertain optimization, robust optimization, has rarely been touched. In this paper, we analyze the expected running time of the (1+1)-EA solving robust linear optimization problems (i.e., linear problems under robust scenarios) with a cardinality constraint k. Two common robust scenarios, deletion-robust and worst-case, are considered. In particular, we derive tight ranges of the robustness parameter d or budget k that allow the (1+1)-EA to find an optimal solution in polynomial running time, which disclose the potential of EAs for robust optimization.
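
    As an illustration of one of the two scenarios, the sketch below runs a (1+1)-EA on a deletion-robust linear problem: the objective is the total weight of the chosen items after an adversary deletes the d heaviest of them, and offspring violating the cardinality constraint are simply rejected. Both the fitness formulation and the constraint handling are plausible readings of the setup, not details taken from the paper.

    import random

    def robust_value(x, w, d):
        # deletion-robust objective: an adversary deletes the d largest-weight chosen items
        chosen = sorted((w[i] for i, bit in enumerate(x) if bit), reverse=True)
        return sum(chosen[d:])

    def one_plus_one_ea(w, k, d, budget=100_000):
        n = len(w)
        x = [0] * n  # the empty set is feasible under the cardinality constraint
        fx = robust_value(x, w, d)
        for _ in range(budget):
            y = [b ^ (random.random() < 1.0 / n) for b in x]  # standard bit mutation
            if sum(y) > k:
                continue  # reject offspring with more than k items (one simple constraint handling)
            fy = robust_value(y, w, d)
            if fy >= fx:
                x, fx = y, fy
        return x

    # example: 30 items with random weights, choose at most k=10, adversary deletes d=2
    weights = [random.random() for _ in range(30)]
    solution = one_plus_one_ea(weights, k=10, d=2)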

    Analysis of Evolutionary Algorithms in Dynamic and Stochastic Environments

    Many real-world optimization problems occur in environments that change dynamically or involve stochastic components. Evolutionary algorithms and other bio-inspired algorithms have been widely applied to dynamic and stochastic problems. This survey gives an overview of major theoretical developments in the area of runtime analysis for these problems. We review recent theoretical studies of evolutionary algorithms and ant colony optimization for problems where the objective functions or the constraints change over time. Furthermore, we consider stochastic problems under various noise models and point out some directions for future research.

    Memetic algorithms outperform evolutionary algorithms in multimodal optimisation

    Memetic algorithms integrate local search into an evolutionary algorithm to combine the advantages of rapid exploitation and global optimisation. We provide a rigorous runtime analysis of memetic algorithms on the Hurdle problem, a landscape class of tunable difficulty with a “big valley structure”, a characteristic feature of many hard combinatorial optimisation problems. A parameter called the hurdle width describes the length of fitness valleys that need to be overcome. We show that the expected runtime of plain evolutionary algorithms like the (1+1) EA increases steeply with the hurdle width, yielding superpolynomial times to find the optimum, whereas a simple memetic algorithm, the (1+1) MA, only needs polynomial expected time. Surprisingly, while increasing the hurdle width makes the problem harder for evolutionary algorithms, it becomes easier for memetic algorithms. We further give the first rigorous proof that crossover can decrease the expected runtime in memetic algorithms. A (2+1) MA using mutation, crossover and local search outperforms any other combination of these operators. Our results demonstrate the power of memetic algorithms for problems with big valley structures and the benefits of hybridising multiple search operators.
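
    A minimal Python sketch of the (1+1) MA on the Hurdle function makes the hybrid structure explicit: standard bit mutation followed by a first-improvement hill climber run to a local optimum. The Hurdle definition below (on z, the number of 0-bits, with hurdle width w) follows the standard formulation; the parameter values and the first-improvement pivot rule are illustrative assumptions.

    import math
    import random

    def hurdle(x, w):
        # Hurdle function (to be maximized): z = number of 0-bits, w = hurdle width;
        # local optima sit at every multiple of w, the global optimum at z = 0
        z = x.count(0)
        return -math.ceil(z / w) - (z % w) / w

    def local_search(x, w):
        # first-improvement hill climber on single-bit flips, run to a local optimum
        improved = True
        while improved:
            improved = False
            fx = hurdle(x, w)
            for i in range(len(x)):
                x[i] ^= 1
                if hurdle(x, w) > fx:
                    improved = True
                    break  # keep the improving flip, rescan from the new point
                x[i] ^= 1  # undo a non-improving flip
        return x

    def one_plus_one_ma(n=40, w=4, budget=10_000):
        x = local_search([random.randint(0, 1) for _ in range(n)], w)
        for _ in range(budget):
            y = [b ^ (random.random() < 1.0 / n) for b in x]  # mutate ...
            y = local_search(y, w)                            # ... then apply local search
            if hurdle(y, w) >= hurdle(x, w):
                x = y
            if x.count(0) == 0:
                break  # all-ones optimum found
        return x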