8 research outputs found

    A Simplified Run Time Analysis of the Univariate Marginal Distribution Algorithm on LeadingOnes

    With elementary means, we prove a stronger run time guarantee for the univariate marginal distribution algorithm (UMDA) optimizing the LeadingOnes benchmark function in the desirable regime with low genetic drift. If the population size is at least quasilinear, then, with high probability, the UMDA samples the optimum within a number of iterations that is linear in the problem size divided by the logarithm of the UMDA's selection rate. This improves over the previous guarantee, obtained by Dang and Lehre (2015) via the deep level-based population method, both in terms of the run time and by demonstrating further run time gains from small selection rates. With arguments similar to those in our upper-bound analysis, we also obtain the first lower bound for this problem: under similar assumptions, we prove that a bound matching our upper bound up to constant factors holds with high probability.
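
    The UMDA referred to here maintains a vector of marginal frequencies; each iteration samples λ offspring from the product distribution, selects the μ fittest, and sets each frequency to the empirical marginal of the selected individuals, clamped to the usual borders [1/n, 1 - 1/n]. A minimal sketch in Python (our own illustration; the parameter names and the stopping rule are not taken from the paper):

```python
import numpy as np

def leading_ones(x):
    """LeadingOnes: number of consecutive one-bits counted from the left."""
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def umda(n, lam, mu, max_iters=10_000, seed=0):
    """Minimal UMDA sketch with frequency borders [1/n, 1 - 1/n]."""
    rng = np.random.default_rng(seed)
    freqs = np.full(n, 0.5)                        # start from the uniform distribution
    for t in range(max_iters):
        pop = (rng.random((lam, n)) < freqs).astype(int)   # sample lam offspring
        fits = np.array([leading_ones(ind) for ind in pop])
        if fits.max() == n:
            return t                               # optimum sampled
        selected = pop[np.argsort(fits)[-mu:]]     # keep the mu fittest
        freqs = selected.mean(axis=0)              # empirical marginals
        freqs = np.clip(freqs, 1 / n, 1 - 1 / n)   # borders against extreme drift
    return None

# e.g. umda(n=100, lam=4000, mu=2000): a quasilinear population in the
# low-genetic-drift regime, with selection rate mu/lam = 1/2.
```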

    On Non-Elitist Evolutionary Algorithms Optimizing Fitness Functions with a Plateau

    We consider the expected runtime of non-elitist evolutionary algorithms (EAs) when they are applied to a family of fitness functions with a plateau of second-best fitness in a Hamming ball of radius r around a unique global optimum. On the one hand, using the level-based theorems, we obtain polynomial upper bounds on the expected runtime for several variants of non-elitist EAs based on unbiased mutation operators, and on bitwise mutation in particular. On the other hand, we show that the EA with fitness-proportionate selection is inefficient if bitwise mutation is used with the standard setting of the mutation probability. (14 pages; accepted for the proceedings of Mathematical Optimization Theory and Operations Research, MOTOR 2020.)
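
    As a concrete picture, one possible formalisation of such a fitness landscape (our reading, not necessarily the paper's exact definition) gives the unique optimum the best value, assigns one shared second-best value to every other point within Hamming distance r of it, and behaves like OneMax elsewhere; standard bitwise mutation flips each bit independently with probability 1/n:

```python
import random

def plateau(x, r):
    """Plateau of second-best fitness in the Hamming ball of radius r
    around the all-ones optimum (one possible formalisation, not
    necessarily the paper's exact definition)."""
    n, ones = len(x), sum(x)
    if ones == n:
        return n + 1        # unique global optimum
    if ones >= n - r:
        return n - r        # constant second-best value on the plateau
    return ones             # OneMax-like slope leading towards the ball

def bitwise_mutation(x, rng=random):
    """Standard bitwise mutation: flip each bit independently with prob. 1/n."""
    p = 1 / len(x)
    return [bit ^ (rng.random() < p) for bit in x]
```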

    On the choice of the parameter control mechanism in the (1+(λ, λ)) genetic algorithm

    The self-adjusting (1 + (λ, λ)) GA is the best known genetic algorithm for problems with a good fitness-distance correlation, such as OneMax. It uses a parameter control mechanism for the parameter λ that governs the mutation strength and the number of offspring. However, on multimodal problems, the parameter control mechanism tends to increase λ uncontrollably. We study this problem and possible solutions to it using rigorous runtime analysis for the standard Jumpk benchmark problem class. The original algorithm behaves like a (1 + n) EA whenever the maximum value λ = n is reached. This is ineffective for problems where large jumps are required. Capping λ at smaller values is beneficial for such problems. Finally, resetting λ to 1 allows the parameter to cycle through the parameter space. We show that this strategy is effective for all Jumpk problems: the (1 + (λ, λ)) GA performs as well as the (1 + 1) EA with the optimal mutation rate and as well as fast evolutionary algorithms, apart from a small polynomial overhead. Along the way, we present new general methods for bounding the runtime of the (1 + (λ, λ)) GA that allow us to translate existing runtime bounds from the (1 + 1) EA to the self-adjusting (1 + (λ, λ)) GA. Our methods are easy to use and give upper bounds for novel classes of functions.
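
    The control mechanism in question follows a one-fifth success rule: after a successful generation λ shrinks, after an unsuccessful one it grows. A hedged sketch of the three variants discussed above (unbounded, capped, reset), with the update factor F = 1.5 as an assumed typical choice rather than the paper's exact setting:

```python
def update_lambda(lam, success, n, mode="reset", F=1.5, cap=None):
    """One-fifth success rule for the self-adjusting (1 + (lambda, lambda)) GA.

    On success lambda is divided by F; on failure it is multiplied by
    F**(1/4).  'capped' clamps lambda at `cap`; 'reset' restarts from 1
    once lambda exceeds n.  Names and defaults are our own illustration.
    """
    lam = lam / F if success else lam * F ** 0.25
    lam = max(lam, 1.0)
    if mode == "capped" and cap is not None:
        lam = min(lam, cap)
    elif mode == "reset" and lam > n:
        lam = 1.0            # cycle back through the parameter space
    return lam
```

    With F = 1.5, one success must offset roughly four failures, which is what keeps λ stable near a target success rate of about one fifth on unimodal slopes; the 'reset' branch is what prevents λ from sitting at its maximum on multimodal problems.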

    Self-adaptation Can Help Evolutionary Algorithms Track Dynamic Optima


    The Univariate Marginal Distribution Algorithm Copes Well With Deception and Epistasis

    In their recent work, Lehre and Nguyen (FOGA 2019) show that the univariate marginal distribution algorithm (UMDA) needs time exponential in the parent population size to optimize the DeceptiveLeadingBlocks (DLB) problem. They conclude from this result that univariate EDAs have difficulties with deception and epistasis. In this work, we show that this negative finding is caused by an unfortunate choice of the parameters of the UMDA. When the population sizes are chosen large enough to prevent genetic drift, the UMDA optimizes the DLB problem with high probability within at most λ(n/2 + 2e ln n) fitness evaluations. Since an offspring population size λ of order n log n can prevent genetic drift, the UMDA can solve the DLB problem with O(n² log n) fitness evaluations. In contrast, for classic evolutionary algorithms no better run time guarantee than O(n³) is known (which we prove to be tight for the (1+1) EA), so our result rather suggests that the UMDA can cope well with deception and epistasis. From a broader perspective, our result shows that the UMDA can cope better with local optima than evolutionary algorithms; such a result was previously known only for the compact genetic algorithm. Together with the lower bound of Lehre and Nguyen, our result for the first time rigorously proves that running EDAs in the regime with genetic drift can lead to drastic performance losses.
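
    As a quick sanity check on the final bound (our own plug-in step, taking λ = c n ln n for a sufficiently large constant c as the assumed concrete choice):

```latex
\lambda\Bigl(\frac{n}{2} + 2e\ln n\Bigr)
  = c\,n\ln n\,\Bigl(\frac{n}{2} + 2e\ln n\Bigr)
  = \frac{c}{2}\,n^2\ln n + 2ec\,n\ln^2 n
  = O(n^2 \log n).
```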

    On the limitations of the univariate marginal distribution algorithm to deception and where bivariate EDAs might help

    We introduce a new benchmark problem called Deceptive Leading Blocks (DLB) to rigorously study the runtime of the Univariate Marginal Distribution Algorithm (UMDA) in the presence of epistasis and deception. We show that simple Evolutionary Algorithms (EAs) outperform the UMDA unless the selective pressure μ/λ is extremely high, where μ and λ are the parent and offspring population sizes, respectively. More precisely, we show that the UMDA with a parent population size of μ = Ω(log n) has an expected runtime of e^{Ω(μ)} on the DLB problem assuming any selective pressure μ/λ ≥ 14/1000, as opposed to the expected runtime of O(nλ log λ + n³) for the non-elitist (μ,λ) EA with μ/λ ≤ 1/e. These results illustrate inherent limitations of univariate EDAs against deception and epistasis, which are common characteristics of real-world problems. In contrast, empirical evidence reveals the efficiency of the bivariate MIMIC algorithm on the DLB problem. Our results suggest that one should consider EDAs with more complex probabilistic models when optimising problems with some degree of epistasis and deception. (To appear in the 15th ACM/SIGEVO Workshop on Foundations of Genetic Algorithms (FOGA XV), Potsdam, Germany.)
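
    For reference, the block structure of DLB as we understand it from the literature (the exact fitness values below are our reading and may differ in minor details): bits are paired into n/2 blocks, each leading 11-block contributes 2, and the first non-11 block is deceptive in that 00 is rewarded over 01 and 10, luring hill-climbers two bit-flips away from the path to the optimum.

```python
def dlb(x):
    """DeceptiveLeadingBlocks sketch (our reading of the definition).

    Blocks of two bits; m leading 11-blocks contribute 2*m; the first
    non-11 block prefers 00 (deceptive) over 01/10; all-ones is optimal."""
    n = len(x)
    assert n % 2 == 0, "DLB is defined for even n"
    m = 0
    while 2 * m < n and x[2 * m] == 1 and x[2 * m + 1] == 1:
        m += 1
    if 2 * m == n:
        return n                                # optimum: all blocks are 11
    critical = x[2 * m] + x[2 * m + 1]          # 0 for 00, 1 for 01/10
    return 2 * m + (1 if critical == 0 else 0)  # 00 scores above 01/10
```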

    Runtime Analysis of Success-Based Parameter Control Mechanisms for Evolutionary Algorithms on Multimodal Problems

    Evolutionary algorithms are simple general-purpose optimisers often used to solve complex engineering and design problems. They mimic the process of natural evolution: they use a population of possible solutions to a problem that evolves by mutating and recombining solutions, identifying increasingly better solutions over time. Evolutionary algorithms have been applied to a broad range of problems in various disciplines with remarkable success. However, the reasons behind their success are often elusive: their performance often depends crucially, and unpredictably, on their parameter settings. It is, furthermore, well known that there are no globally good parameters; that is, the correct parameters for one problem may differ substantially from the parameters needed for another, making it hard to transfer previously successful parameter settings to new problems. Therefore, understanding how to properly select the parameters is an important but challenging task, commonly known as the parameter selection problem.

    A promising solution to this problem is the use of automated dynamic parameter selection schemes (parameter control) that allow evolutionary algorithms to identify and continuously track optimal parameters throughout the course of evolution without human intervention. In recent years the study of parameter control mechanisms in evolutionary algorithms has emerged as a very fruitful research area. However, most existing runtime analyses focus on simple problems with benign characteristics, for which fixed parameter settings already run efficiently and for which only moderate performance gains were shown.

    The aim of this thesis is to understand how parameter control mechanisms can be used on more complex and challenging problems with many local optima (multimodal problems) to speed up optimisation. We use advanced methods from the analysis of algorithms and probability theory to evaluate the performance of evolutionary algorithms, estimating the expected time until an algorithm finds satisfactory solutions for illustrative and relevant optimisation problems, as a vital stepping stone towards designing more efficient evolutionary algorithms. We first analyse current parameter control mechanisms on multimodal problems to understand their strengths and weaknesses. Subsequently, we use this knowledge to design parameter control mechanisms that mitigate the weaknesses of current mechanisms while maintaining their strengths. Finally, we show with theoretical and empirical analyses that these enhanced parameter control mechanisms are able to outperform the best fixed parameter settings on multimodal optimisation problems.

    Multiplicative Up-Drift

    Drift analysis aims at translating the expected progress of an evolutionary algorithm (or, more generally, a random process) into a probabilistic guarantee on its run time (hitting time). So far, drift arguments have been successfully employed in the rigorous analysis of evolutionary algorithms, however, only for the situation that the progress is constant or becomes weaker when approaching the target. Motivated by questions like how fast fit individuals take over a population, we analyze random processes exhibiting a (1+δ)-multiplicative growth in expectation. We prove a drift theorem translating this expected progress into a hitting time. This drift theorem gives a simple and insightful proof of the level-based theorem first proposed by Lehre (2011). Our version of this theorem has, for the first time, the best-possible near-linear dependence on 1/δ (the previous results had an at least near-quadratic dependence), and it only requires a population size near-linear in δ (this was super-quadratic in previous results). These improvements immediately lead to stronger run time guarantees for a number of applications. We also discuss the case of large δ and show stronger results for this setting.
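
    To make the (1+δ)-multiplicative growth regime concrete, here is a toy simulation (entirely our own construction, not the paper's model): the state X_t could stand for the number of fit individuals, each step draws X_{t+1} ~ Bin(2·X_t, (1+δ)/2) so that E[X_{t+1}] = (1+δ)·X_t, and we measure the time until a target size is reached; the point of such a drift theorem is that this hitting time scales near-linearly in 1/δ, matching the deterministic-growth prediction log(target/start)/log(1+δ) ≈ log(target/start)/δ.

```python
import math
import numpy as np

def up_drift_hitting_time(delta, start, target, seed=0):
    """Toy (1+delta)-multiplicative up-drift process (our illustration):
    X_{t+1} ~ Bin(2*X_t, (1+delta)/2), so E[X_{t+1}] = (1+delta)*X_t."""
    assert 0 < delta <= 1
    rng = np.random.default_rng(seed)
    x, t = start, 0
    while x < target:
        if x == 0:                 # extinction: restart, as a crude guard
            x = start
        x = rng.binomial(2 * x, (1 + delta) / 2)
        t += 1
    return t

delta, start, target = 0.1, 20, 10_000
print(up_drift_hitting_time(delta, start, target))
print(math.log(target / start) / math.log(1 + delta))  # deterministic prediction
```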