
    Self-adaptation of Mutation Rates in Non-elitist Populations

    The runtime of evolutionary algorithms (EAs) depends critically on their parameter settings, which are often problem-specific. Automated schemes for parameter tuning have been developed to alleviate the high costs of manual parameter tuning. Experimental results indicate that self-adaptation, where parameter settings are encoded in the genomes of individuals, can be effective in continuous optimisation. However, results in discrete optimisation have been less conclusive. Furthermore, a rigorous runtime analysis that explains how self-adaptation can lead to asymptotic speedups has been missing. This paper provides the first such analysis for discrete, population-based EAs. We apply level-based analysis to show how a self-adaptive EA is capable of fine-tuning its mutation rate, leading to exponential speedups over EAs using fixed mutation rates.
    Comment: To appear in the Proceedings of the 14th International Conference on Parallel Problem Solving from Nature (PPSN)
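    The core mechanism of the abstract, a mutation rate encoded alongside the bit string and perturbed before use, can be sketched as below. The multiplicative factor, the rate bounds, and the function name are illustrative assumptions, not details taken from the paper.

```python
import random

def self_adaptive_mutate(bits, rate, factor=2.0):
    """One self-adaptive variation step (illustrative sketch).

    The mutation rate travels with the individual: it is first
    perturbed (multiplied or divided by `factor` with equal
    probability), then the perturbed rate is used to flip bits.
    `factor` and the clamping bounds are hypothetical choices.
    """
    n = len(bits)
    # Mutate the strategy parameter before using it.
    new_rate = rate * factor if random.random() < 0.5 else rate / factor
    new_rate = min(max(new_rate, 1.0 / n), 0.5)  # keep the rate in a sane range
    # Flip each bit independently with the new rate.
    child = [b ^ (random.random() < new_rate) for b in bits]
    return child, new_rate
```

    In a full EA, selection then acts on (bitstring, rate) pairs jointly, which is what allows good rates to hitchhike with good solutions.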

    On Non-Elitist Evolutionary Algorithms Optimizing Fitness Functions with a Plateau

    We consider the expected runtime of non-elitist evolutionary algorithms (EAs) when they are applied to a family of fitness functions with a plateau of second-best fitness in a Hamming ball of radius r around a unique global optimum. On the one hand, using the level-based theorems, we obtain polynomial upper bounds on the expected runtime for some modes of non-elitist EAs based on unbiased mutation, and on bitwise mutation in particular. On the other hand, we show that the EA with fitness-proportionate selection is inefficient if bitwise mutation is used with the standard setting of the mutation probability.
    Comment: 14 pages, accepted for proceedings of Mathematical Optimization Theory and Operations Research (MOTOR 2020). arXiv admin note: text overlap with arXiv:1908.0868
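    A fitness function of the shape described, flat second-best fitness everywhere inside the Hamming ball except at the optimum itself, might look as follows. The concrete fitness values outside the ball are an illustrative assumption; only the plateau structure is taken from the abstract.

```python
def plateau_fitness(bits, optimum, r):
    """Plateau function of the kind analysed above (sketch).

    All points within Hamming distance r of the unique optimum
    (except the optimum itself) share the same second-best fitness,
    forming a plateau; the values used here are illustrative.
    """
    n = len(bits)
    d = sum(b != o for b, o in zip(bits, optimum))  # Hamming distance
    if d == 0:
        return n + 1          # unique global optimum
    if d <= r:
        return n              # flat plateau of second-best fitness
    return n - d              # outside the ball: gradient towards it
```

    On the plateau, selection gives the algorithm no directional signal, so progress relies on mutation performing an unbiased walk towards the optimum, which is why the choice of mutation operator matters so much there.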

    Fast Mutation in Crossover-based Algorithms

    The heavy-tailed mutation operator proposed in Doerr, Le, Makhmara, and Nguyen (GECCO 2017), called \emph{fast mutation} to agree with the previously used language, was so far proven to be advantageous only in mutation-based algorithms. There, it can relieve the algorithm designer from finding the optimal mutation rate and nevertheless obtain a performance close to the one that the optimal mutation rate gives. In this first runtime analysis of a crossover-based algorithm using a heavy-tailed choice of the mutation rate, we show an even stronger impact. For the (1+(\lambda,\lambda)) genetic algorithm optimizing the OneMax benchmark function, we show that with a heavy-tailed mutation rate a linear runtime can be achieved. This is asymptotically faster than what can be obtained with any static mutation rate, and is asymptotically equivalent to the runtime of the self-adjusting version of the parameter choice of the (1+(\lambda,\lambda)) genetic algorithm. This result is complemented by an empirical study which shows the effectiveness of fast mutation also on random satisfiable Max-3SAT instances.
    Comment: This is a version of the same paper presented at GECCO 2020, completed with the proofs which were missing because of the page limit
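    The heavy-tailed choice of mutation rate can be sketched as sampling a mutation strength from a truncated power law. The exponent beta = 1.5 and the truncation at n/2 are common illustrative choices for such operators, not parameters asserted by this particular paper.

```python
import random

def heavy_tailed_rate(n, beta=1.5):
    """Draw a mutation rate from a power-law distribution (sketch).

    Following the fast-mutation idea, the mutation strength alpha
    in {1, ..., n // 2} is chosen with probability proportional to
    alpha ** (-beta); beta = 1.5 is an illustrative exponent.
    The resulting mutation rate is alpha / n.
    """
    upper = max(1, n // 2)
    weights = [a ** (-beta) for a in range(1, upper + 1)]
    alpha = random.choices(range(1, upper + 1), weights=weights)[0]
    return alpha / n
```

    Because the distribution puts non-trivial mass on every strength up to n/2, the algorithm occasionally tries large jumps without the designer having to fix a single rate in advance.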

    A Self-adaptive Coevolutionary Algorithm

    Coevolutionary algorithms are useful computational abstractions of adversarial behavior, demonstrating the multiple ways in which populations of competing adversaries influence one another. We introduce the ability for each competitor's mutation rate to evolve through self-adaptation. Because dynamic environments are frequently addressed with self-adaptation, we set up dynamic problem environments to investigate the impact of this ability. For a simple bilinear problem, a sensitivity analysis of the adaptive method's parameters reveals that it is robust over a range of multiplicative rate factors when the rate is changed up or down with equal probability. An empirical study determines that each population's mutation rates converge to values close to the error threshold. Mutation rate dynamics are complex when both populations adapt their rates. Large-scale empirical self-adaptation results reveal that both reasonable solutions and rates can be found, addressing the challenge of selecting ideal static mutation rates in coevolutionary algorithms. The algorithm's payoffs are also robust: they are rarely poor, frequently as high as the payoff of the static rate to which they converge, and on rare runs higher.
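    The multiplicative up/down rate scheme mentioned above can be sketched in isolation as follows. The factor and clamping bounds are illustrative assumptions; note that without selection pressure the two rates simply perform multiplicative random walks between the bounds, whereas in the actual algorithm selection biases these walks towards the error threshold.

```python
import random

def adapt_rate(rate, factor=2.0, low=1e-3, high=0.5):
    """Multiplicative self-adaptation step for one competitor (sketch).

    The rate is multiplied or divided by `factor` with equal
    probability, matching the up/down scheme described above;
    the bounds are hypothetical.
    """
    rate = rate * factor if random.random() < 0.5 else rate / factor
    return min(high, max(low, rate))

def coevolve_rates(steps=1000, start=0.1):
    """Track how two competitors' rates wander under self-adaptation."""
    rx, ry = start, start
    for _ in range(steps):
        rx, ry = adapt_rate(rx), adapt_rate(ry)
    return rx, ry
```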

    Self-adaptation Can Help Evolutionary Algorithms Track Dynamic Optima

    Analysing Equilibrium States for Population Diversity

    Population diversity is crucial in evolutionary algorithms, as it helps with global exploration and facilitates the use of crossover. Despite many runtime analyses showing advantages of population diversity, we have no clear picture of how diversity evolves over time. We study how the population diversity of (\mu+1) algorithms, measured by the sum of pairwise Hamming distances, evolves in a fitness-neutral environment. We give an exact formula for the drift of population diversity and show that it is driven towards an equilibrium state. Moreover, we bound the expected time for getting close to the equilibrium state. We find that these dynamics, including the location of the equilibrium, are unaffected by surprisingly many algorithmic choices. All unbiased mutation operators with the same expected number of bit flips have the same effect on the expected diversity. Many crossover operators have no effect at all, including all binary unbiased, respectful operators. We review crossover operators from the literature and identify crossovers that are neutral towards the evolution of diversity and crossovers that are not.
    Comment: To appear at GECCO 202
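    The diversity measure used above, the sum of pairwise Hamming distances, is straightforward to compute; a column-wise identity (at each bit position, k ones among mu individuals yield k*(mu-k) mismatched pairs) gives an equivalent but faster formulation. Both versions are sketches and assume equal-length 0/1 sequences.

```python
from itertools import combinations

def population_diversity(pop):
    """Sum of pairwise Hamming distances over the population (sketch)."""
    return sum(
        sum(a != b for a, b in zip(x, y))
        for x, y in combinations(pop, 2)
    )

def population_diversity_fast(pop):
    """Equivalent column-wise computation: at each bit position,
    k ones among mu individuals contribute k * (mu - k) mismatched
    pairs, so summing over positions gives the same total."""
    mu = len(pop)
    return sum(k * (mu - k) for k in (sum(col) for col in zip(*pop)))
```

    The quadratic pairwise version costs O(mu^2 * n), while the column-wise identity brings this down to O(mu * n), which matters when tracking diversity over many generations.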