Offspring Population Size Matters when Comparing Evolutionary Algorithms with Self-Adjusting Mutation Rates
We analyze the performance of the 2-rate (1+λ) Evolutionary Algorithm
(EA) with self-adjusting mutation rate control, its 3-rate counterpart, and a
(1+λ) EA variant using multiplicative update rules on the OneMax
problem. We compare their efficiency across a range of offspring population
sizes λ and problem sizes n.
Our empirical results show that the ranking of the algorithms is very
consistent across all tested dimensions, but strongly depends on the population
size. While the 2-rate EA performs best for small values of λ, the
multiplicative updates become superior starting from some threshold value of
λ between 50 and 100. Interestingly, for population sizes around 50,
the (1+λ) EA with static mutation rates performs on par with the best
of the self-adjusting algorithms.
We also consider how the lower bound for the mutation rate
influences the efficiency of the algorithms. We observe that for the 2-rate EA
and the EA with multiplicative update rules the more generous lower bound
gives better results than the stricter one when λ is
small. For both algorithms the situation reverses for large λ.
Comment: To appear at the Genetic and Evolutionary Computation Conference
(GECCO'19). v2: minor language revision
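As a rough illustration of the kind of algorithm being compared, the following is a minimal Python sketch of a 2-rate (1+λ) EA on OneMax. The rate-update rule, the clipping interval [2, n/4], and all parameter names are simplifying assumptions for illustration, not the exact scheme analyzed in the paper.

```python
import random

def onemax(x):
    """OneMax fitness: the number of one-bits."""
    return sum(x)

def two_rate_ea(n, lam, max_evals=100_000, seed=0):
    """Sketch of the 2-rate (1+lambda) EA with self-adjusting mutation
    rate on OneMax. Half of the offspring mutate with strength r/2, the
    other half with strength 2r (probability strength/n per bit); the
    strength that produced the best offspring biases the next update."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    r, evals = 2.0, 0
    while onemax(parent) < n and evals < max_evals:
        best, best_fit, best_rate = None, -1, r
        for i in range(lam):
            rate = r / 2 if i < lam // 2 else 2 * r   # low half vs. high half
            child = [b ^ (rng.random() < rate / n) for b in parent]
            f, evals = onemax(child), evals + 1
            if f > best_fit:
                best, best_fit, best_rate = child, f, rate
        if best_fit >= onemax(parent):                # elitist replacement
            parent = best
        # adopt the winning strength with prob. 1/2, otherwise move randomly
        r = best_rate if rng.random() < 0.5 else rng.choice([r / 2, 2 * r])
        r = min(max(r, 2.0), n / 4)                   # illustrative clipping
    return parent, evals
```

The elitist replacement guarantees the parent fitness never decreases, so the sketch always terminates on OneMax given a sufficient evaluation budget.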
When Does Hillclimbing Fail on Monotone Functions: An entropy compression argument
Hillclimbing is an essential part of any optimization algorithm. An important
benchmark class for hillclimbing algorithms on pseudo-Boolean functions is the class of (strictly) monotone functions, on which a surprising number
of hillclimbers fail to be efficient. For example, the (1+1)-Evolutionary
Algorithm is a standard hillclimber which flips each bit independently with
probability c/n in each round. Perhaps surprisingly, this algorithm shows a
phase transition: it optimizes any monotone pseudo-Boolean function in
quasilinear time if c < 1, but there are monotone functions for which the
algorithm needs exponential time if c is a sufficiently large constant. But so far it was unclear whether
the threshold is at c = 1.
In this paper we show how Moser's entropy compression argument can be adapted
to this situation; that is, we show that a long runtime would allow us to
encode the random steps of the algorithm with fewer bits than their entropy.
Thus there exists a constant c₀ > 1 such that for all c ≤ c₀ the
(1+1)-Evolutionary Algorithm with rate c/n finds the optimum in a quasilinear number of steps in expectation.
Comment: 14 pages, no figures
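The hillclimber under discussion is easy to state in code. Below is a minimal sketch of the (1+1)-EA with rate parameter c, run here on OneMax (the simplest strictly monotone function); the budget and parameter names are illustrative assumptions.

```python
import random

def one_plus_one_ea(f, n, c=1.0, max_steps=200_000, seed=0):
    """Sketch of the (1+1)-EA with mutation rate c/n: in each round every
    bit is flipped independently with probability c/n, and the offspring
    replaces the parent if it is at least as fit. The rate parameter c is
    the quantity whose threshold behaviour the abstract discusses."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    p = c / n
    for step in range(1, max_steps + 1):
        y = [b ^ (rng.random() < p) for b in x]
        if f(y) >= f(x):
            x = y
        if sum(x) == n:   # all-ones is the optimum of a monotone function
            return step
    return None           # budget exhausted

# OneMax: in the quasilinear regime c < 1 this finishes quickly
steps = one_plus_one_ea(lambda x: sum(x), n=50, c=0.9, seed=3)
```

On hard monotone instances with a large rate c, the same loop can instead take exponential time, which is exactly the phase transition the paper studies.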
Analysing Equilibrium States for Population Diversity
Population diversity is crucial in evolutionary algorithms as it helps with
global exploration and facilitates the use of crossover. Despite many runtime
analyses showing advantages of population diversity, we have no clear picture
of how diversity evolves over time. We study how the population diversity of
evolutionary algorithms, measured by the sum of pairwise Hamming distances,
evolves in a fitness-neutral environment. We give an exact formula for the
drift of population diversity and show that it is driven towards an equilibrium
state. Moreover, we bound the expected time for getting close to the
equilibrium state. We find that these dynamics, including the location of the
equilibrium, are unaffected by surprisingly many algorithmic choices. All
unbiased mutation operators with the same expected number of bit flips have the
same effect on the expected diversity. Many crossover operators have no effect
at all, including all binary unbiased, respectful operators. We review
crossover operators from the literature and identify crossovers that are
neutral towards the evolution of diversity and crossovers that are not.
Comment: To appear at GECCO 2023
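The diversity measure used here is straightforward to compute. A small sketch, together with an equivalent column-wise identity (a position with k ones among μ individuals contributes k·(μ−k) differing pairs), which is often convenient in such analyses:

```python
from itertools import combinations

def hamming(x, y):
    """Hamming distance between two equal-length bit strings."""
    return sum(a != b for a, b in zip(x, y))

def diversity(pop):
    """Population diversity as in the abstract: the sum of pairwise
    Hamming distances over all unordered pairs of individuals."""
    return sum(hamming(x, y) for x, y in combinations(pop, 2))

def diversity_columnwise(pop):
    """Equivalent O(mu * n) computation: a bit position with k ones among
    mu individuals contributes k * (mu - k) differing pairs."""
    mu = len(pop)
    return sum(k * (mu - k) for k in (sum(col) for col in zip(*pop)))
```

Both functions return the same value; the column-wise form makes it apparent that the diversity depends only on per-position one-bit counts.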
Self-Adjusting Evolutionary Algorithms for Multimodal Optimization
Recent theoretical research has shown that self-adjusting and self-adaptive
mechanisms can provably outperform static settings in evolutionary algorithms
for binary search spaces. However, the vast majority of these studies focuses
on unimodal functions which do not require the algorithm to flip several bits
simultaneously to make progress. In fact, existing self-adjusting algorithms
are not designed to detect local optima and do not have any obvious benefit to
cross large Hamming gaps.
We suggest a mechanism called stagnation detection that can be added as a
module to existing evolutionary algorithms (both with and without prior
self-adjusting mechanisms). When added to a simple (1+1) EA, we prove an expected
runtime bound on the well-known Jump benchmark that corresponds to an asymptotically
optimal parameter setting and outperforms other mechanisms for multimodal
optimization like heavy-tailed mutation. We also investigate the module in the
context of a self-adjusting (1+λ) EA and show that it combines the
previous benefits of this algorithm on unimodal problems with more efficient
multimodal optimization.
To explore the limitations of the approach, we additionally present an
example where both self-adjusting mechanisms, including stagnation detection,
do not help to find a beneficial setting of the mutation rate. Finally, we
investigate our module for stagnation detection experimentally.
Comment: 26 pages. Full version of a paper appearing at GECCO 2020
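To give a feel for the idea, here is a hedged Python sketch of a (1+1) EA with a stagnation-detection module in the spirit of the abstract: after many unsuccessful steps at mutation strength r, the strength is increased, so the algorithm eventually samples a strength large enough to cross a Hamming gap. The threshold formula, the starting point on the plateau, and the strength cap are illustrative assumptions, not the paper's exact construction.

```python
import math
import random

def jump(x, m):
    """Jump_m benchmark: like OneMax, but with a fitness valley of width m
    just below the all-ones optimum."""
    n, k = len(x), sum(x)
    return m + k if k == n or k <= n - m else n - k

def sd_one_plus_one_ea(n, m, max_steps=500_000, seed=0):
    """(1+1) EA with an illustrative stagnation-detection module: count
    steps without strict improvement at strength r; past a threshold,
    give up on r and move to r + 1."""
    rng = random.Random(seed)
    x = [1] * (n - m) + [0] * m        # start on the Jump plateau for brevity
    best, r, fails = jump(x, m), 1, 0
    for step in range(1, max_steps + 1):
        p = r / n
        y = [b ^ (rng.random() < p) for b in x]
        fy = jump(y, m)
        if fy > best:                  # strict improvement resets the module
            x, best, r, fails = y, fy, 1, 0
        else:
            fails += 1
            # illustrative threshold, roughly (e*n/r)^r * ln(n) trials
            if fails > (math.e * n / r) ** r * math.log(n):
                r, fails = min(r + 1, n // 2), 0
        if best == n + m:              # global optimum reached
            return step
    return None
```

Once r reaches the gap size m, a single mutation flipping exactly the m missing bits crosses the valley, which is why the escalating strength helps where a static rate of 1/n would stall.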