Offspring Population Size Matters when Comparing Evolutionary Algorithms with Self-Adjusting Mutation Rates
We analyze the performance of the 2-rate Evolutionary Algorithm
(EA) with self-adjusting mutation rate control, its 3-rate counterpart, and a
(1+λ) EA variant using multiplicative update rules on the OneMax
problem. We compare their efficiency across a range of offspring population
sizes λ and problem sizes.
Our empirical results show that the ranking of the algorithms is very
consistent across all tested dimensions, but strongly depends on the population
size. While for small values of λ the 2-rate EA performs best, the
multiplicative updates become superior starting from some threshold value of
λ between 50 and 100. Interestingly, for population sizes around 50,
the (1+λ) EA with static mutation rates performs on par with the best
of the self-adjusting algorithms.
We also consider how the lower bound for the mutation rate
influences the efficiency of the algorithms. We observe that for the 2-rate EA
and the EA with multiplicative update rules, the more generous lower bound
gives better results than the stricter one when λ is small. For both algorithms
the situation reverses for large λ.
Comment: To appear at the Genetic and Evolutionary Computation Conference
(GECCO'19). v2: minor language revision
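The 2-rate self-adjustment compared above can be sketched in a few lines. The following is an illustrative Python reconstruction of the idea only; the function name, the rate bounds, and the update constants are assumptions, not the authors' implementation:

```python
import random

def two_rate_ea_onemax(n, lam, max_gens=10000):
    """Sketch of the 2-rate (1+lambda) EA on OneMax: half of the
    offspring are created with mutation rate r/(2n), the other half
    with rate 2r/n, and r is then shifted toward the rate that
    produced the best offspring (or, with probability 1/2, adjusted
    randomly). Returns the number of generations used."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = sum(x)                           # OneMax fitness: number of ones
    r = 2.0                               # current mutation strength
    for gen in range(max_gens):
        if fx == n:
            return gen                    # optimum found
        best_y, best_fy, best_rate = None, -1, r
        for i in range(lam):
            rate = r / 2 if i < lam // 2 else 2 * r
            p = rate / n                  # per-bit mutation probability
            y = [b ^ (random.random() < p) for b in x]
            fy = sum(y)
            if fy > best_fy:
                best_y, best_fy, best_rate = y, fy, rate
        if best_fy >= fx:                 # elitist acceptance
            x, fx = best_y, best_fy
        # self-adjustment: follow the winning rate, or move randomly
        r = best_rate if random.random() < 0.5 else random.choice([r / 2, 2 * r])
        r = min(max(r, 2.0), n / 4)       # keep r within sensible bounds
    return max_gens
```

One plausible reading of the population-size dependence reported above is that with few offspring per generation, the rate that wins a generation carries little statistical signal, so rate-following updates benefit more from larger λ.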
Self-adjusting Population Sizes for Non-elitist Evolutionary Algorithms: Why Success Rates Matter
Evolutionary algorithms (EAs) are general-purpose optimisers that come with several
parameters like the sizes of parent and offspring populations or the mutation rate. It is
well known that the performance of EAs may depend drastically on these parameters.
Recent theoretical studies have shown that self-adjusting parameter control mechanisms that tune parameters during the algorithm run can provably outperform the best
static parameters in EAs on discrete problems. However, the majority of these studies
concerned elitist EAs, and we do not have a clear answer as to whether the same mechanisms can be applied to non-elitist EAs. We study one of the best-known parameter
control mechanisms, the one-fifth success rule, to control the offspring population
size λ in the non-elitist (1, λ) EA. It is known that the (1, λ) EA has a sharp threshold
with respect to the choice of λ where the expected runtime on the benchmark function OneMax changes from polynomial to exponential time. Hence, it is not clear
whether parameter control mechanisms are able to find and maintain suitable values
of λ. For OneMax we show that the answer crucially depends on the success rate s
(i.e., a one-(s+1)-th success rule). We prove that, if the success rate is appropriately
small, the self-adjusting (1, λ) EA optimises OneMax in O(n) expected generations
and O(n log n) expected evaluations, the best possible runtime for any unary unbiased
black-box algorithm. A small success rate is crucial: we also show that if the success
rate is too large, the algorithm has an exponential runtime on OneMax and other
functions with similar characteristics.
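The one-(s+1)-th success rule studied above can be sketched as follows. This is a hypothetical Python illustration on OneMax (the defaults for s and F and the cap on λ are assumptions chosen for illustration), not the analysed algorithm verbatim:

```python
import random

def self_adjusting_comma_ea(n, s=0.5, F=1.5, max_evals=200000):
    """Sketch of a non-elitist (1,lambda) EA whose offspring population
    size lambda is controlled by a one-(s+1)-th success rule: after a
    successful generation lambda is divided by F, after an unsuccessful
    one it is multiplied by F**(1/s), so lambda stabilises where
    roughly one generation in s+1 is successful. Returns evaluations."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = sum(x)                           # OneMax fitness
    lam, evals = 1.0, 0
    while evals < max_evals:
        if fx == n:
            return evals                  # optimum found
        k = max(1, round(lam))
        best_y, best_fy = None, -1
        for _ in range(k):
            # standard bit mutation with rate 1/n
            y = [b ^ (random.random() < 1.0 / n) for b in x]
            fy = sum(y)
            evals += 1
            if fy > best_fy:
                best_y, best_fy = y, fy
        success = best_fy > fx
        x, fx = best_y, best_fy           # non-elitist: best offspring always replaces the parent
        lam = lam / F if success else lam * F ** (1.0 / s)
        lam = min(max(lam, 1.0), float(n))  # keep lambda in [1, n]
    return evals
```

The knife-edge behaviour described above shows up directly in this update: a small s makes λ grow aggressively after failures, keeping the non-elitist algorithm above the threshold where fallbacks would dominate.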
Self-Adjusting Evolutionary Algorithms for Multimodal Optimization
Recent theoretical research has shown that self-adjusting and self-adaptive
mechanisms can provably outperform static settings in evolutionary algorithms
for binary search spaces. However, the vast majority of these studies focuses
on unimodal functions which do not require the algorithm to flip several bits
simultaneously to make progress. In fact, existing self-adjusting algorithms
are not designed to detect local optima and offer no obvious benefit for
crossing large Hamming gaps.
We suggest a mechanism called stagnation detection that can be added as a
module to existing evolutionary algorithms (both with and without prior
self-adjusting mechanisms). Added to a simple (1+1) EA, we prove an expected
runtime on the well-known Jump benchmark that corresponds to an asymptotically
optimal parameter setting and outperforms other mechanisms for multimodal
optimization like heavy-tailed mutation. We also investigate the module in the
context of a self-adjusting (1+λ) EA and show that it combines the
previous benefits of this algorithm on unimodal problems with more efficient
multimodal optimization.
To explore the limitations of the approach, we additionally present an
example where both self-adjusting mechanisms, including stagnation detection,
do not help to find a beneficial setting of the mutation rate. Finally, we
investigate our module for stagnation detection experimentally.
Comment: 26 pages. Full version of a paper appearing at GECCO 2020
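The stagnation-detection module can be sketched as follows. This hypothetical Python illustration pairs a (1+1) EA with a simplified failure-counter threshold (proportional to C(n,r)·log n) and the Jump benchmark mentioned above; the constants and names are assumptions, not the exact values from the paper:

```python
import math
import random

def jump(x, k=3):
    """Jump_k benchmark: like OneMax, but with a fitness valley of
    width k just below the optimum that forces a k-bit flip."""
    n, ones = len(x), sum(x)
    return k + ones if ones <= n - k or ones == n else n - ones

def sd_one_plus_one_ea(fitness, n, target, max_evals=500000):
    """Sketch of stagnation detection added to a (1+1) EA: a counter
    tracks unsuccessful mutations at the current strength r (mutation
    rate r/n); once no improvement has been seen for long enough that
    an r-bit improving flip would very likely have occurred, r is
    increased. Any strict improvement resets r to 1."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    r, fails = 1, 0
    for evals in range(1, max_evals + 1):
        if fx >= target:
            return evals - 1              # optimum reached
        p = r / n
        y = [b ^ (random.random() < p) for b in x]
        fy = fitness(y)
        if fy > fx:                       # strict improvement: reset strength
            x, fx, r, fails = y, fy, 1, 0
        else:
            fails += 1
            # stagnation detected at strength r: assume escaping the
            # local optimum needs more than r flips, so raise r
            if fails > math.comb(n, r) * math.log(n):
                r, fails = min(r + 1, n // 2), 0
    return max_evals
```

On a Jump instance the counter climbs at r = 1, 2, ... until r matches the gap width, after which the k-bit escape flip has constant probability per step, matching the intuition behind the asymptotically optimal parameter setting claimed above.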