Self-Adjusting Population Sizes for Non-Elitist Evolutionary Algorithms: Why Success Rates Matter
Evolutionary algorithms (EAs) are general-purpose optimisers that come with
several parameters like the sizes of parent and offspring populations or the
mutation rate. It is well known that the performance of EAs may depend
drastically on these parameters. Recent theoretical studies have shown that
self-adjusting parameter control mechanisms that tune parameters during the
algorithm run can provably outperform the best static parameters in EAs on
discrete problems. However, the majority of these studies concerned elitist EAs,
and we do not have a clear answer on whether the same mechanisms can be applied
to non-elitist EAs.
We study one of the best-known parameter control mechanisms, the one-fifth
success rule, to control the offspring population size λ in the
non-elitist (1,λ) EA. It is known that the (1,λ) EA has a sharp
threshold with respect to the choice of λ where the expected runtime on
the benchmark function OneMax changes from polynomial to exponential time.
Hence, it is not clear whether parameter control mechanisms are able to find
and maintain suitable values of λ.
For OneMax we show that the answer crucially depends on the success rate s
(i.e. a one-(s+1)-th success rule). We prove that, if the success rate is
appropriately small, the self-adjusting (1,λ) EA optimises OneMax in O(n)
expected generations and O(n log n) expected evaluations, the best
possible runtime for any unary unbiased black-box algorithm. A small success
rate is crucial: we also show that if the success rate is too large, the
algorithm has an exponential runtime on OneMax and other functions with similar
characteristics.

Comment: This is an extended version of a paper that appeared in the
Proceedings of the Genetic and Evolutionary Computation Conference (GECCO
2021).
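A minimal Python sketch of the kind of success-based control mechanism discussed above, applied to a non-elitist (1,λ) EA on OneMax. The update rule assumed here (divide λ by a factor F on a successful generation, multiply by F^(1/s) on a failure) is the standard one-(s+1)-th success rule; the constant F, the success criterion, and all names are illustrative and not taken from the paper's pseudocode.

```python
import random

def onemax(x):
    """OneMax fitness: number of one-bits; maximised by the all-ones string."""
    return sum(x)

def self_adjusting_one_comma_lambda_ea(n, s=0.5, F=1.5, max_gens=100_000):
    """Non-elitist (1,lambda) EA with a one-(s+1)-th success rule on lambda.

    On success, lambda is divided by F; on failure, multiplied by F**(1/s),
    so at equilibrium roughly one in s+1 generations succeeds.  Here a
    generation counts as a success if the best offspring is at least as fit
    as the parent; the paper's exact criterion may differ.
    Returns (generations, evaluations) on reaching the optimum, else None.
    """
    parent = [random.randint(0, 1) for _ in range(n)]
    lam = 1.0
    evals = 0
    for gen in range(max_gens):
        offspring = []
        for _ in range(max(1, round(lam))):
            # standard bit mutation: flip each bit independently with prob 1/n
            child = [bit ^ (random.random() < 1 / n) for bit in parent]
            offspring.append(child)
            evals += 1
        best = max(offspring, key=onemax)
        if onemax(best) >= onemax(parent):   # success: shrink lambda
            lam = max(1.0, lam / F)
        else:                                # failure: grow lambda
            lam = lam * F ** (1 / s)
        parent = best                        # non-elitist: always accept best offspring
        if onemax(parent) == n:
            return gen + 1, evals
    return None
```

Note that s = 0.5 here is "appropriately small" in the sense of the abstract (s < 1); a large s would make λ recover too slowly after failures, which is the regime shown to cause exponential runtimes.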