277 research outputs found
Optimal Parameter Choices Through Self-Adjustment: Applying the 1/5-th Rule in Discrete Settings
While evolutionary algorithms are known to be very successful for a broad
range of applications, the algorithm designer is often left with many
algorithmic choices, for example, the size of the population, the mutation
rates, and the crossover rates of the algorithm. These parameters are known to
have a crucial influence on the optimization time, and thus need to be chosen
carefully, a task that often requires substantial efforts. Moreover, the
optimal parameters can change during the optimization process. It is therefore
of great interest to design mechanisms that dynamically choose best-possible
parameters. An example for such an update mechanism is the one-fifth success
rule for step-size adaptation in evolution strategies. While in continuous
domains this principle is well understood also from a mathematical point of
view, no comparable theory is available for problems in discrete domains.
In this work we show that the one-fifth success rule can be effective also in
discrete settings. We consider the (1 + (λ, λ)) GA proposed in
[Doerr/Doerr/Ebel: From black-box complexity to designing new genetic
algorithms, TCS 2015]. We prove that if its population size is chosen according
to the one-fifth success rule, then the expected optimization time on
OneMax is linear. This is better than what any static
population size can achieve, and it is asymptotically optimal also among
all adaptive parameter choices.
Comment: This is the full version of a paper that is to appear at GECCO 201
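The abstract above applies the one-fifth success rule to a population size in a discrete setting. As a rough illustration only, the following sketch transplants that rule onto a plain (1+λ) EA on OneMax; the factor F = 1.5, the F**(1/4) failure update, and the caps are assumptions for the sketch, not the paper's exact (1 + (λ, λ)) GA.

```python
import random

def onemax(x):
    """Fitness: the number of one-bits in the bit string."""
    return sum(x)

def self_adjusting_one_plus_lambda_ea(n, budget=200_000, F=1.5, seed=1):
    """(1+lambda) EA on OneMax whose offspring population size lambda is
    controlled by a one-fifth success rule: after an improving generation
    lambda is divided by F, otherwise multiplied by F**(1/4), so lambda
    is stable when roughly one in five generations succeeds."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = onemax(x)
    lam, evals = 1.0, 0
    while fx < n and evals < budget:
        k = max(1, round(lam))
        best, fbest = None, -1
        for _ in range(k):
            # standard bit mutation: flip each bit independently with prob. 1/n
            y = [1 - b if rng.random() < 1 / n else b for b in x]
            fy = onemax(y)
            evals += 1
            if fy > fbest:
                best, fbest = y, fy
        if fbest > fx:                   # success: fewer offspring suffice
            x, fx = best, fbest
            lam = max(1.0, lam / F)
        else:                            # failure: invest in more offspring
            lam = min(float(n), lam * F ** 0.25)
    return fx, evals

fx, evals = self_adjusting_one_plus_lambda_ea(60)
print(fx, evals)
```

At the equilibrium of this rule one success per four failures leaves λ unchanged, since (1/F) · (F**(1/4))**4 = 1.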
Runtime Analysis for Self-adaptive Mutation Rates
We propose and analyze a self-adaptive version of the (1, λ)
evolutionary algorithm in which the current mutation rate is part of the
individual and thus also subject to mutation. A rigorous runtime analysis on
the OneMax benchmark function reveals that a simple local mutation scheme for
the rate leads to an expected optimization time (number of fitness evaluations)
of O(nλ/log λ + n log n) when λ is at least C ln n for
some constant C. For all such values of λ, this
performance is asymptotically best possible among all λ-parallel
mutation-based unbiased black-box algorithms.
Our result shows that self-adaptation in evolutionary computation can find
complex optimal parameter settings on the fly. At the same time, it proves that
a relatively complicated self-adjusting scheme for the mutation rate proposed
by Doerr, Gie{\ss}en, Witt, and Yang~(GECCO~2017) can be replaced by our simple
endogenous scheme.
On the technical side, the paper contributes new tools for the analysis of
two-dimensional drift processes arising in the analysis of dynamic parameter
choices in EAs, including bounds on occupation probabilities in processes with
non-constant drift.
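The endogenous scheme described above can be sketched minimally, assuming a simple halve-or-double local mutation of the rate and illustrative values for the population size, rate bounds, and budget; the paper's exact algorithm and constants may differ.

```python
import random

def onemax(x):
    return sum(x)

def self_adaptive_ea(n, lam=12, budget=6000, seed=3):
    """Non-elitist (1,lambda)-style EA on OneMax in which the mutation
    rate is encoded in each individual: every offspring first halves or
    doubles the parent's rate (a simple local scheme), then flips each
    bit with its own, freshly mutated rate.  Constants are illustrative."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    rate = 2.0 / n                     # the rate travels with the individual
    best_ever = onemax(x)
    for _ in range(budget // lam):     # each generation costs lam evaluations
        offspring = []
        for _ in range(lam):
            r = min(0.5, max(1.0 / n ** 2, rate * rng.choice((0.5, 2.0))))
            y = [1 - b if rng.random() < r else b for b in x]
            offspring.append((onemax(y), y, r))
        # comma selection: the best offspring replaces the parent,
        # bringing its own rate along with it
        f, x, rate = max(offspring)
        best_ever = max(best_ever, f)
    return best_ever

print(self_adaptive_ea(40))
```

The point of the scheme is that the rate is selected only indirectly, through the fitness of the offspring it produced, yet (per the result above) this suffices to track good rate values on the fly.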
Offspring Population Size Matters when Comparing Evolutionary Algorithms with Self-Adjusting Mutation Rates
We analyze the performance of the 2-rate (1 + λ) Evolutionary Algorithm
(EA) with self-adjusting mutation rate control, its 3-rate counterpart, and a
(1 + λ) EA variant using multiplicative update rules on the OneMax
problem. We compare their efficiency across a broad range of offspring
population sizes λ and problem sizes n.
Our empirical results show that the ranking of the algorithms is very
consistent across all tested dimensions, but strongly depends on the population
size. While for small values of λ the 2-rate EA performs best, the
multiplicative updates become superior once λ exceeds some threshold value
between 50 and 100. Interestingly, for population sizes around 50,
the (1 + λ) EA with static mutation rate performs on par with the best
of the self-adjusting algorithms.
We also consider how the lower bound imposed on the mutation rate
influences the efficiency of the algorithms. We observe that for the 2-rate EA
and the EA with multiplicative update rules a more generous lower bound
gives better results when λ is small. For both algorithms the situation
reverses for large λ.
Comment: To appear at the Genetic and Evolutionary Computation Conference
(GECCO'19). v2: minor language revision
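The 2-rate mechanism compared above can be sketched as follows. The update step (adopt the rate class of the best offspring with probability 1/2, otherwise a random one of the two) follows the usual description of this scheme; the caps, the lower bound, and the remaining constants are illustrative assumptions.

```python
import random

def onemax(x):
    return sum(x)

def two_rate_ea(n, lam=10, budget=100_000, r_min=None, seed=7):
    """(1+lambda) EA with a two-rate scheme: half of the offspring mutate
    with rate r/2, half with 2r; r is then updated towards the rate that
    produced the best offspring (with prob. 1/2, otherwise a random one
    of the two), clipped to [r_min, 1/4].  Caps are illustrative."""
    rng = random.Random(seed)
    r_min = r_min if r_min is not None else 1.0 / n   # lower bound on the rate
    x = [rng.randint(0, 1) for _ in range(n)]
    fx, r, evals = onemax(x), 2.0 / n, 0
    while fx < n and evals < budget:
        best, fbest, rbest = None, -1, r
        for i in range(lam):
            ri = r / 2 if i < lam // 2 else 2 * r     # two rate classes
            y = [1 - b if rng.random() < ri else b for b in x]
            fy = onemax(y)
            evals += 1
            if fy > fbest:
                best, fbest, rbest = y, fy, ri
        if fbest >= fx:                               # elitist acceptance
            x, fx = best, fbest
        chosen = rbest if rng.random() < 0.5 else rng.choice((r / 2, 2 * r))
        r = min(0.25, max(r_min, chosen))
    return fx, evals

fx, evals = two_rate_ea(50)
print(fx, evals)
```

Passing `r_min=1.0 / n**2` instead of the default reproduces the "more generous lower bound" setting discussed in the abstract.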
Self-Adjusting Population Sizes for Non-Elitist Evolutionary Algorithms: Why Success Rates Matter
Evolutionary algorithms (EAs) are general-purpose optimisers that come with
several parameters like the sizes of parent and offspring populations or the
mutation rate. It is well known that the performance of EAs may depend
drastically on these parameters. Recent theoretical studies have shown that
self-adjusting parameter control mechanisms that tune parameters during the
algorithm run can provably outperform the best static parameters in EAs on
discrete problems. However, the majority of these studies concerned elitist EAs
and we do not have a clear answer on whether the same mechanisms can be applied
for non-elitist EAs.
We study one of the best-known parameter control mechanisms, the one-fifth
success rule, to control the offspring population size λ in the
non-elitist (1, λ) EA. It is known that the (1, λ) EA has a sharp
threshold with respect to the choice of λ where the expected runtime on
the benchmark function OneMax changes from polynomial to exponential time.
Hence, it is not clear whether parameter control mechanisms are able to find
and maintain suitable values of λ.
For OneMax we show that the answer crucially depends on the success rate s
(i.e. a one-(s+1)-th success rule). We prove that, if the success rate is
appropriately small, the self-adjusting (1, λ) EA optimises OneMax in
O(n) expected generations and O(n log n) expected evaluations, the best
possible runtime for any unary unbiased black-box algorithm. A small success
rate is crucial: we also show that if the success rate is too large, the
algorithm has an exponential runtime on OneMax and other functions with similar
characteristics.
Comment: This is an extended version of a paper that appeared in the
Proceedings of the Genetic and Evolutionary Computation Conference (GECCO
2021)
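A rough sketch of the success-based control analysed above, with a one-(s+1)-th rule applied to λ in a non-elitist (1, λ) EA on OneMax. The value of s, the factor F, and the cap on λ are illustrative assumptions, not the constants from the analysis.

```python
import random

def onemax(x):
    return sum(x)

def self_adjusting_one_comma_lambda_ea(n, s=0.5, F=1.5, budget=500_000, seed=5):
    """Non-elitist (1,lambda) EA on OneMax with lambda controlled by a
    one-(s+1)-th success rule: after an improving generation lambda is
    divided by F, otherwise multiplied by F**(1/s).  A small s makes
    lambda grow quickly after failures; F, the cap on lambda, and the
    default s are illustrative choices."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx, lam, evals = onemax(x), 1.0, 0
    while fx < n and evals < budget:
        k = max(1, round(lam))
        gen = []
        for _ in range(k):
            y = [1 - b if rng.random() < 1 / n else b for b in x]
            gen.append((onemax(y), y))
            evals += 1
        fy, ybest = max(gen)            # best of the lambda offspring
        success = fy > fx
        x, fx = ybest, fy               # comma selection: parent is always replaced
        lam = max(1.0, lam / F) if success else min(2.0 * n, lam * F ** (1 / s))
    return fx, evals

fx, evals = self_adjusting_one_comma_lambda_ea(50)
print(fx, evals)
```

The intuition matches the result above: failures inflate λ fast enough that the non-elitist algorithm rarely loses fitness, whereas a large s (slow growth on failure) would leave λ below the sharp threshold.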
On the choice of the parameter control mechanism in the (1+(λ, λ)) genetic algorithm
The self-adjusting (1 + (λ, λ)) GA is the best known genetic algorithm for problems with a good fitness-distance correlation as in OneMax. It uses a parameter control mechanism for the parameter λ that governs the mutation strength and the number of offspring. However, on multimodal problems, the parameter control mechanism tends to increase λ uncontrollably.
We study this problem and possible solutions to it using rigorous runtime analysis for the standard Jump_k benchmark problem class. The original algorithm behaves like a (1 + n) EA whenever the maximum value λ = n is reached. This is ineffective for problems where large jumps are required. Capping λ at smaller values is beneficial for such problems. Finally, resetting λ to 1 allows the parameter to cycle through the parameter space. We show that this strategy is effective for all Jump_k problems: the (1 + (λ, λ)) GA performs as well as the (1 + 1) EA with the optimal mutation rate and fast evolutionary algorithms, apart from a small polynomial overhead.
Along the way, we present new general methods for bounding the runtime of the (1 + (λ, λ)) GA that allow us to translate existing runtime bounds from the (1 + 1) EA to the self-adjusting (1 + (λ, λ)) GA. Our methods are easy to use and give upper bounds for novel classes of functions.
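The three ways of handling the upper end of λ's range discussed above (letting λ sit at the maximum n, capping it at a smaller value, or resetting it to 1 so it cycles) can be sketched as a single hypothetical update function; F and the cap values are illustrative.

```python
def update_lam(lam, success, n, F=1.5, strategy="reset", cap=None):
    """One-fifth-style lambda update with the variants discussed above:
    on success divide by F; on failure multiply by F**(1/4) and, if the
    result exceeds the cap (n by default), either clamp it ("cap") or
    restart from 1 ("reset") so lambda cycles through its range."""
    if success:
        return max(1.0, lam / F)
    lam *= F ** 0.25
    limit = cap if cap is not None else n
    if lam > limit:
        return 1.0 if strategy == "reset" else float(limit)
    return lam

# A long failure streak (as when a large jump is required) makes the
# "reset" variant sweep lambda upwards and then restart it from 1:
lam, trace = 1.0, []
for _ in range(40):
    lam = update_lam(lam, success=False, n=16)
    trace.append(lam)
print(trace[:5], 1.0 in trace)
```

With `strategy="cap"` the same streak instead parks λ at the cap, which is the behaviour the abstract identifies as harmful when λ = n but useful at smaller cap values.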
Self-adaptation in non-elitist evolutionary algorithms on discrete problems with unknown structure
A key challenge to make effective use of evolutionary algorithms is to choose
appropriate settings for their parameters. However, the appropriate parameter
setting generally depends on the structure of the optimisation problem, which
is often unknown to the user. Non-deterministic parameter control mechanisms
adjust parameters using information obtained from the evolutionary process.
Self-adaptation -- where parameter settings are encoded in the chromosomes of
individuals and evolve through mutation and crossover -- is a popular parameter
control mechanism in evolution strategies. However, there is little
theoretical evidence that self-adaptation is effective, and self-adaptation has
largely been ignored by the discrete evolutionary computation community.
Here we show through a theoretical runtime analysis that a non-elitist,
discrete evolutionary algorithm which self-adapts its mutation rate not only
outperforms EAs which use static mutation rates on LeadingOnes, but also
improves asymptotically on an EA using a state-of-the-art control mechanism.
The structure of this problem depends on a parameter, which is a priori
unknown to the algorithm and which is needed to appropriately set a
fixed mutation rate. The self-adaptive EA achieves the same asymptotic runtime
as if this parameter was known to the algorithm beforehand, which is an
asymptotic speedup for this problem compared to all other EAs previously
studied. An experimental study of how the mutation rates evolve shows that they
respond adequately to a diverse range of problem structures.
These results suggest that self-adaptation should be adopted more broadly as
a parameter control mechanism in discrete, non-elitist evolutionary algorithms.
Comment: To appear in IEEE Transactions on Evolutionary Computation
- …