20,799 research outputs found
Towards a Theory-Guided Benchmarking Suite for Discrete Black-Box Optimization Heuristics: Profiling $(1+\lambda)$ EA Variants on OneMax and LeadingOnes
Theoretical and empirical research on evolutionary computation methods
complement each other by providing two fundamentally different approaches
towards a better understanding of black-box optimization heuristics. In
discrete optimization, both streams developed rather independently of each
other, but we observe today an increasing interest in reconciling these two
sub-branches. In continuous optimization, the COCO (COmparing Continuous
Optimisers) benchmarking suite has established itself as an important platform
that theoreticians and practitioners use to exchange research ideas and
questions. No widely accepted equivalent exists in the research domain of
discrete black-box optimization.
Marking an important step towards filling this gap, we adjust the COCO
software to pseudo-Boolean optimization problems, and obtain from this a
benchmarking environment that allows a fine-grained empirical analysis of
discrete black-box heuristics. In this documentation we demonstrate how this
test bed can be used to profile the performance of evolutionary algorithms.
More concretely, we study the optimization behavior of several $(1+\lambda)$ EA
variants on the two benchmark problems OneMax and LeadingOnes. This comparison
motivates a refined analysis for the optimization time of the $(1+\lambda)$ EA
on LeadingOnes.
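For readers unfamiliar with the two benchmark problems, they can be stated in a few lines. The definitions below are the standard ones; the function names and code are ours, not part of the benchmarking suite:

```python
def one_max(x):
    """OneMax: the number of 1-bits in the string; maximized (with
    fitness n) by the all-ones string."""
    return sum(x)

def leading_ones(x):
    """LeadingOnes: the length of the longest prefix consisting only
    of 1-bits."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count
```

OneMax rewards every correct bit independently, while LeadingOnes only rewards bits behind an unbroken prefix of ones, so the two problems probe quite different aspects of a mutation operator.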
Runtime Analysis for Self-adaptive Mutation Rates
We propose and analyze a self-adaptive version of the $(1,\lambda)$
evolutionary algorithm in which the current mutation rate is part of the
individual and thus also subject to mutation. A rigorous runtime analysis on
the OneMax benchmark function reveals that a simple local mutation scheme for
the rate leads to an expected optimization time (number of fitness evaluations)
of $O(n\lambda/\log\lambda + n\log n)$ when $\lambda$ is at least $C \ln n$ for
some constant $C > 0$. For all such values of $\lambda$, this
performance is asymptotically best possible among all $\lambda$-parallel
mutation-based unbiased black-box algorithms.
Our result shows that self-adaptation in evolutionary computation can find
complex optimal parameter settings on the fly. At the same time, it proves that
a relatively complicated self-adjusting scheme for the mutation rate proposed
by Doerr, Gie{\ss}en, Witt, and Yang~(GECCO~2017) can be replaced by our simple
endogenous scheme.
On the technical side, the paper contributes new tools for the analysis of
two-dimensional drift processes arising in the analysis of dynamic parameter
choices in EAs, including bounds on occupation probabilities in processes with
non-constant drift.
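To make the endogenous scheme concrete, here is a minimal sketch of a (1,λ) EA on OneMax in which the mutation rate travels with the individual and is itself mutated (halved or doubled) before being used. All constants, the clipping bounds, and the tie-breaking are illustrative choices of ours, not the ones analyzed in the paper:

```python
import random

def self_adaptive_one_comma_lambda(n, lam, max_evals=100_000, seed=0):
    """Sketch of a (1,lambda) EA with a self-adaptive mutation rate on
    OneMax. Each offspring first mutates the rate carried by the parent
    (halving or doubling it, clipped to [2/n, 1/4]) and then flips each
    bit independently with that mutated rate. The best offspring becomes
    the next parent and passes its rate on (comma selection: the parent
    is always replaced). Constants here are illustrative only."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    rate = 2 / n
    evals = 0
    while sum(parent) < n and evals < max_evals:
        best, best_fit, best_rate = None, -1, rate
        for _ in range(lam):
            r = min(max(rate * rng.choice((0.5, 2.0)), 2 / n), 0.25)
            child = [b ^ (rng.random() < r) for b in parent]
            evals += 1
            fit = sum(child)
            if fit > best_fit:
                best, best_fit, best_rate = child, fit, r
        parent, rate = best, best_rate  # parent replaced even if worse
    return evals, sum(parent)
```

The point of the scheme is that no external parameter-control mechanism is needed: selection on the offspring implicitly selects for good rates as well.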
Optimal Parameter Choices Through Self-Adjustment: Applying the 1/5-th Rule in Discrete Settings
While evolutionary algorithms are known to be very successful for a broad
range of applications, the algorithm designer is often left with many
algorithmic choices, for example, the size of the population, the mutation
rates, and the crossover rates of the algorithm. These parameters are known to
have a crucial influence on the optimization time, and thus need to be chosen
carefully, a task that often requires substantial efforts. Moreover, the
optimal parameters can change during the optimization process. It is therefore
of great interest to design mechanisms that dynamically choose best-possible
parameters. An example of such an update mechanism is the one-fifth success
rule for step-size adaptation in evolution strategies. While in continuous
domains this principle is well understood also from a mathematical point of
view, no comparable theory is available for problems in discrete domains.
In this work we show that the one-fifth success rule can be effective also in
discrete settings. We regard the $(1+(\lambda,\lambda))$~GA proposed in
[Doerr/Doerr/Ebel: From black-box complexity to designing new genetic
algorithms, TCS 2015]. We prove that if its population size is chosen according
to the one-fifth success rule then the expected optimization time on
\textsc{OneMax} is linear. This is better than what \emph{any} static
population size can achieve and is asymptotically optimal also among
all adaptive parameter choices. Comment: This is the full version of a paper that is to appear at GECCO 2015.
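The one-fifth success rule itself is easy to state. The sketch below is written for a generic positive parameter (such as the population size λ); the update factor F and the bounds are illustrative defaults of ours:

```python
def one_fifth_update(value, success, F=1.5, lo=1.0, hi=None):
    """One iteration of the one-fifth success rule: divide by F on
    success, multiply by F**(1/4) on failure. If exactly one in five
    iterations succeeds, the two effects cancel, so the parameter
    stabilizes where roughly 20% of iterations are successful."""
    value = value / F if success else value * F ** 0.25
    value = max(value, lo)
    if hi is not None:
        value = min(value, hi)
    return value
```

One success followed by four failures leaves the parameter unchanged, since (1/F) * (F**(1/4))**4 = 1; more frequent successes shrink it, rarer successes grow it.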
Complexity of evolutionary equilibria in static fitness landscapes
A fitness landscape is a genetic space -- with two genotypes adjacent if they
differ in a single locus -- and a fitness function. Evolutionary dynamics
produce a flow on this landscape from lower fitness to higher; reaching
equilibrium only if a local fitness peak is found. I use computational
complexity to question the common assumption that evolution on static fitness
landscapes can quickly reach a local fitness peak. I do this by showing that
the popular NK model of rugged fitness landscapes is PLS-complete for K >= 2;
the reduction from Weighted 2SAT is a bijection on adaptive walks, so there are
NK fitness landscapes where every adaptive path from some vertices is of
exponential length. Alternatively -- under the standard complexity theoretic
assumption that there are problems in PLS not solvable in polynomial time --
this means that there are no evolutionary dynamics (known, or to be discovered,
and not necessarily following adaptive paths) that can converge to a local
fitness peak on all NK landscapes with K = 2. Applying results from the
analysis of simplex algorithms, I show that there exist single-peaked
landscapes with no reciprocal sign epistasis where the expected length of an
adaptive path following strong selection weak mutation dynamics is
superpolynomial in n, even though an adaptive path to the optimum of length less
than n is available from every vertex. The technical results are written to be
accessible to mathematical biologists without a computer science background,
and the biological literature is summarized for the convenience of
non-biologists with the aim to open a constructive dialogue between the two
disciplines. Comment: 14 pages, 3 figures
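For non-biologists, here is a compact sketch of the NK model and of a greedy adaptive walk on it. The cyclic neighborhood structure and the lazily drawn contribution tables are common modelling conveniences; the function names are ours:

```python
import random

def nk_landscape(n, k, seed=0):
    """Random NK model: gene i's fitness contribution depends on its own
    allele and those of its k right neighbours (cyclically); the fitness
    of a genotype is the mean of the n contributions. Contribution values
    are drawn uniformly at random, lazily, and then fixed."""
    rng = random.Random(seed)
    tables = [{} for _ in range(n)]

    def fitness(x):
        total = 0.0
        for i in range(n):
            key = tuple(x[(i + j) % n] for j in range(k + 1))
            if key not in tables[i]:
                tables[i][key] = rng.random()  # drawn once, then fixed
            total += tables[i][key]
        return total / n
    return fitness

def adaptive_walk(fitness, x):
    """Greedy adaptive walk: repeatedly move to the first strictly
    fitter one-bit neighbour; terminates exactly at a local peak."""
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            y = x.copy()
            y[i] ^= 1
            if fitness(y) > fitness(x):
                x, improved = y, True
                break
    return x
```

Every accepted step strictly increases fitness, so the walk always terminates at a local peak; the point of the PLS-completeness result above is that for K >= 2 such walks (and, under standard assumptions, any polynomial-time dynamics) can need very long to get there.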
Offspring Population Size Matters when Comparing Evolutionary Algorithms with Self-Adjusting Mutation Rates
We analyze the performance of the 2-rate $(1+\lambda)$ Evolutionary Algorithm
(EA) with self-adjusting mutation rate control, its 3-rate counterpart, and a
$(1+\lambda)$~EA variant using multiplicative update rules on the OneMax
problem. We compare their efficiency for a broad range of offspring population
sizes $\lambda$ and problem sizes $n$.
Our empirical results show that the ranking of the algorithms is very
consistent across all tested dimensions, but strongly depends on the population
size. While for small values of $\lambda$ the 2-rate EA performs best, the
multiplicative updates become superior starting at some threshold value of
$\lambda$ between 50 and 100. Interestingly, for population sizes around 50,
the $(1+\lambda)$~EA with static mutation rates performs on par with the best
of the self-adjusting algorithms.
We also consider how the lower bound imposed on the mutation rate
influences the efficiency of the algorithms. We observe that for the 2-rate EA
and the EA with multiplicative update rules the more generous lower bound
gives better results than the stricter one when $\lambda$ is
small. For both algorithms the situation reverses for large~$\lambda$. Comment: To appear at Genetic and Evolutionary Computation Conference
(GECCO'19). v2: minor language revision.
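A sketch of the 2-rate scheme on OneMax may help fix ideas: half of the offspring mutate with rate r/2, the other half with 2r, and the rate is updated towards the one used by the winning offspring. The update, clipping, and tie-breaking details below are our own simplifications of the commonly described scheme, not the exact variant benchmarked in the paper:

```python
import random

def two_rate_ea(n, lam, lower_bound=None, max_evals=200_000, seed=0):
    """Sketch of a 2-rate (1+lambda) EA on OneMax. In each generation,
    the first half of the offspring use mutation rate r/2 and the second
    half use 2r. With probability 1/2 the new rate is chosen uniformly
    from {r/2, 2r}; otherwise the rate of the best offspring is adopted.
    The rate is clipped to [lower_bound, 1/4]. Plus selection: the
    parent survives unless an offspring is at least as fit."""
    rng = random.Random(seed)
    lb = lower_bound if lower_bound is not None else 1 / n
    x = [rng.randint(0, 1) for _ in range(n)]
    r = 2 / n
    evals = 0
    while sum(x) < n and evals < max_evals:
        best, best_fit, best_r = x, sum(x), r
        for j in range(lam):
            rj = r / 2 if j < lam // 2 else 2 * r
            y = [b ^ (rng.random() < rj) for b in x]
            evals += 1
            if sum(y) >= best_fit:
                best, best_fit, best_r = y, sum(y), rj
        x = best
        r = rng.choice((r / 2, 2 * r)) if rng.random() < 0.5 else best_r
        r = min(max(r, lb), 0.25)
    return evals, sum(x)
```

The `lower_bound` parameter corresponds to the lower bound on the mutation rate discussed in the abstract: a smaller bound permits finer rate adjustments near the optimum, at the cost of slower recovery when the rate has drifted too low.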