Towards a Theory-Guided Benchmarking Suite for Discrete Black-Box Optimization Heuristics: Profiling $(1+\lambda)$ EA Variants on OneMax and LeadingOnes
Theoretical and empirical research on evolutionary computation methods
complement each other by providing two fundamentally different approaches
towards a better understanding of black-box optimization heuristics. In
discrete optimization, both streams developed rather independently of each
other, but we observe today an increasing interest in reconciling these two
sub-branches. In continuous optimization, the COCO (COmparing Continuous
Optimisers) benchmarking suite has established itself as an important platform
that theoreticians and practitioners use to exchange research ideas and
questions. No widely accepted equivalent exists in the research domain of
discrete black-box optimization.
Marking an important step towards filling this gap, we adjust the COCO
software to pseudo-Boolean optimization problems, and obtain from this a
benchmarking environment that allows a fine-grained empirical analysis of
discrete black-box heuristics. In this documentation we demonstrate how this
test bed can be used to profile the performance of evolutionary algorithms.
More concretely, we study the optimization behavior of several $(1+\lambda)$ EA
variants on the two benchmark problems OneMax and LeadingOnes. This comparison
motivates a refined analysis for the optimization time of the $(1+\lambda)$ EA
on LeadingOnes.
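
For orientation, the following minimal sketch (in Python) illustrates the two benchmark problems and a plain $(1+\lambda)$ EA with standard bit mutation at rate $1/n$ and elitist plus-selection. It is an illustrative toy implementation under these assumptions, not the code of the benchmarking suite or of the profiled variants.

    import random

    def onemax(x):
        """OneMax(x): number of ones in the bit string; optimum is the all-ones string."""
        return sum(x)

    def leadingones(x):
        """LeadingOnes(x): length of the longest prefix of ones."""
        count = 0
        for bit in x:
            if bit != 1:
                break
            count += 1
        return count

    def one_plus_lambda_ea(f, n, lam, max_evals=100_000):
        """Plain (1+lambda) EA: lam offspring per generation, standard bit mutation with rate 1/n."""
        parent = [random.randint(0, 1) for _ in range(n)]
        best, evals = f(parent), 1
        while best < n and evals < max_evals:
            offspring = []
            for _ in range(lam):
                child = [1 - b if random.random() < 1.0 / n else b for b in parent]
                offspring.append((f(child), child))
                evals += 1
            val, child = max(offspring, key=lambda t: t[0])
            if val >= best:  # elitist plus-selection, ties resolved towards the offspring
                parent, best = child, val
        return evals

Profiling then amounts to recording, over many runs, how many evaluations a call such as one_plus_lambda_ea(leadingones, 100, lam=10) uses before it hits the optimum value $n$ or exhausts the evaluation budget.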
An Exponential Lower Bound for the Runtime of the cGA on Jump Functions
In the first runtime analysis of an estimation-of-distribution algorithm
(EDA) on the multi-modal jump function class, Hasenöhrl and Sutton (GECCO
2018) proved that the runtime of the compact genetic algorithm with suitable
parameter choice on jump functions with high probability is at most polynomial
(in the dimension) if the jump size is at most logarithmic (in the dimension),
and is at most exponential in the jump size if the jump size is
super-logarithmic. The exponential runtime guarantee was achieved with a
hypothetical population size that is also exponential in the jump size.
Consequently, this setting cannot lead to a better runtime.
In this work, we show that any choice of the hypothetical population size
leads to a runtime that, with high probability, is at least exponential in the
jump size. This result might be the first non-trivial exponential lower bound
for EDAs that holds for arbitrary parameter settings.
(To appear in the Proceedings of FOGA 2019.)
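
As background for this abstract, the sketch below gives the standard definition of the jump function with jump size $k$ and a minimal compact genetic algorithm (cGA) with hypothetical population size $K$. The frequency margins $1/n$ and $1-1/n$ and the stopping rule are common conventions assumed here for illustration; they are not claimed to be the exact setting analyzed in the paper.

    import random

    def jump(x, k):
        """Jump_k: OneMax shifted by k, with a fitness valley of width k just below the optimum."""
        n, ones = len(x), sum(x)
        if ones <= n - k or ones == n:
            return k + ones
        return n - ones

    def cga(f, n, K, max_iters=100_000):
        """Compact GA: sample two solutions per step and shift frequencies towards the winner by 1/K."""
        p = [0.5] * n
        for _ in range(max_iters):
            x = [1 if random.random() < pi else 0 for pi in p]
            y = [1 if random.random() < pi else 0 for pi in p]
            if f(x) < f(y):
                x, y = y, x  # x is now the winner
            for i in range(n):
                if x[i] != y[i]:
                    p[i] += (1.0 / K) if x[i] == 1 else (-1.0 / K)
                    p[i] = min(1 - 1.0 / n, max(1.0 / n, p[i]))  # keep frequencies inside the margins
            if all(pi >= 1 - 1.0 / n for pi in p):  # illustrative stopping rule
                break
        return p

The hypothetical population size $K$ controls the step size $1/K$ of the frequency updates; it is exactly the parameter over which the lower bound discussed above quantifies.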
Offspring Population Size Matters when Comparing Evolutionary Algorithms with Self-Adjusting Mutation Rates
We analyze the performance of the 2-rate $(1+\lambda)$ Evolutionary Algorithm
(EA) with self-adjusting mutation rate control, its 3-rate counterpart, and a
$(1+\lambda)$ EA variant using multiplicative update rules on the OneMax
problem. We compare their efficiency for a range of offspring population sizes
$\lambda$ and problem sizes $n$.
Our empirical results show that the ranking of the algorithms is very
consistent across all tested dimensions, but strongly depends on the population
size. While for small values of $\lambda$ the 2-rate EA performs best, the
multiplicative updates become superior starting at some threshold value of
$\lambda$ between 50 and 100. Interestingly, for population sizes around 50,
the $(1+\lambda)$ EA with static mutation rates performs on par with the best
of the self-adjusting algorithms.
We also consider how the lower bound on the mutation rate influences the
efficiency of the algorithms. We observe that for the 2-rate EA and the EA with
multiplicative update rules the more generous of the two lower bounds gives
better results when $\lambda$ is small. For both algorithms the situation
reverses for large $\lambda$.
(To appear at the Genetic and Evolutionary Computation Conference, GECCO'19.)
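
To make the compared mechanisms concrete, the sketch below outlines the 2-rate $(1+\lambda)$ EA idea: half of the offspring are created with mutation rate $r/(2n)$, the other half with $2r/n$, and the rate parameter is adjusted towards the value that produced the best offspring. The caps on $r$, the way the lower bound p_min is enforced, and all names are illustrative assumptions, not the exact configurations benchmarked in the paper.

    import random

    def mutate(parent, p):
        """Standard bit mutation: flip each bit independently with probability p."""
        return [1 - b if random.random() < p else b for b in parent]

    def two_rate_ea(f, n, lam, p_min, max_evals=100_000):
        """Sketch of the 2-rate (1+lambda) EA with self-adjusting rate parameter r."""
        parent = [random.randint(0, 1) for _ in range(n)]
        best, r, evals = f(parent), 2.0, 1
        while best < n and evals < max_evals:
            offspring = []
            for i in range(lam):
                rate = r / 2 if i < lam // 2 else 2 * r  # half low rate, half high rate
                prob = max(p_min, min(0.5, rate / n))    # enforce the lower bound p_min
                child = mutate(parent, prob)
                offspring.append((f(child), rate, child))
                evals += 1
            val, winning_rate, child = max(offspring, key=lambda t: t[0])
            if val >= best:
                parent, best = child, val
            # self-adjustment: adopt the winning rate with prob. 1/2, otherwise a random one
            r = winning_rate if random.random() < 0.5 else random.choice([r / 2, 2 * r])
            r = max(2.0, min(n / 4, r))                  # keep r in a sensible range
        return evals

A call such as two_rate_ea(onemax, 1000, lam=50, p_min=1.0/1000) then reports the number of evaluations spent before the optimum is found or the budget runs out.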
Benchmarking a $(\mu+\lambda)$ Genetic Algorithm with Configurable Crossover Probability
We investigate a family of $(\mu+\lambda)$ Genetic Algorithms (GAs) which
creates offspring either from mutation or by recombining two randomly chosen
parents. By scaling the crossover probability, we can thus interpolate from a
fully mutation-only algorithm towards a fully crossover-based GA. We analyze,
by empirical means, how the performance depends on the interplay of population
size and the crossover probability.
Our comparison on 25 pseudo-Boolean optimization problems reveals an
advantage of crossover-based configurations on several easy optimization tasks,
whereas the picture for more complex optimization problems is rather mixed.
Moreover, we observe that the ``fast'' mutation scheme with its power-law
distributed mutation strengths outperforms standard bit mutation on complex
optimization tasks when it is combined with crossover, but performs worse in
the absence of crossover.
We then take a closer look at the surprisingly good performance of the
crossover-based GAs on the well-known LeadingOnes benchmark
problem. We observe that the optimal crossover probability increases with
increasing population size. At the same time, it decreases with
increasing problem dimension, indicating that the advantages of crossover
are not visible in the asymptotic view classically applied in runtime analysis.
We therefore argue that a mathematical investigation for fixed dimensions might
help us observe effects which are not visible when focusing exclusively on
asymptotic performance bounds.
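
The role of the crossover probability can be pictured with a small sketch of such a GA, assuming uniform crossover, standard bit mutation at rate $1/n$, plus-selection, and an optimum value of $n$ (as for OneMax and LeadingOnes). Names such as crossover_ga and pc are chosen here for illustration and are not taken from the paper.

    import random

    def crossover_ga(f, n, mu, lam, pc, max_evals=100_000):
        """Sketch of a (mu+lambda) GA: each offspring is created by uniform crossover
        of two random parents with probability pc, and by standard bit mutation
        (rate 1/n) of one random parent otherwise."""
        pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(mu)]
        fits = [f(x) for x in pop]
        evals = mu
        while max(fits) < n and evals < max_evals:
            offspring = []
            for _ in range(lam):
                if random.random() < pc:
                    p1, p2 = random.sample(pop, 2)                      # requires mu >= 2
                    child = [random.choice(pair) for pair in zip(p1, p2)]  # uniform crossover
                else:
                    parent = random.choice(pop)
                    child = [1 - b if random.random() < 1.0 / n else b for b in parent]
                offspring.append(child)
                evals += 1
            # plus-selection: keep the mu best among parents and offspring
            pool = sorted(zip(fits + [f(x) for x in offspring], pop + offspring),
                          key=lambda t: t[0], reverse=True)[:mu]
            fits = [t[0] for t in pool]
            pop = [t[1] for t in pool]
        return evals

Setting pc = 0 gives the mutation-only algorithm and pc = 1 the fully crossover-based GA; intermediate values interpolate between the two, which is the family whose performance the abstract above compares.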