The 1/5-th Rule with Rollbacks: On Self-Adjustment of the Population Size in the (1+(λ,λ)) GA
Self-adjustment of parameters can significantly improve the performance of
evolutionary algorithms. A notable example is the (1+(λ,λ))
genetic algorithm, where the adaptation of the population size helps to achieve
linear runtime on the OneMax problem. However, on problems which interfere
with the assumptions behind the self-adjustment procedure, its usage can lead
to performance degradation compared to static parameter choices. In particular,
the one-fifth rule, which guides the adaptation in the example above, tends
to increase the population size too quickly on problems that deviate too far
from a perfect fitness-distance correlation.
We propose a modification of the one-fifth rule that reduces its negative
impact on performance in scenarios where the original rule is harmful. Our
modification, while still performing well on OneMax both theoretically and in
practice, also shows better results on linear functions with random weights
and on random satisfiable MAX-SAT instances.

Comment: 17 pages, 2 figures, 1 table. An extended two-page abstract of this
work will appear in the proceedings of the Genetic and Evolutionary Computation
Conference, GECCO'19
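As a rough illustration of the adaptation scheme discussed above, the one-fifth success rule can be sketched as follows. This is a minimal, simplified sketch, not the paper's exact algorithm: the update factor `F`, the function names, and the (1+λ)-style loop are all illustrative assumptions.

```python
import random

def onemax(x):
    # OneMax: number of one-bits; the optimum is the all-ones string.
    return sum(x)

def adaptive_ea(n=50, F=1.5, max_evals=20000, seed=1):
    """Simplified (1+lambda)-style EA with one-fifth-rule adaptation.

    Illustrative only: on a successful generation lambda shrinks by F,
    on a failure it grows by F**(1/4), so lambda stays roughly constant
    when about one in five generations succeeds.
    """
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    lam = 1.0
    evals = 0
    while onemax(x) < n and evals < max_evals:
        best = None
        for _ in range(max(1, round(lam))):
            # Standard bit mutation: flip each bit with probability 1/n.
            y = [bit ^ (rng.random() < 1.0 / n) for bit in x]
            evals += 1
            if best is None or onemax(y) > onemax(best):
                best = y
        if onemax(best) > onemax(x):
            x, lam = best, max(1.0, lam / F)        # success: shrink lambda
        else:
            lam = min(n, lam * F ** 0.25)           # failure: grow lambda
    return onemax(x), evals
```

With these settings the sketch reliably reaches the optimum on a 50-bit OneMax instance well within the evaluation budget.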
Towards a Theory-Guided Benchmarking Suite for Discrete Black-Box Optimization Heuristics: Profiling EA Variants on OneMax and LeadingOnes
Theoretical and empirical research on evolutionary computation methods
complement each other by providing two fundamentally different approaches
towards a better understanding of black-box optimization heuristics. In
discrete optimization, both streams have developed rather independently of
each other, but today we observe an increasing interest in reconciling these
two sub-branches. In continuous optimization, the COCO (COmparing Continuous
sub-branches. In continuous optimization, the COCO (COmparing Continuous
Optimisers) benchmarking suite has established itself as an important platform
that theoreticians and practitioners use to exchange research ideas and
questions. No widely accepted equivalent exists in the research domain of
discrete black-box optimization.
Marking an important step towards filling this gap, we adjust the COCO
software to pseudo-Boolean optimization problems, and obtain from this a
benchmarking environment that allows a fine-grained empirical analysis of
discrete black-box heuristics. In this documentation we demonstrate how this
test bed can be used to profile the performance of evolutionary algorithms.
More concretely, we study the optimization behavior of several (1+λ) EA
variants on the two benchmark problems OneMax and LeadingOnes. This comparison
motivates a refined analysis of the optimization time of the (1+λ) EA
on LeadingOnes.
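The two benchmark problems named above have simple closed forms, which a short sketch makes concrete (function names are illustrative; both functions are maximized at the all-ones string):

```python
def onemax(x):
    # OneMax: number of one-bits in the bit string.
    return sum(x)

def leadingones(x):
    # LeadingOnes: length of the longest all-ones prefix.
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count

onemax([1, 1, 0, 1])       # -> 3
leadingones([1, 1, 0, 1])  # -> 2
```

The key difference for runtime analysis is that OneMax rewards every one-bit, while LeadingOnes is blind to anything after the first zero.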