Evolutionary Algorithms with Self-adjusting Asymmetric Mutation
Evolutionary Algorithms (EAs) and other randomized search heuristics are
often considered unbiased algorithms that are invariant with respect to
transformations of the underlying search space. However, if a certain
amount of domain knowledge is available, the use of biased search operators in
EAs becomes viable. We consider a simple (1+1) EA for binary search spaces and
analyze an asymmetric mutation operator that can treat zero- and one-bits
differently. This operator extends previous work by Jansen and Sudholt (ECJ
18(1), 2010) by allowing the operator asymmetry to vary according to the
success rate of the algorithm. Using a self-adjusting scheme that learns an
appropriate degree of asymmetry, we show improved runtime results on the class
of OneMax functions, which count the number of bits matching a fixed
target.

Comment: 16 pages. An extended abstract of this paper will be published in the
proceedings of PPSN 202
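The mechanism described above can be sketched in code. The flip probabilities and the multiplicative update of the asymmetry parameter `r` below are illustrative assumptions, not the paper's exact operator or self-adjusting scheme:

```python
import random

def asymmetric_mutation(x, r):
    # Sketch of an asymmetric operator: flip each 0-bit with probability
    # r / (2 * z) and each 1-bit with probability (2 - r) / (2 * o), where
    # z and o count the zero- and one-bits; r = 1 recovers a symmetric
    # special case. (Assumed probabilities, not the paper's exact operator.)
    z = max(x.count(0), 1)
    o = max(x.count(1), 1)
    y = list(x)
    for i, b in enumerate(x):
        p = r / (2 * z) if b == 0 else (2 - r) / (2 * o)
        if random.random() < min(1.0, p):
            y[i] = 1 - b
    return y

def one_max(x, target):
    # Number of bits agreeing with a fixed target string.
    return sum(int(a == b) for a, b in zip(x, target))

def run_ea(n=50, max_steps=100_000, seed=1):
    # (1+1) EA whose asymmetry parameter r is adjusted multiplicatively
    # after successes/failures (an illustrative update rule, not the
    # scheme analyzed in the paper). Returns the steps until the optimum.
    random.seed(seed)
    target = [1] * n
    x = [random.randint(0, 1) for _ in range(n)]
    r = 1.0
    for t in range(max_steps):
        if one_max(x, target) == n:
            return t
        y = asymmetric_mutation(x, r)
        if one_max(y, target) > one_max(x, target):
            x, r = y, min(1.9, r * 1.2)   # success: reinforce asymmetry
        elif one_max(y, target) == one_max(x, target):
            x = y
        else:
            r = max(0.1, r / 1.05)        # failure: relax toward symmetry
    return max_steps
```

With an all-ones target, successes push `r` upward, so zero-bits are flipped more aggressively than one-bits, which is the intuition behind learning a degree of asymmetry from the success rate.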
Self-Adjusting Evolutionary Algorithms for Multimodal Optimization
Recent theoretical research has shown that self-adjusting and self-adaptive
mechanisms can provably outperform static settings in evolutionary algorithms
for binary search spaces. However, the vast majority of these studies focuses
on unimodal functions which do not require the algorithm to flip several bits
simultaneously to make progress. In fact, existing self-adjusting algorithms
are not designed to detect local optima and offer no obvious benefit for
crossing large Hamming gaps.
We suggest a mechanism called stagnation detection that can be added as a
module to existing evolutionary algorithms (both with and without prior
self-adjusting algorithms). Added to a simple (1+1) EA, we prove an expected
runtime on the well-known Jump benchmark that corresponds to an asymptotically
optimal parameter setting and outperforms other mechanisms for multimodal
optimization like heavy-tailed mutation. We also investigate the module in the
context of a self-adjusting (1+λ) EA and show that it combines the
previous benefits of this algorithm on unimodal problems with more efficient
multimodal optimization.
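The stagnation-detection idea can be sketched as follows: count unsuccessful steps at mutation strength s and, once a threshold on the order of (en/s)^s ln n is exceeded, raise s so that larger Hamming gaps become reachable. The threshold constant, the acceptance rule, and the Jump parameters below are illustrative assumptions rather than the paper's exact setting:

```python
import math
import random

def jump(x, k):
    # Jump_k benchmark: OneMax plus a fitness valley of width k just
    # below the all-ones optimum (maximum fitness n + k).
    n, m = len(x), sum(x)
    return k + m if (m == n or m <= n - k) else n - m

def sd_ea(n=20, k=2, seed=0):
    # (1+1) EA with a stagnation-detection module (sketch): after roughly
    # (e*n/s)**s * ln(n) unsuccessful steps at strength s, increase s by
    # one so larger Hamming gaps can be crossed. The exact threshold in
    # the paper differs; this constant is an illustrative assumption.
    random.seed(seed)
    x = [random.randint(0, 1) for _ in range(n)]
    s, fails, evals = 1, 0, 0
    while jump(x, k) < n + k:
        # standard-bit mutation with rate s/n
        y = [1 - b if random.random() < s / n else b for b in x]
        evals += 1
        if jump(y, k) > jump(x, k):
            x, s, fails = y, 1, 0                    # success: reset strength
        else:
            fails += 1
            if fails > (math.e * n / s) ** s * math.log(n):
                s, fails = min(s + 1, n // 2), 0     # stagnation: escalate
    return evals
```

At the local optimum with n − k one-bits, strength s = 1 keeps failing, the counter trips the threshold, and the raised strength makes flipping exactly the k remaining zero-bits far more likely, which is why the threshold is calibrated so that an improvement at the current strength would very likely have been found before escalating.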
To explore the limitations of the approach, we additionally present an
example where neither of the self-adjusting mechanisms, including stagnation
detection, helps to find a beneficial setting of the mutation rate. Finally,
we investigate our stagnation detection module experimentally.

Comment: 26 pages. Full version of a paper appearing at GECCO 202