672 research outputs found
Towards a Stronger Theory for Permutation-based Evolutionary Algorithms
While the theoretical analysis of evolutionary algorithms (EAs) has made
significant progress for pseudo-Boolean optimization problems in the last 25
years, only sporadic theoretical results exist on how EAs solve
permutation-based problems.
To overcome the lack of permutation-based benchmark problems, we propose a
general way to transfer the classic pseudo-Boolean benchmarks into benchmarks
defined on sets of permutations. We then conduct a rigorous runtime analysis of
the permutation-based EA proposed by Scharnow, Tinnefeld, and Wegener
(2004) on the analogues of the \textsc{LeadingOnes} and \textsc{Jump}
benchmarks. The latter shows that, different from bit-strings, it is not only
the Hamming distance that determines how difficult it is to mutate a
permutation $\sigma$ into another one $\tau$, but also the precise cycle
structure of $\sigma\tau^{-1}$. For this reason, we also regard the more
symmetric scramble mutation operator. We observe that it not only leads to
simpler proofs, but also reduces the runtime on jump functions with odd jump
size by a factor of $\Theta(n)$. Finally, we show that a heavy-tailed version
of the scramble operator, as in the bit-string case, leads to a speed-up of
order $m^{\Theta(m)}$ on jump functions with jump size~$m$.
Comment: To appear in the proceedings of GECCO 2022. This version contains the
proofs omitted in the proceedings version for reasons of space.
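The scramble operator discussed in this abstract can be sketched as follows. The sampling of the scramble size (here $1+\mathrm{Pois}(1)$ positions) is an illustrative assumption, not necessarily the paper's exact parameterization:

```python
import math
import random

def sample_poisson(lam, rng):
    """Sample from a Poisson(lam) distribution (Knuth's method)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def scramble_mutation(perm, rng):
    """Scramble mutation sketch: pick c positions uniformly at random
    and rearrange the entries at those positions by a uniformly random
    permutation. Unlike a sequence of pairwise swaps, this treats all
    rearrangements of the chosen positions symmetrically."""
    n = len(perm)
    # Number of scrambled positions; the 1 + Poisson(1) choice is an
    # illustrative assumption, capped at n.
    c = min(n, 1 + sample_poisson(1.0, rng))
    positions = rng.sample(range(n), c)
    entries = [perm[i] for i in positions]
    rng.shuffle(entries)
    child = list(perm)
    for pos, entry in zip(positions, entries):
        child[pos] = entry
    return child
```

Since only the entries at the chosen positions are rearranged among themselves, the offspring is always again a permutation of the same ground set.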
An Extended Jump Functions Benchmark for the Analysis of Randomized Search Heuristics
Jump functions are the most-studied non-unimodal benchmark in the theory of
randomized search heuristics, in particular, evolutionary algorithms (EAs).
They have significantly improved our understanding of how EAs escape from local
optima. However, their particular structure -- to leave the local optimum one
can only jump directly to the global optimum -- raises the question of how
representative such results are.
For this reason, we propose an extended class $\textsc{Jump}_{k,\delta}$ of
jump functions that contain a valley of low fitness of width $\delta$ starting
at distance $k$ from the global optimum. We prove that several previous results
extend to this more general class: for all $k = o(n^{1/3})$ and
$\delta < k$, the optimal mutation rate for the $(1+1)$~EA is
$\delta/n$, and the fast $(1+1)$~EA runs faster than the classical
$(1+1)$~EA by a factor super-exponential in $\delta$. However, we also observe
that some known results do not generalize: the randomized local search
algorithm with stagnation detection, which is faster than the fast $(1+1)$~EA
by a factor polynomial in $k$ on $\textsc{Jump}_k$, is slower by a factor
polynomial in $n$ on some $\textsc{Jump}_{k,\delta}$ instances.
Computationally, the new class allows experiments with wider fitness valleys,
especially when they lie further away from the global optimum.
Comment: Extended version of a paper that appeared in the proceedings of GECCO
2021. To appear in Algorithmica.
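On bit strings, the extended class described in this abstract can be sketched as a fitness function. The exact boundary offsets below are an assumption consistent with the abstract's description (a valley of width $\delta$ beginning at distance $k$ from the optimum), not necessarily the paper's precise definition:

```python
def jump_k_delta(x, k, delta):
    """Sketch of an extended jump function on a bit string x.

    Fitness follows OneMax (the number of one-bits), shifted up by
    delta, except on a valley of low fitness of width delta that
    begins at distance k from the all-ones global optimum. The exact
    boundary offsets here are an illustrative assumption.
    """
    n = len(x)
    om = sum(x)  # OneMax value: number of one-bits
    if om <= n - k or om >= n - k + delta:
        return om + delta  # outside the valley: increasing with OneMax
    return n - om          # inside the valley: low, deceptive fitness
```

With delta equal to k this recovers the shape of the classical $\textsc{Jump}_k$: the valley then spans all search points with more than $n-k$ one-bits except the optimum itself.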
- …