
    The (1+(λ,λ)) Genetic Algorithm on the Vertex Cover Problem: Crossover Helps Leaving Plateaus

    Towards a Stronger Theory for Permutation-based Evolutionary Algorithms

    While the theoretical analysis of evolutionary algorithms (EAs) has made significant progress for pseudo-Boolean optimization problems in the last 25 years, only sporadic theoretical results exist on how EAs solve permutation-based problems. To overcome the lack of permutation-based benchmark problems, we propose a general way to transfer the classic pseudo-Boolean benchmarks into benchmarks defined on sets of permutations. We then conduct a rigorous runtime analysis of the permutation-based $(1+1)$ EA proposed by Scharnow, Tinnefeld, and Wegener (2004) on the analogues of the \textsc{LeadingOnes} and \textsc{Jump} benchmarks. The latter shows that, different from bit-strings, it is not only the Hamming distance that determines how difficult it is to mutate a permutation $\sigma$ into another one $\tau$, but also the precise cycle structure of $\sigma\tau^{-1}$. For this reason, we also regard the more symmetric scramble mutation operator. We observe that it not only leads to simpler proofs, but also reduces the runtime on jump functions with odd jump size by a factor of $\Theta(n)$. Finally, we show that a heavy-tailed version of the scramble operator, as in the bit-string case, leads to a speed-up of order $m^{\Theta(m)}$ on jump functions with jump size $m$.
    Comment: To appear in the proceedings of GECCO 2022. This version contains the proofs omitted in the proceedings version for reasons of space.
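    To make the algorithm discussed in this abstract concrete, here is a minimal Python sketch of a permutation-based (1+1) EA with the two mutation operators mentioned above. The Poisson-distributed number of exchanges follows the usual convention for this algorithm; the set-size distribution used for the scramble operator below is an assumption for illustration, not necessarily the paper's exact definition.

```python
import math
import random

def poisson(lam=1.0, rng=random):
    # Sample Poisson(lam) via Knuth's multiplication method.
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

def exchange_mutation(perm, rng=random):
    # Apply s+1 random transpositions, s ~ Poisson(1), the usual
    # convention for the Scharnow-Tinnefeld-Wegener (1+1) EA.
    child = list(perm)
    for _ in range(poisson(1.0, rng) + 1):
        i, j = rng.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
    return child

def scramble_mutation(perm, rng=random):
    # Scramble: uniformly re-permute the entries at a random set of
    # positions. Set size s+2 with s ~ Poisson(1) is an assumption.
    child = list(perm)
    k = min(len(child), poisson(1.0, rng) + 2)
    positions = rng.sample(range(len(child)), k)
    values = [child[p] for p in positions]
    rng.shuffle(values)
    for p, v in zip(positions, values):
        child[p] = v
    return child

def one_plus_one_ea(n, fitness, mutate, budget=10_000, rng=random):
    # Standard (1+1) EA: keep the child if it is at least as good.
    parent = list(range(n))
    rng.shuffle(parent)
    fp = fitness(parent)
    for _ in range(budget):
        child = mutate(parent, rng)
        fc = fitness(child)
        if fc >= fp:
            parent, fp = child, fc
    return parent, fp

# Toy usage: maximize the number of positions already in sorted order.
best, f = one_plus_one_ea(
    10, lambda p: sum(i == v for i, v in enumerate(p)), exchange_mutation)
```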

    An Extended Jump Functions Benchmark for the Analysis of Randomized Search Heuristics

    Jump functions are the most-studied non-unimodal benchmark in the theory of randomized search heuristics, in particular, evolutionary algorithms (EAs). They have significantly improved our understanding of how EAs escape from local optima. However, their particular structure -- to leave the local optimum one can only jump directly to the global optimum -- raises the question of how representative such results are. For this reason, we propose an extended class \textsc{Jump}$_{k,\delta}$ of jump functions that contain a valley of low fitness of width $\delta$ starting at distance $k$ from the global optimum. We prove that several previous results extend to this more general class: for all $k \le \frac{n^{1/3}}{\ln n}$ and $\delta < k$, the optimal mutation rate for the $(1+1)$ EA is $\frac{\delta}{n}$, and the fast $(1+1)$ EA runs faster than the classical $(1+1)$ EA by a factor super-exponential in $\delta$. However, we also observe that some known results do not generalize: the randomized local search algorithm with stagnation detection, which is faster than the fast $(1+1)$ EA by a factor polynomial in $k$ on \textsc{Jump}$_k$, is slower by a factor polynomial in $n$ on some \textsc{Jump}$_{k,\delta}$ instances. Computationally, the new class allows experiments with wider fitness valleys, especially when they lie further away from the global optimum.
    Comment: Extended version of a paper that appeared in the proceedings of GECCO 2021. To appear in Algorithmica.
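    For concreteness, the following is a minimal Python sketch of the extended benchmark described above. The exact fitness offset outside the valley follows one common convention for jump functions and is an assumption; the valley is placed at ones-counts strictly between n-k and n-k+delta, so that Jump_{k,k} recovers the classic Jump_k.

```python
def jump_k_delta(x, k, delta):
    # Extended jump function Jump_{k,delta} on a bit string x (sketch).
    # Assumed convention: fitness grows with the number of ones, except
    # in a low-fitness valley of width delta starting at Hamming
    # distance k from the all-ones optimum.
    n = len(x)
    ones = sum(x)
    if ones <= n - k or ones >= n - k + delta:
        return delta + ones   # gradient towards the optimum
    return n - ones           # the fitness valley

def jump_k(x, k):
    # Classic Jump_k as the special case delta = k (assumed equivalence).
    return jump_k_delta(x, k, k)

# Example: for n = 20, k = 5, delta = 2, the local optimum at 15 ones
# has fitness 17, the valley point at 16 ones has fitness 4, and the
# first point past the valley (17 ones) has fitness 19, so crossing
# the valley requires flipping exactly delta = 2 zero-bits.
```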