15 research outputs found

    Optimizing Monotone Functions Can Be Difficult

    Extending previous analyses on function classes like linear functions, we analyze how the simple (1+1) evolutionary algorithm optimizes pseudo-Boolean functions that are strictly monotone. Contrary to what one would expect, not all of these functions are easy to optimize. The choice of the constant c in the mutation probability p(n) = c/n can make a decisive difference. We show that if c < 1, then the (1+1) EA finds the optimum of every such function in Θ(n log n) iterations. For c = 1, we can still prove an upper bound of O(n^{3/2}). However, for c > 33, we present a strictly monotone function such that the (1+1) EA with overwhelming probability does not find the optimum within 2^{Ω(n)} iterations. This is the first time that we observe that a constant-factor change of the mutation probability changes the runtime by more than constant factors.
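
    For readers who want to experiment, the following is a minimal Python sketch of the (1+1) EA with standard bit mutation at rate p(n) = c/n, the algorithm analyzed above. The fitness function (OneMax), the parameter values, and the stopping criterion are illustrative assumptions, not taken from the paper.

    import random

    def one_plus_one_ea(f, n, c=0.9, max_iters=100_000):
        # (1+1) EA: one parent, one offspring per iteration, elitist acceptance.
        p = c / n                                     # standard bit mutation rate p(n) = c/n
        x = [random.randint(0, 1) for _ in range(n)]  # uniform random initial search point
        fx = f(x)
        for t in range(1, max_iters + 1):
            y = [b ^ (random.random() < p) for b in x]  # flip each bit independently with prob. p
            fy = f(y)
            if fy >= fx:                              # accept the offspring if it is at least as good
                x, fx = y, fy
            if fx == n:                               # assumes the optimum value is n (true for OneMax)
                return x, t
        return x, max_iters

    def onemax(bits):                                 # strictly monotone benchmark: number of one-bits
        return sum(bits)

    best, iterations = one_plus_one_ea(onemax, n=100, c=0.9)
    print(iterations, sum(best))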

    Lower Bounds for Non-Elitist Evolutionary Algorithms via Negative Multiplicative Drift

    A decent number of lower bounds for non-elitist population-based evolutionary algorithms have been shown by now. Most of them are technically demanding due to the (hard to avoid) use of negative drift theorems -- general results which translate an expected movement away from the target into a high hitting time. We propose a simple negative drift theorem for multiplicative drift scenarios and show that it can simplify existing analyses. We discuss in more detail Lehre's (PPSN 2010) "negative drift in populations" method, one of the most general tools to prove lower bounds on the runtime of non-elitist mutation-based evolutionary algorithms for discrete search spaces. Together with other arguments, we obtain an alternative and simpler proof of this result, which also strengthens and simplifies the method. In particular, only three of the five technical conditions of the previous result now have to be verified. The lower bounds we obtain are explicit rather than only asymptotic. This makes it possible to compute concrete lower bounds for concrete algorithms, and it also enables us to show that super-polynomial runtimes appear already when the reproduction rate is only a (1 - ω(n^{-1/2})) factor below the threshold. For the special case of algorithms using standard bit mutation with a random mutation rate (called uniform mixing in the language of hyper-heuristics), we prove the result stated by Dang and Lehre (PPSN 2016) and extend it to mutation rates other than Θ(1/n), which includes the heavy-tailed mutation operator proposed by Doerr, Le, Makhmara, and Nguyen (GECCO 2017). We finally use our method and a novel domination argument to show an exponential lower bound for the runtime of the mutation-only simple genetic algorithm on OneMax for arbitrary population size.
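
    As a rough illustration of the kind of algorithm these lower bounds address, the sketch below implements standard bit mutation with a randomly drawn, heavy-tailed mutation rate (uniform mixing over rates) inside a non-elitist (mu, lambda)-style generation step. The power-law exponent, the rate range, and the selection scheme are assumptions chosen for illustration; they follow the cited operators only in spirit.

    import random

    def heavy_tailed_rate(n, beta=1.5):
        # Draw alpha in {1, ..., n//2} with Pr[alpha = i] proportional to i^(-beta),
        # then mutate at rate alpha/n (a heavy-tailed alternative to a fixed rate such as 1/n).
        support = list(range(1, n // 2 + 1))
        weights = [i ** (-beta) for i in support]
        alpha = random.choices(support, weights=weights)[0]
        return alpha / n

    def mutate(x, rate):
        # Standard bit mutation: flip each bit independently with the given rate.
        return [b ^ (random.random() < rate) for b in x]

    def non_elitist_generation(population, f, n, lam):
        # (mu, lambda)-style step: lam offspring, each with its own random mutation rate;
        # truncation selection acts on the offspring only, so parents are never preserved.
        offspring = [mutate(random.choice(population), heavy_tailed_rate(n)) for _ in range(lam)]
        offspring.sort(key=f, reverse=True)
        return offspring[:len(population)]

    # Example: a few non-elitist generations on OneMax (fitness = number of one-bits).
    n, mu, lam = 50, 10, 40
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(mu)]
    for _ in range(20):
        pop = non_elitist_generation(pop, sum, n, lam)
    print(max(sum(x) for x in pop))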