
    An Exponential Lower Bound for the Runtime of the cGA on Jump Functions

    In the first runtime analysis of an estimation-of-distribution algorithm (EDA) on the multi-modal jump function class, Hasenöhrl and Sutton (GECCO 2018) proved that the runtime of the compact genetic algorithm with a suitable parameter choice on jump functions is, with high probability, at most polynomial (in the dimension) if the jump size is at most logarithmic (in the dimension), and at most exponential in the jump size if the jump size is super-logarithmic. The exponential runtime guarantee was achieved with a hypothetical population size that is itself exponential in the jump size; consequently, this parameter setting cannot lead to a better runtime. In this work, we show that every choice of the hypothetical population size leads to a runtime that, with high probability, is at least exponential in the jump size. This result might be the first non-trivial exponential lower bound for EDAs that holds for arbitrary parameter settings. Comment: To appear in the Proceedings of FOGA 2019. arXiv admin note: text overlap with arXiv:1903.1098
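    Since the result turns entirely on how the cGA's frequency vector drifts, a minimal sketch of the algorithm and the jump function may help. The jump definition below is the classic Droste-Jansen-Wegener formulation; the iteration cap and the convergence test are illustrative additions, not part of the analyzed algorithm.

```python
import random

def jump(x, k):
    # Classic jump function: OneMax plus a width-k valley of low fitness
    # just below the all-ones optimum (optimum value is n + k).
    n, s = len(x), sum(x)
    if s <= n - k or s == n:
        return k + s
    return n - s

def cga(n, mu, fitness, max_iters=10**6):
    # Compact GA: keep a frequency vector p, sample two offspring per
    # iteration, and shift each differing frequency by 1/mu toward the
    # winner, clamped to the usual borders [1/n, 1 - 1/n].
    p = [0.5] * n
    for _ in range(max_iters):
        x = [int(random.random() < pi) for pi in p]
        y = [int(random.random() < pi) for pi in p]
        if fitness(x) < fitness(y):
            x, y = y, x                       # x is now the better sample
        for i in range(n):
            if x[i] != y[i]:
                shift = 1 / mu if x[i] == 1 else -1 / mu
                p[i] = min(1 - 1 / n, max(1 / n, p[i] + shift))
        if all(pi >= 1 - 1 / n for pi in p):  # frequencies have converged
            return p
    return p
```

    The lower bound of this paper says that no choice of `mu` in this loop escapes a runtime at least exponential in the jump size $k$, with high probability.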

    Fourier Analysis Meets Runtime Analysis: Precise Runtimes on Plateaus

    We propose a new method based on discrete Fourier analysis to analyze the time evolutionary algorithms spend on plateaus. This immediately gives a concise proof of the classic estimate of the expected runtime of the $(1+1)$ evolutionary algorithm on the Needle problem due to Garnier, Kallel, and Schoenauer (1999). We also use this method to analyze the runtime of the $(1+1)$ evolutionary algorithm on a new benchmark consisting of $n/\ell$ plateaus of effective size $2^\ell-1$ which have to be optimized sequentially in a LeadingOnes fashion. Using our new method, we determine the precise expected runtime both for static and fitness-dependent mutation rates. We also determine the asymptotically optimal static and fitness-dependent mutation rates. For $\ell = o(n)$, the optimal static mutation rate is approximately $1.59/n$. The optimal fitness-dependent mutation rate, once the first $k$ fitness-relevant bits have been found, is asymptotically $1/(k+1)$. These results, so far only proven for the single-instance problem LeadingOnes, thus hold in a much broader setting. We expect similar extensions to be true for other important results on LeadingOnes. We are also optimistic that our Fourier analysis approach can be applied to other plateau problems as well. Comment: 40 pages
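    The benchmark and the algorithm can be made concrete with a short sketch. The encoding below is one plausible reading of "$n/\ell$ plateaus optimized in a LeadingOnes fashion" (the paper's exact definition may differ), and the values of `n` and `ell` in the usage lines are illustrative.

```python
import random

def plateau_leading_blocks(x, ell):
    # One plausible reading of the benchmark: fitness is the number of
    # leading length-ell blocks consisting only of ones, so each unfinished
    # block is a Needle-style plateau of 2^ell - 1 equally fit points.
    f = 0
    for b in range(len(x) // ell):
        if all(x[b * ell:(b + 1) * ell]):
            f += 1
        else:
            break
    return f

def one_plus_one_ea(x0, fitness, rate, iters=10**6):
    # (1+1) EA: flip each bit independently with probability `rate` and
    # accept the offspring if it is at least as fit as the parent.
    x, fx = x0[:], fitness(x0)
    for _ in range(iters):
        y = [b ^ (random.random() < rate) for b in x]
        fy = fitness(y)
        if fy >= fx:
            x, fx = y, fy
    return x, fx

# With the abstract's asymptotically optimal static rate of about 1.59/n:
n, ell = 120, 4
x0 = [random.randint(0, 1) for _ in range(n)]
best, f = one_plus_one_ea(x0, lambda z: plateau_leading_blocks(z, ell),
                          rate=1.59 / n)
```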

    Runtime Analysis of the $(1+(\lambda,\lambda))$ Genetic Algorithm on Random Satisfiable 3-CNF Formulas

    The $(1+(\lambda,\lambda))$ genetic algorithm, first proposed at GECCO 2013, showed a surprisingly good performance on some optimization problems. The theoretical analysis so far was restricted to the OneMax test function, where this GA profited from the perfect fitness-distance correlation. In this work, we conduct a rigorous runtime analysis of this GA on random 3-SAT instances in the planted solution model having at least logarithmic average degree, which are known to have a weaker fitness-distance correlation. We prove that this GA with a fixed but not too large population size again obtains runtimes better than $\Theta(n \log n)$, which is a lower bound for most evolutionary algorithms on pseudo-Boolean problems with unique optimum. However, the self-adjusting version of the GA risks reaching population sizes at which the intermediate selection of the GA, due to the weaker fitness-distance correlation, is not able to distinguish a profitable offspring from others. We show that this problem can be overcome by equipping the self-adjusting GA with an upper limit for the population size. Apart from sparse instances, this limit can be chosen in a way that the asymptotic performance does not worsen compared to the idealistic OneMax case. Overall, this work shows that the $(1+(\lambda,\lambda))$ GA can provably have a good performance on combinatorial search and optimization problems also in the presence of a weaker fitness-distance correlation. Comment: An extended abstract of this report will appear in the proceedings of the 2017 Genetic and Evolutionary Computation Conference (GECCO 2017)
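    For readers unfamiliar with the algorithm, here is a minimal sketch of the fixed-$\lambda$ variant with the standard parameter coupling, mutation probability $p = \lambda/n$ and crossover bias $c = 1/\lambda$; population-size control is omitted.

```python
import random

def one_plus_ll_ga(n, fitness, lam, max_iters=10**5):
    # (1+(lambda,lambda)) GA, fixed-lambda variant.
    lam = max(1, int(lam))
    x = [random.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    for _ in range(max_iters):
        # Mutation phase: draw one strength ell ~ Bin(n, lam/n), create
        # lam mutants by flipping exactly ell positions each, keep the best.
        ell = sum(random.random() < lam / n for _ in range(n))
        mutants = []
        for _ in range(lam):
            y = x[:]
            for i in random.sample(range(n), ell):
                y[i] ^= 1
            mutants.append(y)
        xprime = max(mutants, key=fitness)
        # Crossover phase: lam biased crossovers between x and the best
        # mutant, taking each bit from the mutant with probability 1/lam.
        offspring = [[xp if random.random() < 1 / lam else xo
                      for xo, xp in zip(x, xprime)]
                     for _ in range(lam)]
        ybest = max(offspring, key=fitness)
        # Elitist selection: the winner replaces the parent if not worse.
        fy = fitness(ybest)
        if fy >= fx:
            x, fx = ybest, fy
    return x, fx
```

    The self-adjusting variant analyzed in the abstract would additionally adapt `lam` by a success rule after each iteration; the paper's fix is to cap this adaptation at an upper limit so that intermediate selection can still distinguish a profitable offspring.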

    A Tight Runtime Analysis for the cGA on Jump Functions: EDAs Can Cross Fitness Valleys at No Extra Cost

    We prove that the compact genetic algorithm (cGA) with hypothetical population size $\mu = \Omega(\sqrt n \log n) \cap \text{poly}(n)$ with high probability finds the optimum of any $n$-dimensional jump function with jump size $k < \frac{1}{20} \ln n$ in $O(\mu \sqrt n)$ iterations. Since it is known that the cGA with high probability needs at least $\Omega(\mu \sqrt n + n \log n)$ iterations to optimize the unimodal OneMax function, our result shows that the cGA, in contrast to most classic evolutionary algorithms, is here able to cross moderate-sized valleys of low fitness at no extra cost. Our runtime guarantee improves over the recent upper bound $O(\mu n^{1.5} \log n)$, valid for $\mu = \Omega(n^{3.5+\varepsilon})$, of Hasenöhrl and Sutton (GECCO 2018). For the best choice of the hypothetical population size, their result gives a runtime guarantee of $O(n^{5+\varepsilon})$, whereas ours gives $O(n \log n)$. We also provide a simple general method based on parallel runs that, under mild conditions, (i) overcomes the need to specify a suitable population size, yet gives a performance close to the one stemming from the best-possible population size, and (ii) transforms EDAs with high-probability performance guarantees into EDAs with similar bounds on the expected runtime. Comment: 25 pages, full version of a paper to appear at GECCO 201
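    The parallel-runs method is described only at a high level in the abstract; the sketch below shows one plausible scheduling (the paper's may well differ): exponentially spaced population sizes advanced round-robin, so that a suitable $\mu$ is eventually launched and the total work exceeds that of the best fixed $\mu$ by roughly a logarithmic factor. The `target` parameter and the power-of-two launch rule are illustrative assumptions.

```python
import random

def cga_stepper(n, mu, fitness):
    # The cGA loop from the sketch above, rewritten as a generator that
    # yields the best fitness sampled so far after each iteration.
    p = [0.5] * n
    best = float("-inf")
    while True:
        x = [int(random.random() < pi) for pi in p]
        y = [int(random.random() < pi) for pi in p]
        if fitness(x) < fitness(y):
            x, y = y, x
        best = max(best, fitness(x))
        for i in range(n):
            if x[i] != y[i]:
                shift = 1 / mu if x[i] == 1 else -1 / mu
                p[i] = min(1 - 1 / n, max(1 / n, p[i] + shift))
        yield best

def parallel_runs(n, fitness, target, max_rounds=10**6):
    # Launch a new instance with the next power-of-two mu whenever the
    # round counter hits a power of two; advance every live instance by
    # one iteration per round; stop once any instance samples `target`.
    # Only O(log) instances are ever live, so the total work exceeds that
    # of the best fixed mu by roughly a logarithmic factor.
    runs, steps = [], 0
    for r in range(1, max_rounds + 1):
        if r & (r - 1) == 0:
            runs.append(cga_stepper(n, 2 ** (len(runs) + 1), fitness))
        for gen in runs:
            steps += 1
            if next(gen) >= target:
                return steps   # total iterations across all instances
    return None
```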

    Probabilistic Tools for the Analysis of Randomized Optimization Heuristics

    This chapter collects several probabilistic tools that proved to be useful in the analysis of randomized search heuristics. This includes classic material like the Markov, Chebyshev, and Chernoff inequalities, but also lesser known topics like stochastic domination and coupling, or Chernoff bounds for geometrically distributed random variables and for negatively correlated random variables. Most of the results presented here have appeared previously, some, however, only in recent conference publications. While the focus is on collecting tools for the analysis of randomized search heuristics, many of these may be useful as well in the analysis of classic randomized algorithms or discrete random structures. Comment: 91 pages
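    For concreteness, two bounds from this toolbox that recur throughout the runtime analyses above are the multiplicative Chernoff bounds, stated here in one common form (the chapter collects many variants): for a sum $X = X_1 + \dots + X_m$ of independent random variables with values in $[0,1]$ and $\mu = \mathrm{E}[X]$,

```latex
\Pr[X \ge (1+\delta)\mu] \le \left(\frac{e^{\delta}}{(1+\delta)^{1+\delta}}\right)^{\mu}
  \quad\text{for all } \delta \ge 0,
\qquad
\Pr[X \le (1-\delta)\mu] \le e^{-\delta^{2}\mu/2}
  \quad\text{for } 0 \le \delta \le 1.
```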

    Upper Bounds on the Runtime of the Univariate Marginal Distribution Algorithm on OneMax

    A runtime analysis of the Univariate Marginal Distribution Algorithm (UMDA) is presented on the OneMax function for wide ranges of its parameters $\mu$ and $\lambda$. If $\mu \ge c\log n$ for some constant $c>0$ and $\lambda=(1+\Theta(1))\mu$, a general bound $O(\mu n)$ on the expected runtime is obtained. This bound crucially assumes that all marginal probabilities of the algorithm are confined to the interval $[1/n, 1-1/n]$. If $\mu \ge c'\sqrt{n}\log n$ for a constant $c'>0$ and $\lambda=(1+\Theta(1))\mu$, the behavior of the algorithm changes and the bound on the expected runtime becomes $O(\mu\sqrt{n})$, which typically even holds if the borders on the marginal probabilities are omitted. The results supplement the recently derived lower bound $\Omega(\mu\sqrt{n}+n\log n)$ by Krejca and Witt (FOGA 2017) and turn out to be tight for the two very different values $\mu=c\log n$ and $\mu=c'\sqrt{n}\log n$. They also improve the previously best known upper bound $O(n\log n\log\log n)$ by Dang and Lehre (GECCO 2015). Comment: Version 4: added illustrations and experiments; improved presentation in Section 2.2; to appear in Algorithmica; the final publication is available at Springer via http://dx.doi.org/10.1007/s00453-018-0463-
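    A minimal sketch of the UMDA under the abstract's conventions may make the role of the borders concrete; the stopping test is an illustrative simplification. The interval $[1/n, 1-1/n]$ from the first bound appears as the clamping step:

```python
import random

def umda(n, mu, lam, fitness, max_iters=10**5):
    # UMDA: sample lam individuals from the product distribution given by
    # the frequency vector p, select the mu fittest, and set each p[i] to
    # the frequency of ones at position i among the selected, clamped to
    # the borders [1/n, 1 - 1/n] discussed in the abstract.
    p = [0.5] * n
    for _ in range(max_iters):
        pop = [[int(random.random() < pi) for pi in p] for _ in range(lam)]
        pop.sort(key=fitness, reverse=True)
        for i in range(n):
            freq = sum(ind[i] for ind in pop[:mu]) / mu
            p[i] = min(1 - 1 / n, max(1 / n, freq))
        if all(pi >= 1 - 1 / n for pi in p):
            break
    return p

def onemax(x):
    # OneMax: the number of one-bits; the abstract's bounds concern this.
    return sum(x)
```

    Dropping the clamping corresponds to the second regime $\mu \ge c'\sqrt{n}\log n$, where the bound $O(\mu\sqrt{n})$ typically holds even without the borders.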