    Unbiased Black-Box Complexities of Jump Functions

    We analyze the unbiased black-box complexity of jump functions with small, medium, and large sizes of the fitness plateau surrounding the optimal solution. Among other results, we show that when the jump size is $(1/2 - \varepsilon)n$, that is, when only a small constant fraction of the fitness values is visible, the unbiased black-box complexities for arities $3$ and higher are of the same order as those for the simple \textsc{OneMax} function. Even for the extreme jump function, in which all but the two fitness values $n/2$ and $n$ are blanked out, polynomial-time mutation-based (i.e., unary unbiased) black-box optimization algorithms exist. This is quite surprising given that for the extreme jump function almost the whole search space (all but a $\Theta(n^{-1/2})$ fraction) is a plateau of constant fitness. To prove these results, we introduce new tools for the analysis of unbiased black-box complexities, for example, selecting the new parent individual not only by comparing the fitnesses of the competing search points, but also by taking into account the (empirical) expected fitnesses of their offspring.
    Comment: This paper is based on results presented in the conference versions [GECCO 2011] and [GECCO 2014].
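
    A small illustration of the fitness structure described in this abstract (a minimal Python sketch, assuming the common convention that blanked-out fitness values are reported as 0; the paper's exact encoding of the plateau may differ):

        # Minimal sketch of jump-type fitness functions on bit strings.
        # Assumption: blanked-out fitness values are reported as 0.

        def onemax(x):
            """Number of ones in the bit string x (a list of 0s and 1s)."""
            return sum(x)

        def jump(x, ell):
            """Jump function with jump size ell: the OneMax value is visible
            only outside the plateau of width ell surrounding the optimum."""
            om, n = onemax(x), len(x)
            if om == n or om <= n - ell:
                return om
            return 0  # plateau: fitness information is blanked out

        def extreme_jump(x):
            """Extreme jump: only the fitness values n/2 and n stay visible."""
            om, n = onemax(x), len(x)
            if om == n or om == n // 2:
                return om
            return 0  # everything else is a plateau of constant fitness

        if __name__ == "__main__":
            x = [1, 0, 1, 1, 0, 1, 1, 1]
            print(jump(x, 2), extreme_jump(x))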

    Reducing the Arity in Unbiased Black-Box Complexity

    We show that for all $1 < k \leq \log n$ the $k$-ary unbiased black-box complexity of the $n$-dimensional \textsc{OneMax} function class is $O(n/k)$. This indicates that the power of higher arity operators is much stronger than what the previous $O(n/\log k)$ bound by Doerr et al. (Faster black-box algorithms through higher arity operators, Proc. of FOGA 2011, pp. 163--172, ACM, 2011) suggests. The key to this result is an encoding strategy, which might be of independent interest. We show that, using $k$-ary unbiased variation operators only, we may simulate an unrestricted memory of size $O(2^k)$ bits.
    Comment: An extended abstract of this paper has been accepted for inclusion in the proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2012).
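
    For readers unfamiliar with the unbiased black-box model, the two classic low-arity examples can be sketched in a few lines of Python; this only illustrates what unary and binary unbiased variation operators look like, not the paper's $O(2^k)$-bit encoding strategy:

        # Sketch of the two classic unbiased variation operators of low arity:
        # standard bit mutation (unary) and uniform crossover (binary). Both
        # treat all positions and both bit values symmetrically.

        import random

        def standard_bit_mutation(x, p=None):
            """Unary unbiased operator: flip each bit independently with
            probability p (default 1/n)."""
            n = len(x)
            if p is None:
                p = 1.0 / n
            return [b ^ 1 if random.random() < p else b for b in x]

        def uniform_crossover(x, y):
            """Binary unbiased operator: copy the common bit where the parents
            agree, choose uniformly at random where they disagree."""
            return [a if a == b else random.randint(0, 1) for a, b in zip(x, y)]

        if __name__ == "__main__":
            random.seed(0)
            x = [0, 1, 1, 0, 1, 0, 0, 1]
            y = [0, 1, 0, 0, 1, 1, 0, 1]
            print(standard_bit_mutation(x))
            print(uniform_crossover(x, y))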

    Complexity Theory for Discrete Black-Box Optimization Heuristics

    A predominant topic in the theory of evolutionary algorithms and, more generally, in the theory of randomized black-box optimization techniques is running time analysis. Running time analysis aims at understanding the performance of a given heuristic on a given problem by bounding the number of function evaluations that the heuristic needs to identify a solution of a desired quality. As in general algorithms theory, this running time perspective is most useful when it is complemented by a meaningful complexity theory that studies the limits of algorithmic solutions. In the context of discrete black-box optimization, several black-box complexity models have been developed to analyze the best possible performance that a black-box optimization algorithm can achieve on a given problem. The models differ in the classes of algorithms to which their lower bounds apply. This way, black-box complexity contributes to a better understanding of how certain algorithmic choices (such as the amount of memory used by a heuristic, its selective pressure, or properties of the strategies that it uses to create new solution candidates) influence performance. In this chapter we review the different black-box complexity models that have been proposed in the literature, survey the bounds that have been obtained for these models, and discuss how the interplay of running time analysis and black-box complexity can inspire new algorithmic solutions to well-researched problems in evolutionary computation. We also discuss several interesting open questions for future work.
    Comment: This survey article is to appear (in a slightly modified form) in the book "Theory of Randomized Search Heuristics in Discrete Search Spaces", which will be published by Springer in 2018. The book is edited by Benjamin Doerr and Frank Neumann. Missing numbers of pointers to other chapters of this book will be added as soon as possible.
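
    Since every model surveyed here charges an algorithm only for its function evaluations, the underlying query model can be illustrated with a short Python sketch (an illustrative rendering of how the accounting works, not code from the chapter; the heuristic shown is a simple single-bit-flip hill climber on OneMax):

        # Sketch of the black-box query model: the optimizer learns about the
        # objective only through evaluations, and its cost is the number of
        # evaluations until an optimum is queried.

        import random

        class BlackBox:
            def __init__(self, f, optimum_value):
                self._f = f
                self._optimum_value = optimum_value
                self.queries = 0
                self.solved_after = None

            def __call__(self, x):
                self.queries += 1
                value = self._f(x)
                if value == self._optimum_value and self.solved_after is None:
                    self.solved_after = self.queries
                return value

        def random_local_search(box, n, budget=10**6):
            """Flip one uniformly chosen bit per step, keep the offspring if it
            is at least as good as the parent."""
            x = [random.randint(0, 1) for _ in range(n)]
            fx = box(x)
            while box.solved_after is None and box.queries < budget:
                y = list(x)
                y[random.randrange(n)] ^= 1
                fy = box(y)
                if fy >= fx:
                    x, fx = y, fy
            return box.solved_after

        if __name__ == "__main__":
            random.seed(1)
            n = 32
            box = BlackBox(lambda x: sum(x), optimum_value=n)  # OneMax
            print("function evaluations until optimum:", random_local_search(box, n))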

    Better Fixed-Arity Unbiased Black-Box Algorithms

    In their GECCO'12 paper, Doerr and Doerr proved that the $k$-ary unbiased black-box complexity of OneMax on $n$ bits is $O(n/k)$ for $2 \le k \le O(\log n)$. We propose an alternative strategy for achieving this unbiased black-box complexity when $3 \le k \le \log_2 n$. While it is based on the same idea of block-wise optimization, it uses $k$-ary unbiased operators in a different way. For each block of size $2^{k-1} - 1$ we set up, in $O(k)$ queries, a virtual coordinate system, which enables us to use an arbitrary unrestricted algorithm to optimize this block. This is possible because this coordinate system introduces a bijection between unrestricted queries and a subset of $k$-ary unbiased operators. We note that this technique does not depend on OneMax being solved and can be used in more general contexts. Together, this constitutes an algorithm which is conceptually simpler than the one by Doerr and Doerr and at the same time achieves better constant factors in the asymptotic notation. Our algorithm works in $(2 + o(1)) \cdot n/(k-1)$ queries, where the $o(1)$ term relates to $k$. Our experimental evaluation of this algorithm shows its efficiency already for $3 \le k \le 6$.
    Comment: An extended abstract will appear at GECCO'1
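
    As a quick sanity check on the stated bound (back-of-the-envelope arithmetic for sample parameters, not taken from the paper):

        % Evaluating the stated query bound (2 + o(1)) \cdot n/(k-1) for sample
        % parameters, ignoring the o(1) term; illustrative arithmetic only.
        \[
          k = 10,\ n = 10^6:\qquad
          \frac{2n}{k-1} = \frac{2 \cdot 10^6}{9} \approx 2.2 \cdot 10^5 \text{ queries},
        \]
        \[
          \text{using } \Big\lceil \tfrac{n}{2^{k-1}-1} \Big\rceil
          = \Big\lceil \tfrac{10^6}{511} \Big\rceil = 1957
          \text{ blocks of size } 2^{k-1}-1 = 511.
        \]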

    The Right Mutation Strength for Multi-Valued Decision Variables

    Bit strings are the most common representation in evolutionary computation. They are ideal for modeling binary decision variables, but less useful for variables taking more than two values. With very little theoretical work existing on how to use evolutionary algorithms for such optimization problems, we study the run time of simple evolutionary algorithms on some OneMax-like functions defined over $\Omega = \{0, 1, \dots, r-1\}^n$. More precisely, we regard a variety of problem classes requesting the component-wise minimization of the distance to an unknown target vector $z \in \Omega$. For such problems we see a crucial difference in how we extend the standard-bit mutation operator to these multi-valued domains. While it is natural to select each position of the solution vector to be changed independently with probability $1/n$, there are various ways to then change such a position. If we change each selected position to a random value different from the original one, we obtain an expected run time of $\Theta(nr \log n)$. If we change each selected position by either $+1$ or $-1$ (random choice), the optimization time reduces to $\Theta(nr + n \log n)$. If we use a random mutation strength $i \in \{0, 1, \ldots, r-1\}^n$ with probability inversely proportional to $i$ and change the selected position by either $+i$ or $-i$ (random choice), then the optimization time becomes $\Theta(n \log(r)(\log(n) + \log(r)))$, bringing down the dependence on $r$ from linear to polylogarithmic. One of our results depends on a new variant of the lower bounding multiplicative drift theorem.
    Comment: An extended abstract of this work is to appear at GECCO 201
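
    The three mutation variants compared above translate directly into code. Below is a minimal Python sketch under stated assumptions: positions are selected independently with probability $1/n$, out-of-range steps are clamped to the domain boundary (the paper may treat boundaries differently), and the harmonic mutation strength is drawn per selected position from $\{1, \dots, r-1\}$ with probability proportional to $1/i$:

        # Sketch of the three mutation operators for x in {0, ..., r-1}^n
        # compared in the abstract. Assumptions (not from the paper):
        # out-of-range steps are clamped, and the "inversely proportional"
        # strength is drawn from {1, ..., r-1} per selected position.

        import random

        def _selected(n, p):
            """Indices mutated this round: each position independently w.p. p."""
            return [j for j in range(n) if random.random() < p]

        def uniform_mutation(x, r):
            """Change each selected position to a uniform value different from
            the current one; expected run time Theta(n r log n)."""
            n, y = len(x), list(x)
            for j in _selected(n, 1.0 / n):
                y[j] = random.choice([v for v in range(r) if v != x[j]])
            return y

        def plus_minus_one_mutation(x, r):
            """Change each selected position by +1 or -1 (random choice);
            Theta(n r + n log n)."""
            n, y = len(x), list(x)
            for j in _selected(n, 1.0 / n):
                y[j] = min(r - 1, max(0, x[j] + random.choice((-1, 1))))
            return y

        def harmonic_mutation(x, r):
            """Draw a strength i with probability proportional to 1/i and change
            each selected position by +i or -i; Theta(n log(r)(log(n)+log(r)))."""
            n, y = len(x), list(x)
            weights = [1.0 / i for i in range(1, r)]
            for j in _selected(n, 1.0 / n):
                i = random.choices(range(1, r), weights=weights)[0]
                y[j] = min(r - 1, max(0, x[j] + random.choice((-i, i))))
            return y

        if __name__ == "__main__":
            random.seed(2)
            x, r = [3, 0, 7, 2, 5, 1, 4, 6], 8
            print(uniform_mutation(x, r))
            print(plus_minus_one_mutation(x, r))
            print(harmonic_mutation(x, r))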