Black-Box Complexity of the Binary Value Function
The binary value function, or BinVal, has appeared in several studies in the theory of evolutionary computation as one of the extreme examples of linear pseudo-Boolean functions. Its unbiased black-box complexity was previously bounded from above in terms of the problem size n. We augment this result with a sharper upper bound, which is more precise for many values of n, and we also present a lower bound. Additionally, we prove that BinVal is an easiest function among all unimodal pseudo-Boolean functions, at least for unbiased algorithms.

Comment: 24 pages, one figure. An extended two-page abstract of this work will appear in the proceedings of the Genetic and Evolutionary Computation Conference, GECCO'1
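As a concrete illustration of the function under study (a minimal sketch; the least-significant-first bit order is our arbitrary choice, not dictated by the abstract):

```python
# BinVal interprets a bit string as a binary number, so bit i contributes
# 2**i. It is an "extreme" linear pseudo-Boolean function because the
# weights differ maximally: each bit outweighs all lower-order bits combined.

def binval(x):
    """BinVal(x) = sum_i 2**i * x_i for a bit list x (x[0] least significant)."""
    return sum(2 ** i for i, bit in enumerate(x) if bit)

# The most significant bit alone dominates every lower-order bit together:
assert binval([1, 1, 1, 0]) < binval([0, 0, 0, 1])  # 7 < 8
```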
Reducing the Arity in Unbiased Black-Box Complexity
We show that for all 2 ≤ k ≤ log₂ n, the k-ary unbiased black-box complexity of the n-dimensional \onemax function class is O(n/k). This indicates that the power of higher-arity operators is much stronger than what the previous bound by Doerr et al. (Faster black-box algorithms through higher arity operators, Proc. of FOGA 2011, pp. 163--172, ACM, 2011) suggests.
The key to this result is an encoding strategy, which might be of independent interest. We show that, using k-ary unbiased variation operators only, we may simulate an unrestricted memory of size O(2^k) bits.

Comment: An extended abstract of this paper has been accepted for inclusion in the proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2012).
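To make the notion of an unbiased operator concrete, here is a minimal sketch of the simplest (unary) case; this is our own illustration, not the paper's k-ary construction:

```python
import random

# A unary unbiased variation operator must treat all positions and both bit
# values symmetrically: it may only choose HOW MANY bits to flip, never which
# specific positions or values to prefer. Flipping a uniformly random set of
# exactly r distinct positions satisfies both invariance conditions.

def unbiased_flip(x, r):
    """Return a copy of bit list x with r uniformly chosen positions flipped."""
    y = list(x)
    for i in random.sample(range(len(x)), r):
        y[i] = 1 - y[i]
    return y
```

Higher-arity operators generalize this by letting the offspring distribution depend on several parents at once (again only in a position- and value-symmetric way), which is what the memory-encoding trick above exploits.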
The Sampling-and-Learning Framework: A Statistical View of Evolutionary Algorithms
Evolutionary algorithms (EAs), a large class of general-purpose optimization algorithms inspired by natural phenomena, are widely used in industrial optimization and often show excellent performance. This paper presents an attempt at revealing their general power from a statistical view. By summarizing a large range of EAs into the sampling-and-learning framework, we show that the framework directly admits a general analysis of the probable-absolute-approximate (PAA) query complexity. We particularly focus on the framework with the learning subroutine restricted to binary classification, which results in the sampling-and-classification (SAC) algorithms. With the help of learning theory, we obtain a general upper bound on the PAA query complexity of SAC algorithms. We further compare SAC algorithms with uniform search in different situations. Under the error-target independence condition, we show that SAC algorithms can achieve a polynomial, but not super-polynomial, speedup over uniform search. Under the one-side-error condition, we show that a super-polynomial speedup can be achieved. This work only touches the surface of the framework; its power under other conditions is still open.
Benchmarking a Genetic Algorithm with Configurable Crossover Probability
We investigate a family of Genetic Algorithms (GAs) which create offspring either by mutation or by recombining two randomly chosen parents. By scaling the crossover probability, we can thus interpolate from a fully mutation-only algorithm to a fully crossover-based GA. We analyze, by empirical means, how the performance depends on the interplay of population size and crossover probability.
Our comparison on 25 pseudo-Boolean optimization problems reveals an
advantage of crossover-based configurations on several easy optimization tasks,
whereas the picture for more complex optimization problems is rather mixed.
Moreover, we observe that the ``fast'' mutation scheme with its power-law
distributed mutation strengths outperforms standard bit mutation on complex
optimization tasks when it is combined with crossover, but performs worse in
the absence of crossover.
We then take a closer look at the surprisingly good performance of the
crossover-based GAs on the well-known LeadingOnes benchmark
problem. We observe that the optimal crossover probability increases with
increasing population size. At the same time, it decreases with
increasing problem dimension, indicating that the advantages of crossover
are not visible in the asymptotic view classically applied in runtime analysis.
We therefore argue that a mathematical investigation for fixed dimensions might
help us observe effects which are not visible when focusing exclusively on
asymptotic performance bounds.
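The offspring rule and the two ingredients discussed above (configurable crossover probability, power-law mutation strengths, the LeadingOnes benchmark) can be sketched as follows; parameter names and the choice of uniform crossover are our assumptions for illustration:

```python
import random

def leadingones(x):
    """LeadingOnes benchmark: number of consecutive one-bits from the left."""
    c = 0
    for b in x:
        if not b:
            break
        c += 1
    return c

def power_law_strength(n, beta=1.5):
    """'Fast' mutation: sample strength r in {1..n//2} with P(r) ~ r**-beta."""
    rs = range(1, n // 2 + 1)
    return random.choices(rs, weights=[r ** -beta for r in rs])[0]

def offspring(parents, pc, n):
    """With probability pc recombine two random parents (uniform crossover),
    otherwise mutate one random parent with a power-law flip count."""
    if random.random() < pc:
        p1, p2 = random.sample(parents, 2)
        return [random.choice(pair) for pair in zip(p1, p2)]
    p = random.choice(parents)
    flip = set(random.sample(range(n), power_law_strength(n)))
    return [1 - b if i in flip else b for i, b in enumerate(p)]
```

Setting pc = 0 recovers the mutation-only algorithm and pc = 1 the fully crossover-based GA, which is exactly the interpolation the study varies.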
More Effective Crossover Operators for the All-Pairs Shortest Path Problem
The all-pairs shortest path problem is the first non-artificial problem for which it was shown that adding crossover can significantly speed up a mutation-only evolutionary algorithm. Recently, the analysis of this algorithm was refined and it was shown to have an expected optimization time (w.r.t. the number of fitness evaluations) of Θ(n^3.25 (log n)^0.25). In contrast to this simple algorithm, evolutionary algorithms used in practice usually employ refined recombination strategies in order to avoid the creation of infeasible offspring. We study extensions of the basic algorithm by two such concepts which are central in recombination, namely repair mechanisms and parent selection. We show that repairing infeasible offspring leads to an improved expected optimization time of O(n^3.2 (log n)^0.2). As a second part of our study, we prove that choosing parents that guarantee feasible offspring results in an optimization time of O(n^3 log n). Both results show that already simple adjustments of the recombination operator can asymptotically improve the runtime of evolutionary algorithms.

© 2012 Elsevier B.V.
Benjamin Doerr, Daniel Johannsen, Timo Kötzing, Frank Neumann, Madeleine Theile