The Sampling-and-Learning Framework: A Statistical View of Evolutionary Algorithms
Evolutionary algorithms (EAs), a large class of general-purpose optimization
algorithms inspired by natural phenomena, are widely used in industrial
optimization and often show excellent performance. This paper attempts to
reveal their general power from a statistical view of EAs. By summarizing a
large range of EAs into the sampling-and-learning framework, we show that the
framework directly admits a general analysis of the
probable-absolute-approximate (PAA) query complexity. We particularly focus on
the framework with the learning subroutine restricted to binary
classification, which results in the sampling-and-classification (SAC)
algorithms. With the help of learning theory, we obtain a general upper
bound on the PAA query complexity of SAC algorithms. We further compare SAC
algorithms with uniform search in different situations. Under the
error-target independence condition, we show that SAC algorithms can achieve
a polynomial, but not super-polynomial, speedup over uniform search.
Under the one-side-error condition, we show that a super-polynomial speedup
can be achieved. This work only touches the surface of the framework; its
power under other conditions remains open.
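
To make the sampling-and-classification loop concrete, here is a minimal
Python sketch of a SAC-style algorithm: sample solutions, label the better
half as positive, fit a binary classifier, and bias the next round of
sampling toward the region classified as good, mixed with uniform
exploration. The toy objective, the axis-aligned-box hypothesis class, and
the exploration rate lam are illustrative assumptions, not the paper's
specific instantiation.

# A minimal sketch of a sampling-and-classification (SAC) style loop,
# assuming a continuous maximization problem on [0, 1]^d. The box-shaped
# hypothesis class and the parameters below are illustrative choices only.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Toy objective: peaked at 0.5 in every coordinate.
    return -np.sum((x - 0.5) ** 2, axis=-1)

def sac(d=5, n_per_iter=100, iters=30, lam=0.2):
    best_x, best_y = None, -np.inf
    box_lo, box_hi = np.zeros(d), np.ones(d)   # current "positive" region
    for _ in range(iters):
        # Sample: mix uniform exploration with draws from the learned box.
        explore = rng.random(n_per_iter) < lam
        xs = np.where(explore[:, None],
                      rng.random((n_per_iter, d)),
                      box_lo + rng.random((n_per_iter, d)) * (box_hi - box_lo))
        ys = f(xs)
        # Classify: label the better half of the samples as "good".
        thresh = np.median(ys)
        good = xs[ys >= thresh]
        # Learn: fit the simplest binary classifier, an axis-aligned
        # bounding box around the good samples.
        box_lo, box_hi = good.min(axis=0), good.max(axis=0)
        i = np.argmax(ys)
        if ys[i] > best_y:
            best_x, best_y = xs[i], ys[i]
    return best_x, best_y

print(sac())

The choice of hypothesis class controls the classification error, which is
the quantity the general upper bound on the PAA query complexity is phrased
in terms of.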
Better Runtime Guarantees Via Stochastic Domination
Apart from a few exceptions, the mathematical runtime analysis of evolutionary
algorithms is mostly concerned with expected runtimes. In this work, we argue
that stochastic domination is a notion that should be used more frequently in
this area. Stochastic domination allows one to formulate much more informative
performance guarantees, it allows one to decouple the analysis into the
truly algorithmic part of establishing a domination statement and the
probability-theoretic part of deriving the desired probabilistic guarantees
from that statement, and it helps in finding simpler and more natural proofs.
As particular results, we prove a fitness level theorem which shows that the
runtime is dominated by a sum of independent geometric random variables, we
prove the first tail bounds for several classic runtime problems, and we give
a short and natural proof of Witt's result that the runtime of any
mutation-based algorithm on any function with a unique optimum is subdominated
by the runtime of a variant of the (1+1) EA on the OneMax function. As side
products, we determine the fastest unbiased (1+1) algorithm for the
LeadingOnes benchmark problem, both in the general case and when restricted
to static mutation operators, and we prove a Chernoff-type tail bound for
sums of independent coupon-collector distributions.
Comment: Significantly extended version of a paper that appeared in the
proceedings of EvoCOP 201
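
The fitness level theorem above can be illustrated empirically. The Python
sketch below simulates the (1+1) EA on OneMax and compares its runtime
quantiles with samples from the dominating sum of independent geometric
random variables, using the standard level-leaving probability lower bound
p_i = ((n - i)/n)(1 - 1/n)^(n-1). The simulation is a sanity check of the
domination statement, not part of the paper's proof.

# Empirical check: the (1+1) EA runtime on OneMax should be stochastically
# dominated by a sum of independent Geometric(p_i) variables, where p_i
# lower-bounds the probability of leaving fitness level i.
import numpy as np

rng = np.random.default_rng(1)
n = 50

def onemax_runtime():
    x = rng.integers(0, 2, n)
    t = 0
    while x.sum() < n:
        flips = rng.random(n) < 1.0 / n          # standard bit mutation
        y = np.where(flips, 1 - x, x)
        if y.sum() >= x.sum():                   # elitist acceptance
            x = y
        t += 1
    return t

def dominating_sum():
    # Leave level i with probability >= ((n - i)/n) * (1 - 1/n)^(n - 1).
    c = (1 - 1 / n) ** (n - 1)
    ps = np.array([(n - i) / n * c for i in range(n)])
    return rng.geometric(ps).sum()

ea = np.sort([onemax_runtime() for _ in range(200)])
dom = np.sort([dominating_sum() for _ in range(200)])
# Domination suggests every quantile of the EA runtime should sit at or
# below the corresponding quantile of the geometric sum.
for q in (0.5, 0.9, 0.99):
    i = int(q * 199)
    print(f"q={q}: EA {ea[i]}  vs  dominating sum {dom[i]}")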
Probabilistic Tools for the Analysis of Randomized Optimization Heuristics
This chapter collects several probabilistic tools that have proved useful in
the analysis of randomized search heuristics. These include classic material
such as the Markov, Chebyshev, and Chernoff inequalities, but also
lesser-known topics such as stochastic domination and coupling, as well as
Chernoff bounds for geometrically distributed random variables and for
negatively correlated random variables. Most of the results presented here
have appeared previously; some, however, only in recent conference
publications. While the focus is on collecting tools for the analysis of
randomized search heuristics, many of them may be useful in the analysis of
classic randomized algorithms or discrete random structures as well.
Comment: 91 pages
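
As a small illustration of the kind of tail bound the chapter collects for
sums of geometric random variables, the Python sketch below checks the
classic coupon-collector bound P[T > n ln n + c n] <= e^(-c) against
simulation. The bound and its constants are standard; nothing here is
specific to the chapter.

# Empirical check of the classic coupon-collector tail bound
# P[T > n ln n + c n] <= e^{-c}, where T is a sum of independent
# Geometric((n - i)/n) phases.
import numpy as np

rng = np.random.default_rng(2)
n, trials = 100, 5000

def coupon_collector():
    # Collecting the i-th new coupon takes a Geometric((n - i)/n) time.
    return rng.geometric([(n - i) / n for i in range(n)]).sum()

T = np.array([coupon_collector() for _ in range(trials)])
for c in (0.5, 1.0, 2.0):
    threshold = n * np.log(n) + c * n
    empirical = np.mean(T > threshold)
    print(f"c={c}: empirical tail {empirical:.4f}  <=  bound {np.exp(-c):.4f}")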