Probabilistic Tools for the Analysis of Randomized Optimization Heuristics
This chapter collects several probabilistic tools that proved to be useful in
the analysis of randomized search heuristics. This includes classic material
like Markov, Chebyshev and Chernoff inequalities, but also lesser known topics
like stochastic domination and coupling or Chernoff bounds for geometrically
distributed random variables and for negatively correlated random variables.
Most of the results presented here have appeared previously, some, however,
only in recent conference publications. While the focus is on collecting tools
for the analysis of randomized search heuristics, many of these may be useful
as well in the analysis of classic randomized algorithms or discrete random
structures.

Comment: 91 pages.
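As a flavor of the material collected there, one standard formulation of the multiplicative Chernoff bounds for a sum $X = X_1 + \dots + X_n$ of independent $\{0,1\}$-valued random variables with $\mu = E[X]$ reads as follows (a classic statement given here for illustration, not quoted from the chapter):

    % Multiplicative Chernoff bounds, standard form:
    \[ \Pr[X \ge (1+\delta)\mu] \;\le\; e^{-\delta^2 \mu / 3}, \qquad 0 \le \delta \le 1, \]
    \[ \Pr[X \le (1-\delta)\mu] \;\le\; e^{-\delta^2 \mu / 2}, \qquad 0 \le \delta \le 1. \]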
Upper Bounds on the Runtime of the Univariate Marginal Distribution Algorithm on OneMax
A runtime analysis of the Univariate Marginal Distribution Algorithm (UMDA)
is presented on the OneMax function for wide ranges of its parameters $\mu$
and $\lambda$. If $\mu \ge c \log n$ for some constant $c > 0$ and
$\lambda = (1 + \Theta(1))\mu$, a general bound $O(\mu n)$ on the expected runtime
is obtained. This bound crucially assumes that all marginal probabilities of
the algorithm are confined to the interval $[1/n, 1 - 1/n]$. If
$\mu \ge c' \sqrt{n} \log n$ for a constant $c' > 0$ and $\lambda = (1 + \Theta(1))\mu$, the
behavior of the algorithm changes and the bound on the expected runtime becomes
$O(\mu \sqrt{n})$, which typically even holds if the borders on the marginal
probabilities are omitted.
The results supplement the recently derived lower bound $\Omega(\mu \sqrt{n} + n \log n)$
by Krejca and Witt (FOGA 2017) and turn out to be
tight for the two very different values $\mu = c \log n$ and $\mu = c' \sqrt{n} \log n$. They also improve the previously best known upper bound $O(n \log n \log\log n)$ by Dang and Lehre (GECCO 2015).

Comment: Version 4: added illustrations and experiments; improved presentation
in Section 2.2; to appear in Algorithmica; the final publication is available
at Springer via http://dx.doi.org/10.1007/s00453-018-0463-
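To fix notation, the following is a minimal sketch of the UMDA on OneMax with the marginal probabilities confined to $[1/n, 1 - 1/n]$ as in the first bound above; the parameter values and implementation details (such as tie-breaking in truncation selection) are illustrative assumptions, not taken from the paper.

    import random

    def onemax(x):
        return sum(x)  # fitness: number of one-bits

    def umda(n, mu, lam, max_gens=100_000):
        p = [0.5] * n  # marginal probabilities, one per bit position
        for gen in range(max_gens):
            pop = [[1 if random.random() < p[i] else 0 for i in range(n)]
                   for _ in range(lam)]
            pop.sort(key=onemax, reverse=True)
            if onemax(pop[0]) == n:
                return gen
            best = pop[:mu]  # truncation selection of the mu best
            for i in range(n):
                freq = sum(x[i] for x in best) / mu
                p[i] = min(max(freq, 1 / n), 1 - 1 / n)  # keep borders
        return max_gens

    # illustrative parameters, roughly in the mu >= c' sqrt(n) log n regime
    print(umda(n=100, mu=50, lam=100))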
Runtime Analysis for Self-adaptive Mutation Rates
We propose and analyze a self-adaptive version of the $(1,\lambda)$
evolutionary algorithm in which the current mutation rate is part of the
individual and thus also subject to mutation. A rigorous runtime analysis on
the OneMax benchmark function reveals that a simple local mutation scheme for
the rate leads to an expected optimization time (number of fitness evaluations)
of $O(n\lambda/\log\lambda + n\log n)$ when $\lambda$ is at least $C \ln n$ for
some constant $C > 0$. For all values of $\lambda \ge C \ln n$, this
performance is asymptotically best possible among all $\lambda$-parallel
mutation-based unbiased black-box algorithms.
Our result shows that self-adaptation in evolutionary computation can find
complex optimal parameter settings on the fly. At the same time, it proves that
a relatively complicated self-adjusting scheme for the mutation rate proposed
by Doerr, Gießen, Witt, and Yang (GECCO 2017) can be replaced by our simple
endogenous scheme.
On the technical side, the paper contributes new tools for the analysis of
two-dimensional drift processes arising in the analysis of dynamic parameter
choices in EAs, including bounds on occupation probabilities in processes with
non-constant drift.
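For concreteness, here is a minimal sketch of one natural instantiation of such a self-adaptive $(1,\lambda)$ EA on OneMax: each offspring first perturbs the inherited rate locally (halving or doubling it), then applies standard bit mutation with the perturbed rate. The rate bounds and tie-breaking below are illustrative choices and may differ from the scheme analyzed in the paper.

    import random

    def onemax(x):
        return sum(x)

    def self_adaptive_ea(n, lam, max_evals=1_000_000):
        x = [random.randint(0, 1) for _ in range(n)]
        rate = 2.0                 # mutation strength r, i.e. rate r/n
        evals = 0
        while onemax(x) < n and evals < max_evals:
            best = None
            for _ in range(lam):
                # mutate the rate first: halve or double, kept in [2, n/4]
                r = min(max(rate * random.choice((0.5, 2.0)), 2.0), n / 4)
                # then standard bit mutation with probability r/n per bit
                y = [b ^ (random.random() < r / n) for b in x]
                evals += 1
                if best is None or onemax(y) > best[0]:
                    best = (onemax(y), y, r)
            _, x, rate = best      # comma selection: best offspring replaces parent
        return evals

    print(self_adaptive_ea(n=100, lam=20))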
An Exponential Lower Bound for the Runtime of the cGA on Jump Functions
In the first runtime analysis of an estimation-of-distribution algorithm
(EDA) on the multi-modal jump function class, Hasenöhrl and Sutton (GECCO
2018) proved that the runtime of the compact genetic algorithm with suitable
parameter choice on jump functions with high probability is at most polynomial
(in the dimension) if the jump size is at most logarithmic (in the dimension),
and is at most exponential in the jump size if the jump size is
super-logarithmic. The exponential runtime guarantee was achieved with a
hypothetical population size that is also exponential in the jump size.
Consequently, this setting cannot lead to a runtime better than exponential in the jump size.
In this work, we show that any choice of the hypothetical population size
leads to a runtime that, with high probability, is at least exponential in the
jump size. This result might be the first non-trivial exponential lower bound
for EDAs that holds for arbitrary parameter settings.

Comment: To appear in the Proceedings of FOGA 2019. arXiv admin note: text
overlap with arXiv:1903.1098
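For reference, a minimal sketch of the compact genetic algorithm on the classic jump function follows; the hypothetical population size $K$ and the frequency borders $[1/n, 1 - 1/n]$ are the usual textbook choices, and all concrete parameter values are illustrative only.

    import random

    def jump(x, k):
        # classic jump function: a OneMax slope with a gap of size k
        n, ones = len(x), sum(x)
        return k + ones if ones <= n - k or ones == n else n - ones

    def cga(n, k, K, max_iters=1_000_000):
        p = [0.5] * n                    # frequency vector
        sample = lambda: [1 if random.random() < q else 0 for q in p]
        for t in range(max_iters):
            x, y = sample(), sample()
            if jump(x, k) < jump(y, k):
                x, y = y, x              # make x the winner
            if sum(x) == n:              # all-ones global optimum found
                return t
            for i in range(n):
                if x[i] != y[i]:         # shift frequency toward winner by 1/K
                    p[i] += 1 / K if x[i] == 1 else -1 / K
                    p[i] = min(max(p[i], 1 / n), 1 - 1 / n)
        return max_iters

    print(cga(n=50, k=2, K=500))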
Comma Selection Outperforms Plus Selection on OneMax with Randomly Planted Optima
It is an ongoing debate whether and how comma selection in evolutionary
algorithms helps to escape local optima. We propose a new benchmark function to
investigate the benefits of comma selection: OneMax with randomly planted local
optima, generated by frozen noise. We show that comma selection (the $(1,\lambda)$
EA) is faster than plus selection (the $(1+\lambda)$ EA) on this
benchmark, in a fixed-target scenario, and for offspring population sizes
$\lambda$ for which both algorithms behave differently. For certain parameters,
the $(1,\lambda)$ EA finds the target in $\Theta(n \ln n)$ evaluations, with
high probability (w.h.p.), while the $(1+\lambda)$ EA w.h.p. requires almost
$\Theta((n \ln n)^2)$ evaluations.
We further show that the advantage of comma selection is not arbitrarily
large: w.h.p. comma selection outperforms plus selection at most by a factor of
$O(n \ln n)$ for most reasonable parameter choices. We develop novel methods
for analysing frozen noise and give powerful and general fixed-target results
with tail bounds that are of independent interest.

Comment: An extended abstract will be published at GECCO 202
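To illustrate the two selection schemes being compared, here is a minimal sketch of a $(1,\lambda)$ EA (comma) versus a $(1+\lambda)$ EA (plus); for brevity the fitness is plain OneMax, since the planted-local-optima benchmark is a paper-specific construction not reproduced here.

    import random

    def onemax(x):
        return sum(x)

    def ea(n, lam, comma, max_gens=100_000):
        x = [random.randint(0, 1) for _ in range(n)]
        for gen in range(max_gens):
            if onemax(x) == n:
                return gen * lam         # total fitness evaluations
            offspring = [[b ^ (random.random() < 1 / n) for b in x]
                         for _ in range(lam)]
            best = max(offspring, key=onemax)
            # comma: best offspring always replaces the parent;
            # plus: only if it is at least as good as the parent
            if comma or onemax(best) >= onemax(x):
                x = best
        return max_gens * lam

    print(ea(n=100, lam=10, comma=True))   # (1,lambda) EA
    print(ea(n=100, lam=10, comma=False))  # (1+lambda) EA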