Improved Runtime Bounds for the Univariate Marginal Distribution Algorithm via Anti-Concentration
Unlike traditional evolutionary algorithms which produce offspring via
genetic operators, Estimation of Distribution Algorithms (EDAs) sample
solutions from probabilistic models which are learned from selected
individuals. It is hoped that EDAs may improve optimisation performance on
epistatic fitness landscapes by learning variable interactions. However, hardly
any rigorous results are available to support claims about the performance of
EDAs, even for fitness functions without epistasis. The expected runtime of the
Univariate Marginal Distribution Algorithm (UMDA) on OneMax was recently shown
to be in $O(n\lambda\log\lambda)$ by Dang and Lehre (GECCO 2015). Later, Krejca and Witt
(FOGA 2017) proved the lower bound $\Omega(\lambda\sqrt{n} + n\log n)$ via an involved drift analysis.
We prove an $O(n\lambda)$ bound, given some restrictions on the population size.
This implies the tight bound $\Theta(n\log n)$ when $\lambda = \Theta(\log n)$, matching the runtime
of classical EAs. Our analysis uses the level-based theorem and
anti-concentration properties of the Poisson-Binomial distribution. We expect
that these generic methods will facilitate further analysis of EDAs. Comment: 19 pages, 1 figure
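To make the algorithm under discussion concrete, here is a minimal, illustrative sketch of the UMDA on OneMax, assuming the usual parameter names (offspring population size lambda, selection size mu) and the standard margins 1/n and 1-1/n on the marginal probabilities; it is a sketch of the general scheme, not the exact formulation analysed in the paper.

```python
import random

def onemax(x):
    """OneMax: number of one-bits in the bit string x."""
    return sum(x)

def umda(n, lam, mu, max_gens=10_000):
    """Minimal UMDA sketch: sample lam solutions from independent Bernoulli
    marginals, select the mu best, refit the marginals from the selected set."""
    p = [0.5] * n  # initial model: uniform marginal probabilities
    for _ in range(max_gens):
        pop = [[1 if random.random() < p[i] else 0 for i in range(n)]
               for _ in range(lam)]
        pop.sort(key=onemax, reverse=True)
        selected = pop[:mu]
        if onemax(selected[0]) == n:
            return selected[0]
        # new marginal = frequency of ones among the selected individuals,
        # clamped to the margins [1/n, 1 - 1/n]
        for i in range(n):
            freq = sum(x[i] for x in selected) / mu
            p[i] = min(max(freq, 1.0 / n), 1.0 - 1.0 / n)
    return None
```

For instance, `umda(50, lam=100, mu=50)` runs the sketch on a 50-bit OneMax instance; the population sizes here are arbitrary illustrative choices, not the regimes covered by the theorems.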
Upper Bounds on the Runtime of the Univariate Marginal Distribution Algorithm on OneMax
A runtime analysis of the Univariate Marginal Distribution Algorithm (UMDA)
is presented on the OneMax function for wide ranges of its parameters $\mu$ and
$\lambda$. If $\mu \ge c\log n$ for some constant $c > 0$ and $\lambda = (1+\Theta(1))\mu$,
a general bound $O(\mu n)$ on the expected runtime is obtained. This bound crucially assumes that all
marginal probabilities of the algorithm are confined to the interval $[1/n, 1-1/n]$.
If $\mu \ge c'\sqrt{n}\log n$ for a constant $c' > 0$ and $\lambda = (1+\Theta(1))\mu$, the
behavior of the algorithm changes and the bound on the expected runtime becomes
$O(\mu\sqrt{n})$, which typically even holds if the borders on the marginal
probabilities are omitted.
The results supplement the recently derived lower bound $\Omega(\mu\sqrt{n} + n\log n)$
by Krejca and Witt (FOGA 2017) and turn out to be
tight for the two very different values $\mu = c\log n$ and $\mu = c'\sqrt{n}\log n$. They also improve the previously best known upper bound $O(n\lambda\log\lambda)$ by Dang and Lehre (GECCO 2015). Comment: Version 4: added illustrations and experiments; improved presentation
in Section 2.2; to appear in Algorithmica; the final publication is available
at Springer via http://dx.doi.org/10.1007/s00453-018-0463-
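For reference, the border restriction discussed above amounts to clamping every marginal probability into $[1/n, 1-1/n]$ after the frequency update. One standard way to write the update (the notation $x^{(1)},\dots,x^{(\mu)}$ for the $\mu$ selected individuals is introduced here for illustration and is not necessarily the paper's own):

$$
p_i \;\leftarrow\; \min\left\{ \max\left\{ \frac{1}{n},\; \frac{1}{\mu}\sum_{j=1}^{\mu} x_i^{(j)} \right\},\; 1 - \frac{1}{n} \right\}, \qquad i = 1,\dots,n.
$$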
From Understanding Genetic Drift to a Smart-Restart Parameter-less Compact Genetic Algorithm
One of the key difficulties in using estimation-of-distribution algorithms is
choosing the population size(s) appropriately: Too small values lead to genetic
drift, which can cause enormous difficulties. In the regime with no genetic
drift, however, often the runtime is roughly proportional to the population
size, which renders large population sizes inefficient.
Based on a recent quantitative analysis of which population sizes lead to
genetic drift, we propose a parameter-less version of the compact genetic
algorithm that automatically finds a suitable population size without spending
too much time in situations that are unfavorable due to genetic drift.
We prove a mathematical runtime guarantee for this algorithm and conduct an
extensive experimental analysis on four classic benchmark problems both without
and with additive centered Gaussian posterior noise. The former shows that
under a natural assumption, our algorithm has a performance very similar to the
one obtainable from the best problem-specific population size. The latter
confirms that missing the right population size in the original cGA can be
detrimental and that previous theory-based suggestions for the population size
can be far away from the right values; it also shows that our algorithm as well
as a previously proposed parameter-less variant of the cGA based on parallel
runs avoid such pitfalls. Comparing the two parameter-less approaches, ours
profits from its ability to abort runs which are likely to be stuck in a
genetic drift situation. Comment: 4 figures. Extended version of a paper appearing at GECCO 2020
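The sketch below illustrates the two ingredients named above: the standard compact GA update and a generic restart wrapper that aborts runs which have exhausted a per-run budget. The hypothetical population size K, the doubling schedule, and the budget proportional to K^2 are illustrative assumptions for this sketch, not the exact scheme or constants analysed in the paper.

```python
import random

def onemax(x):
    return sum(x)

def cga(n, K, budget):
    """Standard compact GA sketch: sample two offspring per iteration; the winner
    pulls each differing frequency by 1/K in its direction (with margins)."""
    p = [0.5] * n
    for _ in range(budget):
        a = [1 if random.random() < p[i] else 0 for i in range(n)]
        b = [1 if random.random() < p[i] else 0 for i in range(n)]
        if onemax(b) > onemax(a):
            a, b = b, a
        if onemax(a) == n:
            return a
        for i in range(n):
            if a[i] != b[i]:
                p[i] += (1.0 / K) if a[i] == 1 else (-1.0 / K)
                p[i] = min(max(p[i], 1.0 / n), 1.0 - 1.0 / n)  # margins
    return None  # budget exhausted; the run may be stuck, e.g. due to genetic drift

def restart_cga(n, K0=4, c=16, max_rounds=20):
    """Generic restart wrapper: try increasing values of K and abort each run
    after an assumed budget of c*K*K iterations (illustrative rule only)."""
    K = K0
    for _ in range(max_rounds):
        result = cga(n, K, budget=c * K * K)
        if result is not None:
            return result, K
        K *= 2  # assumed doubling schedule
    return None, K
```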
Runtime analysis of the univariate marginal distribution algorithm under low selective pressure and prior noise
We perform a rigorous runtime analysis for the Univariate Marginal
Distribution Algorithm on the LeadingOnes function, a well-known benchmark
function in the theory community of evolutionary computation with a high
correlation between decision variables. For a problem instance of size $n$, the
currently best known upper bound on the expected runtime is $O(n\lambda\log\lambda + n^2)$
(Dang and Lehre, GECCO 2015), while a
lower bound necessary to understand how the algorithm copes with variable
dependencies is still missing. Motivated by this, we show that the algorithm
requires an $e^{\Omega(\mu)}$ runtime with high probability and in expectation
if the selective pressure is low; otherwise, we obtain a lower bound of
$\Omega(n\lambda/\log(\lambda-\mu))$ on the expected runtime.
Furthermore, we consider for the first time the algorithm on this function under
a prior noise model and obtain an $O(n^2)$ expected runtime for the
optimal parameter settings. In the end, our theoretical results are accompanied
by empirical findings, which not only match the rigorous analyses but also
provide new insights into the behaviour of the algorithm. Comment: To appear at GECCO 2019, Prague, Czech Republic
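The sketch below illustrates one common "prior noise" model used in this line of work: with some probability, a single uniformly chosen bit is flipped before the fitness is evaluated, while the underlying search point is left unchanged. The helper name and the noise probability parameter are illustrative assumptions; the paper's exact noise model may differ in details.

```python
import random

def with_prior_noise(fitness, x, p_noise=0.1):
    """Prior (pre-evaluation) noise: with probability p_noise, flip one
    uniformly chosen bit of a copy of x before evaluating fitness."""
    y = list(x)
    if random.random() < p_noise:
        i = random.randrange(len(y))
        y[i] = 1 - y[i]
    return fitness(y)
```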
On the limitations of the univariate marginal distribution algorithm to deception and where bivariate EDAs might help
We introduce a new benchmark problem called Deceptive Leading Blocks (DLB) to
rigorously study the runtime of the Univariate Marginal Distribution Algorithm
(UMDA) in the presence of epistasis and deception. We show that simple
Evolutionary Algorithms (EAs) outperform the UMDA unless the selective pressure
$\mu/\lambda$ is extremely high, where $\mu$ and $\lambda$ are the parent and
offspring population sizes, respectively. More precisely, we show that the UMDA
with a parent population size of $\mu = \Omega(\log n)$ has an expected runtime
of $e^{\Omega(\mu)}$ on the DLB problem assuming any selective pressure
$\mu/\lambda \ge 14/1000$, as opposed to the expected runtime
of $O(n\lambda\log\lambda + n^3)$ for the non-elitist
$(\mu,\lambda)$ EA with $\mu/\lambda \le 1/e$. These results illustrate
inherent limitations of univariate EDAs against deception and epistasis, which
are common characteristics of real-world problems. In contrast, empirical
evidence reveals the efficiency of the bi-variate MIMIC algorithm on the DLB
problem. Our results suggest that one should consider EDAs with more complex
probabilistic models when optimising problems with some degree of epistasis and
deception. Comment: To appear in the 15th ACM/SIGEVO Workshop on Foundations of Genetic
Algorithms (FOGA XV), Potsdam, Germany
Level-Based Analysis of the Univariate Marginal Distribution Algorithm
Estimation of Distribution Algorithms (EDAs) are stochastic heuristics that
search for optimal solutions by learning and sampling from probabilistic
models. Despite their popularity in real-world applications, there is little
rigorous understanding of their performance. Even for the Univariate Marginal
Distribution Algorithm (UMDA) -- a simple population-based EDA assuming
independence between decision variables -- the optimisation time on the linear
problem OneMax was until recently undetermined. The incomplete theoretical
understanding of EDAs is mainly due to lack of appropriate analytical tools.
We show that the recently developed level-based theorem for non-elitist
populations, combined with anti-concentration results, yields upper bounds on the
expected optimisation time of the UMDA. This approach results in the bound
$O(n\lambda\log\lambda + n^2)$ on two problems, LeadingOnes and
BinVal, for population sizes $\lambda = \Omega(\log n)$, where $\mu$ and $\lambda$
are parameters of the algorithm. We also prove that the UMDA with
population sizes $\mu \in \Omega(\log n) \cap O(\sqrt{n})$ optimises
OneMax in expected time $O(\lambda n)$, and for larger population
sizes $\mu = \Omega(\sqrt{n}\log n)$, in expected time
$O(\lambda\sqrt{n})$. The facility and generality of our arguments
suggest that this is a promising approach to derive bounds on the expected
optimisation time of EDAs. Comment: To appear in Algorithmica Journal
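For readers unfamiliar with the benchmarks named above, minimal definitions of OneMax, LeadingOnes, and BinVal on bit strings of length n (standard formulations, restated here only for convenience):

```python
def onemax(x):
    """OneMax: number of one-bits in x."""
    return sum(x)

def leadingones(x):
    """LeadingOnes: length of the longest prefix of consecutive one-bits."""
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def binval(x):
    """BinVal: value of x read as a binary number, leftmost bit most significant."""
    value = 0
    for bit in x:
        value = 2 * value + bit
    return value
```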
Significance-based Estimation-of-Distribution Algorithms
Estimation-of-distribution algorithms (EDAs) are randomized search heuristics
that maintain a probabilistic model of the solution space. This model is
updated from iteration to iteration, based on the quality of the solutions
sampled according to the model. As previous works show, this short-term
perspective can lead to erratic updates of the model, in particular to
bit frequencies approaching a random boundary value. Such frequencies take a long
time to move back to the middle range, leading to significant performance
losses.
In order to overcome this problem, we propose a new EDA based on the classic
compact genetic algorithm (cGA) that takes into account a longer history of
samples and updates its model only with respect to information which it
classifies as statistically significant. We prove that this significance-based
compact genetic algorithm (sig-cGA) optimizes the commonly regarded benchmark
functions OneMax, LeadingOnes, and BinVal all in time $O(n\log n)$, a result
shown for no other EDA or evolutionary algorithm so far.
For the recently proposed scGA -- an EDA that tries to prevent erratic model
updates by imposing a bias toward the uniformly distributed model -- we prove that
it optimizes OneMax only in a time exponential in the hypothetical population
size $K$. Similarly, we show that the convex search algorithm cannot
optimize OneMax in polynomial time.
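As a rough illustration of the general idea described above (update a frequency only when the observed samples deviate significantly from what the current frequency would explain), a toy check could look as follows. The history length handling, the three-standard-deviation threshold, and the helper name are assumptions chosen for this sketch; the actual sig-cGA uses a different, carefully calibrated significance test and update rule.

```python
import math

def significant_surplus(history, p=0.5, sigmas=3.0):
    """Toy significance check: does the number of ones observed at one bit
    position deviate from its expectation under frequency p by more than
    `sigmas` standard deviations of the corresponding binomial distribution?"""
    k = len(history)
    ones = sum(history)
    mean = k * p
    std = math.sqrt(k * p * (1.0 - p))
    if std == 0.0:
        return 0
    if ones > mean + sigmas * std:
        return +1   # significantly many ones: move the frequency up
    if ones < mean - sigmas * std:
        return -1   # significantly many zeros: move the frequency down
    return 0        # no significant evidence: keep the frequency unchanged
```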
On the Runtime Dynamics of the Univariate Marginal Distribution Algorithm on Jump Functions
University of Minnesota M.S. thesis. May 2018. Major: Computer Science. Advisor: Andrew Sutton. 1 computer file (PDF); vi, 79 pages.
Solving jump functions by using traditional evolutionary algorithms (EAs) seems to be a challenging task. Mutation-only EAs have a hard time flipping the right number of bits to generate the optimum. To optimize a jump function, an algorithm must be able to execute an initial hill-climbing phase, after which a point across a large gap must be generated. We study a family of EAs called estimation of distribution algorithms (EDAs), which work differently from standard EAs. In EDAs, we do not store the actual bitstrings, but rather a probability distribution that is initially uniform and should evolve into a model that always generates the global optimum. We study an EDA called the Univariate Marginal Distribution Algorithm (UMDA) and analyze it on jump functions with gap k. We present experimental work on runtimes and on the probability of successfully solving the jump function for different values of k. We take an innovative approach and modify the UMDA by turning off selection. For this new algorithm we present a formal analysis in which, if certain conditions are met, we prove an upper bound on the time to generate the optimal all-ones bitstring. Lastly, we compare our results with those of a different EDA, the compact Genetic Algorithm (cGA), on the jump function. We mention pros and cons of both algorithms under different scenarios.
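A minimal sketch of the standard jump function with gap parameter k, as commonly defined in this line of work (restated here for convenience; the thesis may use a slightly different but equivalent formulation):

```python
def jump(x, k):
    """Jump_k: behaves like OneMax shifted by k, except on the 'gap' of strings
    with more than n-k but fewer than n ones, where the fitness drops to
    n minus the number of ones."""
    n = len(x)
    ones = sum(x)
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones
```

For example, with n = 10 and k = 3, every string with 8 or 9 ones scores worse than a string with 7 ones, which is exactly what makes the final jump to the optimum hard for mutation-only EAs.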