A Tight Runtime Analysis for the cGA on Jump Functions---EDAs Can Cross Fitness Valleys at No Extra Cost
We prove that the compact genetic algorithm (cGA) with hypothetical population size $\mu = \Omega(\sqrt{n}\log n) \cap \mathrm{poly}(n)$ with high probability finds the optimum of any $n$-dimensional jump function with jump size $k < \frac{1}{20}\ln n$ in $O(\mu\sqrt{n})$ iterations. Since it is known that the cGA with high probability needs at least $\Omega(\mu\sqrt{n} + n\log n)$ iterations to optimize the unimodal OneMax function, our result shows that the cGA, in contrast to most classic evolutionary algorithms, is here able to cross moderate-sized valleys of low fitness at no extra cost.
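For concreteness, here is a minimal Python sketch of the jump benchmark and the cGA as they are commonly defined in this line of work; the plain loop and the function names are illustrative choices of this sketch, not the paper's code.

```python
import random

def jump(x, k):
    """Jump_k benchmark: OneMax-like, but with a fitness valley of
    width k just below the all-ones optimum."""
    n, ones = len(x), sum(x)
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones  # the valley of low fitness

def cga(n, k, mu, max_iters=10**6):
    """Compact GA with hypothetical population size mu."""
    p = [0.5] * n  # frequency vector of the probabilistic model
    for t in range(max_iters):
        x = [int(random.random() < pi) for pi in p]
        y = [int(random.random() < pi) for pi in p]
        if jump(x, k) < jump(y, k):
            x, y = y, x  # make x the winner
        for i in range(n):
            if x[i] != y[i]:  # shift frequency by 1/mu toward the winner
                p[i] += 1 / mu if x[i] == 1 else -1 / mu
                p[i] = min(1 - 1 / n, max(1 / n, p[i]))  # border restriction
        if sum(x) == n:
            return t  # iterations until the optimum was sampled
    return None
```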
Our runtime guarantee improves over the recent upper bound $O(\mu n^{1.5}\log n)$, valid for $\mu = \Omega(n^{3.5+\varepsilon})$, of Hasenöhrl and Sutton (GECCO 2018). For the best choice of the hypothetical population size, their result gives a runtime guarantee of $O(n^{5+\varepsilon})$, whereas ours gives $O(n\log n)$.
We also provide a simple general method based on parallel runs that, under mild conditions, (i) overcomes the need to specify a suitable population size, yet gives a performance close to the one stemming from the best-possible population size, and (ii) transforms EDAs with high-probability performance guarantees into EDAs with similar bounds on the expected runtime.
Comment: 25 pages; full version of a paper to appear at GECCO 2019.
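A minimal sketch of one natural instantiation of such a parallel-run scheme follows. The doubling ladder of population sizes and the round-robin budget schedule are assumptions of this sketch (the paper's exact schedule may differ), and the make_eda/step interface is hypothetical.

```python
def parallel_runs(make_eda, is_optimum, base_mu=2, max_rounds=10**6):
    """Interleave EDA instances whose population sizes double from one
    instance to the next, until some instance samples the optimum."""
    instances = []
    for _ in range(max_rounds):
        # one new instance joins per round, so instances with small
        # population sizes accumulate more iterations than later, larger ones
        instances.append(make_eda(base_mu * 2 ** len(instances)))
        for eda in instances:
            x = eda.step()  # one iteration: sample offspring, update the model
            if is_optimum(x):
                return x
    return None
```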
From Understanding Genetic Drift to a Smart-Restart Parameter-less Compact Genetic Algorithm
One of the key difficulties in using estimation-of-distribution algorithms is
choosing the population size(s) appropriately: Too small values lead to genetic
drift, which can cause enormous difficulties. In the regime with no genetic
drift, however, the runtime is often roughly proportional to the population
size, which renders large population sizes inefficient.
Based on a recent quantitative analysis of which population sizes lead to
genetic drift, we propose a parameter-less version of the compact genetic
algorithm that automatically finds a suitable population size without spending
too much time in situations unfavorable due to genetic drift.
We prove a mathematical runtime guarantee for this algorithm and conduct an
extensive experimental analysis on four classic benchmark problems both without
and with additive centered Gaussian posterior noise. The former shows that
under a natural assumption, our algorithm has a performance very similar to the
one obtainable from the best problem-specific population size. The latter
confirms that missing the right population size in the original cGA can be
detrimental and that previous theory-based suggestions for the population size
can be far away from the right values; it also shows that our algorithm as well
as a previously proposed parameter-less variant of the cGA based on parallel
runs avoid such pitfalls. Comparing the two parameter-less approaches, ours
profits from its ability to abort runs which are likely to be stuck in a
genetic drift situation.
Comment: 4 figures; extended version of a paper appearing at GECCO 2020.
From Understanding Genetic Drift to a Smart-Restart Mechanism for Estimation-of-Distribution Algorithms
Estimation-of-distribution algorithms (EDAs) are optimization algorithms that
learn a distribution on the search space from which good solutions can be
sampled easily. A key parameter of most EDAs is the sample size (population
size). If the population size is too small, the update of the probabilistic
model builds on few samples, leading to the undesired effect of genetic drift.
Too large population sizes avoid genetic drift, but slow down the process.
Building on a recent quantitative analysis of how the population size leads
to genetic drift, we design a smart-restart mechanism for EDAs. By stopping
runs when the risk for genetic drift is high, it automatically runs the EDA in
good parameter regimes.
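As a rough illustration, the following Python sketch shows one way such a restart logic can look, assuming a run_eda(mu, budget) routine that reports failure on timeout. The quadratic drift-safe budget reflects the kind of threshold known for the cGA, and the constant beta is an illustrative assumption, not the paper's exact choice.

```python
def smart_restart(run_eda, mu0=2, beta=16, max_restarts=32):
    """Restart scheme: give each run a budget below the genetic-drift
    threshold, then retry with a doubled population size."""
    mu = mu0
    for _ in range(max_restarts):
        budget = beta * mu * mu  # drift-safe budget; quadratic in mu for the cGA
        solution = run_eda(mu, budget)  # returns a solution, or None on timeout
        if solution is not None:
            return solution
        mu *= 2  # risk of genetic drift was high: restart with a larger model
    return None
```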
Via a mathematical runtime analysis, we prove a general performance guarantee
for this smart-restart scheme. This in particular shows that in many situations
where the optimal (problem-specific) parameter values are known, the restart
scheme automatically finds these, leading to asymptotically optimal
performance.
We also conduct an extensive experimental analysis. On four classic benchmark
problems, we clearly observe the critical influence of the population size on
the performance, and we find that the smart-restart scheme leads to a
performance close to the one obtainable with optimal parameter values. Our
results also show that previous theory-based suggestions for the optimal
population size can be far from the optimal ones, leading to a performance
clearly inferior to the one obtained via the smart-restart scheme. We also
conduct experiments with PBIL (cross-entropy algorithm) on two combinatorial
optimization problems from the literature, the max-cut problem and the
bipartition problem. Again, we observe that the smart-restart mechanism finds
much better values for the population size than those suggested in the
literature, leading to a much better performance.
Comment: Accepted for publication in "Journal of Machine Learning Research". Extended version of our GECCO 2020 paper. This article supersedes arXiv:2004.0714.
On the limitations of the univariate marginal distribution algorithm to deception and where bivariate EDAs might help
We introduce a new benchmark problem called Deceptive Leading Blocks (DLB) to
rigorously study the runtime of the Univariate Marginal Distribution Algorithm
(UMDA) in the presence of epistasis and deception. We show that simple
Evolutionary Algorithms (EAs) outperform the UMDA unless the selective pressure $\mu/\lambda$ is extremely high, where $\mu$ and $\lambda$ are the parent and offspring population sizes, respectively. More precisely, we show that the UMDA
with a parent population size of $\mu = \Omega(\log n)$ has an expected runtime of $e^{\Omega(\mu)}$ on the DLB problem assuming any selective pressure $\frac{\mu}{\lambda} \geq \frac{14}{1000}$, as opposed to the expected runtime of $O(n\lambda\log\lambda + n^3)$ for the non-elitist $(\mu,\lambda)$ EA with $\mu/\lambda \leq 1/e$. These results illustrate
inherent limitations of univariate EDAs against deception and epistasis, which
are common characteristics of real-world problems. In contrast, empirical
evidence reveals the efficiency of the bi-variate MIMIC algorithm on the DLB
problem. Our results suggest that one should consider EDAs with more complex
probabilistic models when optimising problems with some degree of epistasis and
deception.
Comment: To appear in the 15th ACM/SIGEVO Workshop on Foundations of Genetic Algorithms (FOGA XV), Potsdam, Germany.
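For concreteness, a minimal Python sketch of the UMDA with the usual border restriction; this is the textbook scheme the abstract refers to, with names and loop bounds chosen here for illustration.

```python
import random

def umda(f, n, mu, lam, max_iters=10**4):
    """Univariate Marginal Distribution Algorithm (maximization)."""
    p = [0.5] * n  # marginal probability of sampling a 1 at each position
    best = None
    for _ in range(max_iters):
        pop = [[int(random.random() < pi) for pi in p] for _ in range(lam)]
        pop.sort(key=f, reverse=True)
        selected = pop[:mu]  # truncation selection: selective pressure mu/lam
        for i in range(n):
            freq = sum(x[i] for x in selected) / mu
            p[i] = min(1 - 1 / n, max(1 / n, freq))  # border restriction
        if best is None or f(pop[0]) > f(best):
            best = pop[0]
    return best
```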
Self-Adjusting Evolutionary Algorithms for Multimodal Optimization
Recent theoretical research has shown that self-adjusting and self-adaptive
mechanisms can provably outperform static settings in evolutionary algorithms
for binary search spaces. However, the vast majority of these studies focuses
on unimodal functions which do not require the algorithm to flip several bits
simultaneously to make progress. In fact, existing self-adjusting algorithms
are not designed to detect local optima and offer no obvious benefit for crossing large Hamming gaps.
We suggest a mechanism called stagnation detection that can be added as a
module to existing evolutionary algorithms (both with and without prior
self-adjusting algorithms). Added to a simple (1+1) EA, we prove an expected
runtime on the well-known Jump benchmark that corresponds to an asymptotically
optimal parameter setting and outperforms other mechanisms for multimodal
optimization like heavy-tailed mutation. We also investigate the module in the
context of a self-adjusting $(1+\lambda)$ EA and show that it combines the
previous benefits of this algorithm on unimodal problems with more efficient
multimodal optimization.
To explore the limitations of the approach, we additionally present an
example where both self-adjusting mechanisms, including stagnation detection,
do not help to find a beneficial setting of the mutation rate. Finally, we
investigate our module for stagnation detection experimentally.
Comment: 26 pages; full version of a paper appearing at GECCO 2020.
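A minimal Python sketch of the stagnation-detection idea on top of a (1+1) EA follows. The failure threshold used here, roughly $(en/r)^r \ln n$ steps before the mutation strength $r$ is raised, and the cap on $r$ are simplified stand-ins for the paper's parameterization, which may differ.

```python
import math
import random

def sd_one_plus_one_ea(f, n, max_iters=10**6):
    """(1+1) EA with stagnation detection: raise the mutation strength r
    once the current strength has failed for long enough."""
    x = [random.randint(0, 1) for _ in range(n)]
    r, fails = 1, 0
    for _ in range(max_iters):
        y = [1 - b if random.random() < r / n else b for b in x]  # rate r/n
        if f(y) > f(x):
            x, r, fails = y, 1, 0  # strict improvement: reset strength
        else:
            fails += 1
            # simplified threshold, compared in log-space to avoid overflow:
            # raise r after roughly (e*n/r)^r * ln(n) unsuccessful steps
            if math.log(fails) > r * math.log(math.e * n / r) + math.log(math.log(n)):
                r, fails = min(r + 1, n // 2), 0
    return x
```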