The Univariate Marginal Distribution Algorithm Copes Well With Deception and Epistasis
In their recent work, Lehre and Nguyen (FOGA 2019) show that the univariate
marginal distribution algorithm (UMDA) needs time exponential in the parent
population size to optimize the DeceptiveLeadingBlocks (DLB) problem. They
conclude from this result that univariate EDAs have difficulties with deception
and epistasis.
In this work, we show that this negative finding is caused by an unfortunate
choice of the parameters of the UMDA. When the population sizes are chosen
large enough to prevent genetic drift, then the UMDA optimizes the DLB problem
with high probability with at most λ(n/2 + 2e ln n) fitness
evaluations. Since an offspring population size λ of order n log n can
prevent genetic drift, the UMDA can solve the DLB problem with
O(n^2 log n) fitness evaluations. In contrast, for classic evolutionary
algorithms no better runtime guarantee than O(n^3) is known (which we
prove to be tight for the (1+1) EA), so our result rather suggests that
the UMDA can cope well with deception and epistasis.
From a broader perspective, our result shows that the UMDA can cope better
with local optima than evolutionary algorithms; such a result was previously
known only for the compact genetic algorithm. Together with the lower bound of
Lehre and Nguyen, our result for the first time rigorously proves that running
EDAs in the regime with genetic drift can lead to drastic performance losses.
Improved Runtime Bounds for the Univariate Marginal Distribution Algorithm via Anti-Concentration
Unlike traditional evolutionary algorithms which produce offspring via
genetic operators, Estimation of Distribution Algorithms (EDAs) sample
solutions from probabilistic models which are learned from selected
individuals. It is hoped that EDAs may improve optimisation performance on
epistatic fitness landscapes by learning variable interactions. However, hardly
any rigorous results are available to support claims about the performance of
EDAs, even for fitness functions without epistasis. The expected runtime of the
Univariate Marginal Distribution Algorithm (UMDA) on OneMax was recently shown
to be O(nλ log λ) by Dang and Lehre (GECCO 2015). Later, Krejca and Witt
(FOGA 2017) proved the lower bound Ω(λ√n + n log n) via an involved drift
analysis.
We prove an O(nλ) bound, given some restrictions on the population size.
This implies the tight bound O(n log n) when λ = Θ(log n), matching the
runtime of classical EAs. Our analysis uses the level-based theorem and
anti-concentration properties of the Poisson-Binomial distribution. We expect
that these generic methods will facilitate further analysis of EDAs. Comment:
19 pages, 1 figure
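The EDA scheme sketched above — sample λ offspring from a learned product distribution, select the μ fittest, and re-estimate the marginals — can be illustrated with a minimal UMDA in Python. This is an illustrative sketch only (the parameter defaults and best-so-far bookkeeping are assumptions, not taken from the papers); the marginal borders [1/n, 1 − 1/n] mirror the restriction discussed in the OneMax analyses:

```python
import random

def onemax(x):
    """OneMax: number of 1-bits in the string."""
    return sum(x)

def umda(fitness, n, mu, lam, budget):
    """Minimal UMDA sketch: sample lam offspring from a product
    distribution, keep the mu fittest, refit the marginals."""
    p = [0.5] * n          # marginal probability of a 1 at each position
    best = None
    evals = 0
    while evals < budget:
        pop = [[1 if random.random() < p[i] else 0 for i in range(n)]
               for _ in range(lam)]
        evals += lam
        pop.sort(key=fitness, reverse=True)
        for i in range(n):
            freq = sum(x[i] for x in pop[:mu]) / mu
            # confine marginals to [1/n, 1 - 1/n] to avoid fixation
            p[i] = min(max(freq, 1 / n), 1 - 1 / n)
        if best is None or fitness(pop[0]) > fitness(best):
            best = pop[0]
    return best
```

On OneMax the marginals of the selected individuals quickly drift toward 1, which is exactly the regime the runtime analyses above quantify.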
Upper Bounds on the Runtime of the Univariate Marginal Distribution Algorithm on OneMax
A runtime analysis of the Univariate Marginal Distribution Algorithm (UMDA)
is presented on the OneMax function for wide ranges of its parameters μ and
λ. If μ ≥ c log n for some constant c > 0 and λ = (1 + Θ(1))μ, a general
bound O(λn) on the expected runtime is obtained. This bound crucially
assumes that all marginal probabilities of the algorithm are confined to the
interval [1/n, 1 − 1/n]. If μ ≥ c'√n log n for a constant c' > 0 and
λ = (1 + Θ(1))μ, the behavior of the algorithm changes and the bound on the
expected runtime becomes O(λ√n), which typically even holds if the borders
on the marginal probabilities are omitted.
The results supplement the recently derived lower bound Ω(λ√n + n log n) by
Krejca and Witt (FOGA 2017) and turn out to be tight for the two very
different values μ = c log n and μ = c'√n log n. They also improve the
previously best known upper bound O(nλ log λ) by Dang and Lehre (GECCO
2015). Comment: Version 4: added illustrations and experiments; improved presentation
in Section 2.2; to appear in Algorithmica; the final publication is available
at Springer via http://dx.doi.org/10.1007/s00453-018-0463-
On the limitations of the univariate marginal distribution algorithm to deception and where bivariate EDAs might help
We introduce a new benchmark problem called Deceptive Leading Blocks (DLB) to
rigorously study the runtime of the Univariate Marginal Distribution Algorithm
(UMDA) in the presence of epistasis and deception. We show that simple
Evolutionary Algorithms (EAs) outperform the UMDA unless the selective
pressure μ/λ is extremely high, where μ and λ are the parent and offspring
population sizes, respectively. More precisely, we show that the UMDA with a
parent population size of μ = Ω(log n) has an expected runtime of exp(Ω(μ))
on the DLB problem assuming any selective pressure μ/λ ≥ 14/1000, as
opposed to the expected runtime of O(nλ log λ + n^3) for the non-elitist
(μ,λ) EA with μ/λ ≤ 1/e. These results illustrate
inherent limitations of univariate EDAs against deception and epistasis, which
are common characteristics of real-world problems. In contrast, empirical
evidence reveals the efficiency of the bi-variate MIMIC algorithm on the DLB
problem. Our results suggest that one should consider EDAs with more complex
probabilistic models when optimising problems with some degree of epistasis and
deception. Comment: To appear in the 15th ACM/SIGEVO Workshop on Foundations
of Genetic Algorithms (FOGA XV), Potsdam, Germany
Runtime analysis of the univariate marginal distribution algorithm under low selective pressure and prior noise
We perform a rigorous runtime analysis for the Univariate Marginal
Distribution Algorithm on the LeadingOnes function, a well-known benchmark
function in the theory community of evolutionary computation with a high
correlation between decision variables. For a problem instance of size n, the
currently best known upper bound on the expected runtime is O(nλ log λ + n^2)
(Dang and Lehre, GECCO 2015), while a lower bound necessary to understand how
the algorithm copes with variable dependencies is still missing. Motivated by
this, we show that the algorithm requires a runtime of exp(Ω(μ)) with high
probability and in expectation if the selective pressure is low; otherwise,
we obtain a lower bound of Ω(nλ/log(λ − μ)) on the expected runtime.
Furthermore, we for the first time consider the algorithm on the LeadingOnes
function under a prior noise model and obtain an O(n^2) expected runtime for the
optimal parameter settings. In the end, our theoretical results are accompanied
by empirical findings, not only matching with rigorous analyses but also
providing new insights into the behaviour of the algorithm. Comment: To appear
at GECCO 2019, Prague, Czech Republic
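For concreteness, LeadingOnes and one common prior noise model from the runtime-analysis literature (with probability p, a uniformly chosen bit is flipped before each evaluation) can be written as follows; this is a sketch of a standard model and is not claimed to match the paper's exact noise model:

```python
import random

def leading_ones(x):
    """LeadingOnes: length of the longest prefix consisting of 1-bits."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count

def with_prior_noise(fitness, p):
    """Wrap a fitness function with one-bit prior noise: with probability
    p, flip a uniformly random bit before evaluating (the search point
    itself is left unchanged)."""
    def noisy(x):
        y = list(x)
        if random.random() < p:
            i = random.randrange(len(y))
            y[i] = 1 - y[i]
        return fitness(y)
    return noisy
```

Because the flip happens before evaluation only, the same search point can receive different fitness values on repeated evaluations, which is what makes the noisy analysis delicate.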
From Understanding Genetic Drift to a Smart-Restart Mechanism for Estimation-of-Distribution Algorithms
Estimation-of-distribution algorithms (EDAs) are optimization algorithms that
learn a distribution on the search space from which good solutions can be
sampled easily. A key parameter of most EDAs is the sample size (population
size). If the population size is too small, the update of the probabilistic
model builds on few samples, leading to the undesired effect of genetic drift.
Too large population sizes avoid genetic drift, but slow down the process.
Building on a recent quantitative analysis of how the population size leads
to genetic drift, we design a smart-restart mechanism for EDAs. By stopping
runs when the risk for genetic drift is high, it automatically runs the EDA in
good parameter regimes.
Via a mathematical runtime analysis, we prove a general performance guarantee
for this smart-restart scheme. This in particular shows that in many situations
where the optimal (problem-specific) parameter values are known, the restart
scheme automatically finds these, leading to the asymptotically optimal
performance.
We also conduct an extensive experimental analysis. On four classic benchmark
problems, we clearly observe the critical influence of the population size on
the performance, and we find that the smart-restart scheme leads to a
performance close to the one obtainable with optimal parameter values. Our
results also show that previous theory-based suggestions for the optimal
population size can be far from the optimal ones, leading to a performance
clearly inferior to the one obtained via the smart-restart scheme. We also
conduct experiments with PBIL (cross-entropy algorithm) on two combinatorial
optimization problems from the literature, the max-cut problem and the
bipartition problem. Again, we observe that the smart-restart mechanism finds
much better values for the population size than those suggested in the
literature, leading to a much better performance.Comment: Accepted for publication in "Journal of Machine Learning Research".
Extended version of our GECCO 2020 paper. This article supersedes
arXiv:2004.0714
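The restart mechanism described above can be sketched as a wrapper that aborts a run once a budget tied to the drift-safe regime is spent and restarts with a larger population size. All names, the growth factor, and the quadratic budget rule below are illustrative assumptions, not the paper's exact scheme:

```python
def smart_restart(run_eda, lam0=8, growth=2, budget_factor=4, max_rounds=12):
    """Run the EDA with geometrically increasing population sizes.
    run_eda(lam, budget) is expected to return a solution on success
    and None when the budget is exhausted (suggesting genetic drift)."""
    lam = lam0
    for _ in range(max_rounds):
        # Budget grows with lam; quadratic growth is an assumption here.
        result = run_eda(lam, budget=budget_factor * lam * lam)
        if result is not None:
            return result, lam
        lam *= growth        # restart with a larger population
    return None, lam
```

Because the per-round budget grows geometrically, the total work is dominated by the first successful round, which is how such a scheme can match the runtime of a well-chosen fixed population size up to constant factors.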
Coherence freeze in an optical lattice investigated via pump-probe spectroscopy
Motivated by our observation of fast echo decay and a surprising coherence
freeze, we have developed a pump-probe spectroscopy technique for vibrational
states of ultracold Rb atoms in an optical lattice to gain information
about the memory dynamics of the system. We use pump-probe spectroscopy to
monitor the time-dependent changes of frequencies experienced by atoms and to
characterize the probability distribution of these frequency trajectories. We
show that the inferred distribution, unlike a naive microscopic model of the
lattice, correctly predicts the main features of the observed echo decay.
Comment: 4 pages, 5 figures