Level-Based Analysis of the Population-Based Incremental Learning Algorithm
The Population-Based Incremental Learning (PBIL) algorithm uses a convex
combination of the current model and the empirical model to construct the next
model, which is then sampled to generate offspring. The Univariate Marginal
Distribution Algorithm (UMDA) is a special case of the PBIL, where the current
model is ignored. Dang and Lehre (GECCO 2015) showed that UMDA can optimise
LeadingOnes efficiently. It remained an open question whether the PBIL performs
equally well. Here, by applying the level-based theorem together with the
Dvoretzky-Kiefer-Wolfowitz inequality, we show that the PBIL optimises the
LeadingOnes function in expected time for a population size , which matches the bound
for the UMDA. Finally, we show that the result carries over to BinVal, giving
the first runtime result for the PBIL on the BinVal problem.
Comment: To appear
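The model update described in this abstract admits a compact illustration. The following is a minimal Python sketch under stated assumptions, not the paper's exact parameterisation: it assumes a smoothing parameter rho and truncation selection of the mu best out of lam offspring. Setting rho = 1 discards the current model entirely and recovers the UMDA special case.

```python
import random

def leading_ones(x):
    """Length of the longest prefix of consecutive ones."""
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def pbil_step(model, rho, lam, mu, fitness):
    """One PBIL iteration (sketch): sample lam offspring from the
    current model, select the mu best, and move the model towards
    their empirical bit-frequencies by a convex combination."""
    population = [[1 if random.random() < p else 0 for p in model]
                  for _ in range(lam)]
    population.sort(key=fitness, reverse=True)
    best = population[:mu]
    n = len(model)
    empirical = [sum(x[i] for x in best) / mu for i in range(n)]
    # Convex combination of current and empirical model;
    # rho = 1 ignores the current model, i.e. the UMDA case.
    return [(1 - rho) * p + rho * q for p, q in zip(model, empirical)]
```

Iterating `pbil_step` from the uniform model `[0.5] * n` gives a toy run of the algorithm on LeadingOnes.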
Runtime analysis of the univariate marginal distribution algorithm under low selective pressure and prior noise
We perform a rigorous runtime analysis for the Univariate Marginal
Distribution Algorithm on the LeadingOnes function, a well-known benchmark
function in the theory community of evolutionary computation with a high
correlation between decision variables. For a problem instance of size , the
currently best known upper bound on the expected runtime is
(Dang and Lehre, GECCO 2015), while a
lower bound necessary to understand how the algorithm copes with variable
dependencies is still missing. Motivated by this, we show that the algorithm
requires a runtime with high probability and in expectation
if the selective pressure is low; otherwise, we obtain a lower bound of
on the expected runtime.
Furthermore, we consider, for the first time, the algorithm on the LeadingOnes
function under a prior noise model and obtain an expected runtime for the
optimal parameter settings. Finally, our theoretical results are accompanied
by empirical findings that not only match the rigorous analyses but also
provide new insights into the behaviour of the algorithm.
Comment: To appear at GECCO 2019, Prague, Czech Republic
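The abstract does not spell out the prior noise model, so the following Python sketch assumes the common one-bit prior noise: with some probability p, a uniformly random bit of the search point is flipped before the fitness is evaluated, while the search point itself stays unchanged.

```python
import random

def leading_ones(x):
    """Length of the longest prefix of consecutive ones."""
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def noisy_leading_ones(x, p):
    """One-bit prior noise (assumed model): with probability p,
    flip a uniformly random bit of a copy of x before evaluating;
    the original search point is never altered."""
    y = list(x)
    if random.random() < p:
        i = random.randrange(len(y))
        y[i] = 1 - y[i]
    return leading_ones(y)
```

With p = 0 the noisy evaluation coincides with the exact one; with p = 1 exactly one bit is always perturbed, so the optimum can be misjudged.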
On the limitations of the univariate marginal distribution algorithm to deception and where bivariate EDAs might help
We introduce a new benchmark problem called Deceptive Leading Blocks (DLB) to
rigorously study the runtime of the Univariate Marginal Distribution Algorithm
(UMDA) in the presence of epistasis and deception. We show that simple
Evolutionary Algorithms (EAs) outperform the UMDA unless the selective pressure
is extremely high, where μ and λ denote the parent and
offspring population sizes, respectively. More precisely, we show that the UMDA
with a parent population size of has an expected runtime
of on the DLB problem under any selective pressure
, as opposed to the expected runtime
of for the non-elitist (μ,λ) EA. These results illustrate
inherent limitations of univariate EDAs against deception and epistasis, which
are common characteristics of real-world problems. In contrast, empirical
evidence reveals the efficiency of the bivariate MIMIC algorithm on the DLB
problem. Our results suggest that one should consider EDAs with more complex
probabilistic models when optimising problems with some degree of epistasis and
deception.
Comment: To appear in the 15th ACM/SIGEVO Workshop on Foundations of Genetic Algorithms (FOGA XV), Potsdam, Germany
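A function with the deceptive-leading-blocks structure described above can be sketched as follows. This is an illustrative reading of the construction, not necessarily the paper's exact definition: the bitstring is split into consecutive blocks of two bits, leading 11-blocks each contribute 2, and the first non-11 block contributes 1 when it is 00 (the deceptive reward) and 0 when it is 01 or 10.

```python
def dlb(x):
    """Deceptive Leading Blocks (sketch). Counts leading 11-blocks;
    within the first non-11 block, 00 scores higher than 01 or 10,
    which pulls univariate models away from the all-ones optimum."""
    n = len(x)
    assert n % 2 == 0, "block size 2 requires an even length"
    m = 0  # number of leading 11-blocks
    for i in range(0, n, 2):
        if x[i] == 1 and x[i + 1] == 1:
            m += 1
        else:
            # First non-11 block: 00 receives the deceptive reward.
            return 2 * m + (1 if x[i] == 0 and x[i + 1] == 0 else 0)
    return n  # all blocks are 11: the global optimum
```

Note that from a leading 00-block, fitness must temporarily drop (via 01 or 10) before the block can become 11, which is the source of deception for univariate models.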
Significance-based Estimation-of-Distribution Algorithms
Estimation-of-distribution algorithms (EDAs) are randomized search heuristics
that maintain a probabilistic model of the solution space. This model is
updated from iteration to iteration, based on the quality of the solutions
sampled according to the model. As previous works show, this short-term
perspective can lead to erratic updates of the model, in particular, to
bit-frequencies approaching a random boundary value. Such frequencies take long
to be moved back to the middle range, leading to significant performance
losses.
In order to overcome this problem, we propose a new EDA based on the classic
compact genetic algorithm (cGA) that takes into account a longer history of
samples and updates its model only with respect to information which it
classifies as statistically significant. We prove that this significance-based
compact genetic algorithm (sig-cGA) optimizes the commonly regarded benchmark
functions OneMax, LeadingOnes, and BinVal all in time, a result
shown for no other EDA or evolutionary algorithm so far.
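The frequency-update rule of the classic cGA that the sig-cGA builds on can be sketched as follows. This is a minimal illustration with the usual margins 1/n and 1 - 1/n; it deliberately omits the per-bit history and statistical-significance test that distinguish the sig-cGA, and K is the hypothetical population size.

```python
import random

def one_max(x):
    """Number of ones in the bitstring."""
    return sum(x)

def cga_step(freqs, K, fitness):
    """One step of the classic compact GA: sample two search points
    from the frequency vector, compare their fitness, and shift each
    frequency by 1/K towards the winner's bit value wherever the two
    samples disagree. Frequencies are capped at 1/n and 1 - 1/n."""
    n = len(freqs)
    x = [1 if random.random() < p else 0 for p in freqs]
    y = [1 if random.random() < p else 0 for p in freqs]
    if fitness(y) > fitness(x):
        x, y = y, x  # x is now the winner
    new = []
    for p, xi, yi in zip(freqs, x, y):
        if xi != yi:
            p += (1 / K) if xi == 1 else -(1 / K)
        new.append(min(1 - 1 / n, max(1 / n, p)))
    return new
```

Because every disagreeing bit moves by 1/K regardless of whether it caused the fitness difference, frequencies of irrelevant bits perform a random walk towards the margins, which is exactly the erratic behaviour the significance test is meant to suppress.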
For the recently proposed scGA -- an EDA that tries to prevent erratic model
updates by imposing a bias to the uniformly distributed model -- we prove that
it optimizes OneMax only in a time exponential in the hypothetical population
size . Similarly, we show that the convex search algorithm cannot
optimize OneMax in polynomial time.