Analysis of Noisy Evolutionary Optimization When Sampling Fails
In noisy evolutionary optimization, sampling is a common strategy for dealing
with noise. Under the sampling strategy, the fitness of a solution is evaluated
independently multiple times (the number of evaluations is called the
\emph{sample size}), and its true fitness is then approximated by the average
of these evaluations. Previous studies on
sampling are mainly empirical. In this paper, we first investigate the effect
of sample size from a theoretical perspective. By analyzing the (1+1)-EA on the
noisy LeadingOnes problem, we show that as the sample size increases, the
running time can reduce from exponential to polynomial, but then return to
exponential. This suggests that a proper sample size is crucial in practice.
Then, we investigate what strategies can work when sampling with any fixed
sample size fails. By two illustrative examples, we prove that using parent or
offspring populations can be better. Finally, we construct an artificial noisy
example to show that when using neither sampling nor populations is effective,
adaptive sampling (i.e., sampling with an adaptive sample size) can work. This
provides, for the first time, theoretical support for the use of adaptive
sampling.
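The sampling strategy described above can be sketched as a minimal (1+1)-EA on a noisy LeadingOnes variant. The one-bit prior noise model, the noise probability, and all parameter values below are illustrative assumptions, not the exact setting analyzed in the paper:

```python
import random

def leading_ones(x):
    """Number of consecutive 1-bits from the left."""
    count = 0
    for bit in x:
        if bit == 1:
            count += 1
        else:
            break
    return count

def noisy_fitness(x, noise_prob=0.1):
    """One-bit prior noise (an assumed model): with probability
    noise_prob, a uniformly random bit is flipped before evaluation."""
    y = list(x)
    if random.random() < noise_prob:
        i = random.randrange(len(y))
        y[i] = 1 - y[i]
    return leading_ones(y)

def sampled_fitness(x, k, noise_prob=0.1):
    """Average of k independent noisy evaluations (sample size k)."""
    return sum(noisy_fitness(x, noise_prob) for _ in range(k)) / k

def one_plus_one_ea(n, k, max_evals=100_000, noise_prob=0.1):
    """(1+1)-EA with standard bit mutation and sampling."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = sampled_fitness(x, k, noise_prob)
    evals = k
    # Termination is checked against the noise-free fitness purely
    # for demonstration; the algorithm itself only sees noisy values.
    while evals < max_evals and leading_ones(x) < n:
        y = [b ^ (random.random() < 1 / n) for b in x]  # flip each bit w.p. 1/n
        fy = sampled_fitness(y, k, noise_prob)
        evals += k
        if fy >= fx:
            x, fx = y, fy
    return leading_ones(x), evals
```

Varying `k` in such a sketch mirrors the paper's question: too small a sample size leaves the averaged fitness too noisy, while too large a one wastes evaluations per generation.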
An Exponential Lower Bound for the Runtime of the cGA on Jump Functions
In the first runtime analysis of an estimation-of-distribution algorithm
(EDA) on the multi-modal jump function class, Hasen\"ohrl and Sutton (GECCO
2018) proved that the runtime of the compact genetic algorithm with suitable
parameter choice on jump functions with high probability is at most polynomial
(in the dimension) if the jump size is at most logarithmic (in the dimension),
and is at most exponential in the jump size if the jump size is
super-logarithmic. The exponential runtime guarantee was achieved with a
hypothetical population size that is also exponential in the jump size.
Consequently, this setting cannot lead to a better runtime.
In this work, we show that any choice of the hypothetical population size
leads to a runtime that, with high probability, is at least exponential in the
jump size. This result might be the first non-trivial exponential lower bound
for EDAs that holds for arbitrary parameter settings.
Comment: To appear in the Proceedings of FOGA 2019. arXiv admin note: text
overlap with arXiv:1903.1098
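For readers unfamiliar with the setting, a minimal sketch of the compact genetic algorithm on a jump function may help fix ideas. The update rule and margin clamping follow the standard cGA; the concrete parameter values are illustrative assumptions:

```python
import random

def jump(x, k):
    """Jump function: behaves like OneMax shifted by k outside the
    gap region, and is deceptive (decreasing) inside the gap of width k."""
    n, ones = len(x), sum(x)
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones

def cga(n, k, K, max_iters=200_000):
    """Compact GA with hypothetical population size K: maintains one
    frequency per bit, samples two offspring, and shifts each differing
    frequency by 1/K toward the better sample."""
    p = [0.5] * n
    for it in range(max_iters):
        x = [int(random.random() < pi) for pi in p]
        y = [int(random.random() < pi) for pi in p]
        if jump(x, k) < jump(y, k):
            x, y = y, x  # ensure x is the better sample
        for i in range(n):
            if x[i] != y[i]:
                p[i] += (1 / K) if x[i] == 1 else -(1 / K)
                # keep frequencies inside the usual margins [1/n, 1-1/n]
                p[i] = min(max(p[i], 1 / n), 1 - 1 / n)
        if sum(x) == n:
            return it + 1  # iterations until the optimum was sampled
    return max_iters
```

The lower bound in the paper concerns how the choice of `K` interacts with the gap width `k`; the sketch only shows the mechanics of the algorithm, not the proof.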
First Steps Towards a Runtime Comparison of Natural and Artificial Evolution
Evolutionary algorithms (EAs) form a popular optimisation paradigm inspired
by natural evolution. In recent years the field of evolutionary computation has
developed a rigorous analytical theory to analyse their runtime on many
illustrative problems. Here we apply this theory to a simple model of natural
evolution. In the Strong Selection Weak Mutation (SSWM) evolutionary regime the
time between occurrence of new mutations is much longer than the time it takes
for a new beneficial mutation to take over the population. In this situation,
the population only contains copies of one genotype and evolution can be
modelled as a (1+1)-type process where the probability of accepting a new
genotype (improvements or worsenings) depends on the change in fitness.
We present an initial runtime analysis of SSWM, quantifying its performance
for various parameters and investigating differences to the (1+1)EA. We show
that SSWM can have a moderate advantage over the (1+1)EA at crossing fitness
valleys and study an example where SSWM outperforms the (1+1)EA by taking
advantage of information on the fitness gradient.
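SSWM's non-elitist acceptance rule can be sketched with a Kimura-style fixation probability as a function of the fitness change; the parameters N and beta below are illustrative assumptions, not values from the paper:

```python
import math
import random

def p_fix(delta_f, N=10.0, beta=1.0):
    """Fixation probability of a mutant with fitness change delta_f,
    used as SSWM's acceptance rule (N and beta are illustrative)."""
    if delta_f == 0:
        return 1.0 / N  # limit of the expression as delta_f -> 0
    num = 1.0 - math.exp(-2.0 * beta * delta_f)
    den = 1.0 - math.exp(-2.0 * N * beta * delta_f)
    return num / den

def sswm_step(x, fitness, N=10.0, beta=1.0):
    """One SSWM step on a bit string: flip one uniformly random bit and
    accept with probability p_fix of the fitness change. Unlike the
    elitist (1+1)-EA, worsenings can be accepted (with small probability),
    and larger improvements are accepted more often than smaller ones."""
    y = list(x)
    i = random.randrange(len(y))
    y[i] = 1 - y[i]
    if random.random() < p_fix(fitness(y) - fitness(x), N, beta):
        return y
    return x
```

The key contrast with the (1+1)-EA is visible in `p_fix`: the acceptance probability grows with the size of the improvement, which is how SSWM can exploit fitness-gradient information.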
OneMax in Black-Box Models with Several Restrictions
Black-box complexity studies lower bounds for the efficiency of
general-purpose black-box optimization algorithms such as evolutionary
algorithms and other search heuristics. Different models exist, each one being
designed to analyze a different aspect of typical heuristics such as the memory
size or the variation operators in use. While most of the previous works focus
on one particular such aspect, we consider in this work how the combination of
several algorithmic restrictions influences the black-box complexity. Our
testbed is the class of so-called OneMax functions, a classical set of test functions that
is intimately related to classic coin-weighing problems and to the board game
Mastermind.
We analyze in particular the combined memory-restricted ranking-based
black-box complexity of OneMax for different memory sizes. While its isolated
memory-restricted as well as its ranking-based black-box complexity for bit
strings of length $n$ is only of order $n/\log n$, the combined model does not
allow for algorithms being faster than linear in $n$, as can be seen by
standard information-theoretic considerations. We show that this linear bound
is indeed asymptotically tight. Similar results are obtained for other memory-
and offspring-sizes. Our results also apply to the (Monte Carlo) complexity of
OneMax in the recently introduced elitist model, in which only the best-so-far
solution can be kept in the memory. Finally, we also provide improved lower
bounds for the complexity of OneMax in the regarded models.
Our result enlivens the quest for natural evolutionary algorithms optimizing
OneMax in $o(n \log n)$ iterations.
Comment: This is the full version of a paper accepted to GECCO 201
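To illustrate the restrictions being combined, here is a sketch of a memory-one, ranking-based heuristic (randomized local search) on OneMax: it stores a single best-so-far solution and uses only the comparison of parent and offspring, never the fitness value itself. This simple heuristic needs on the order of n log n steps, so it does not match the linear bound discussed above; it only shows what the restricted model permits an algorithm to observe:

```python
import random

def onemax(x):
    """OneMax: the number of 1-bits in the string."""
    return sum(x)

def rls_ranking_based(n, seed=None):
    """Randomized local search under the combined restrictions:
    the memory holds exactly one solution (elitist, best-so-far),
    and only the ranking of parent vs. offspring is observed."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    queries = 1
    while onemax(x) < n:
        i = rng.randrange(n)
        y = x[:]
        y[i] ^= 1  # flip exactly one uniformly random bit
        queries += 1
        if onemax(y) >= onemax(x):  # only the comparison outcome is used
            x = y
    return queries
```

A coupon-collector argument gives the familiar Theta(n log n) expected query count for this sketch; the point of the paper is that even the best algorithm under the combined restrictions cannot go below linear time.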