On the Runtime of Randomized Local Search and Simple Evolutionary Algorithms for Dynamic Makespan Scheduling
Evolutionary algorithms have been frequently used for dynamic optimization
problems. With this paper, we contribute to the theoretical understanding of
this research area. We present the first computational complexity analysis of
evolutionary algorithms for a dynamic variant of a classical combinatorial
optimization problem, namely makespan scheduling. We study the model of a
strong adversary which is allowed to change one job at regular intervals.
Furthermore, we investigate the setting of random changes. Our results show
that randomized local search and a simple evolutionary algorithm are very
effective in dynamically tracking changes made to the problem instance.
Comment: Conference version appears at IJCAI 2015
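As a concrete illustration (a sketch of the setting, not code from the paper), randomized local search on two-machine makespan scheduling flips the machine assignment of one random job per step and keeps the move if the makespan does not get worse; in the dynamic setting, one job's processing time changes at regular intervals. The job count, processing-time range, and change interval below are arbitrary choices.

```python
import random

def makespan(assign, times):
    # Two-machine makespan: the larger of the two machine loads.
    load1 = sum(t for a, t in zip(assign, times) if a == 1)
    return max(load1, sum(times) - load1)

def rls_dynamic(times, steps, change_every, rng=random.Random(0)):
    n = len(times)
    assign = [rng.randrange(2) for _ in range(n)]  # job i -> machine 0 or 1
    for step in range(1, steps + 1):
        if step % change_every == 0:
            # Dynamic change: one job's processing time is replaced.
            times[rng.randrange(n)] = rng.randint(1, 100)
        child = assign[:]
        child[rng.randrange(n)] ^= 1  # move one random job to the other machine
        if makespan(child, times) <= makespan(assign, times):
            assign = child  # accept moves that do not worsen the makespan
    return assign
```

Accepting moves of equal makespan lets the search drift across plateaus, which is one reason such simple heuristics can re-track the optimum after a change.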
When Does Hillclimbing Fail on Monotone Functions: An entropy compression argument
Hillclimbing is an essential part of any optimization algorithm. An important
benchmark for hillclimbing algorithms on pseudo-Boolean functions is the class of (strictly) monotone functions, on which a surprising number
of hillclimbers fail to be efficient. For example, the (1+1)-Evolutionary
Algorithm is a standard hillclimber which flips each bit independently with
probability c/n in each round. Perhaps surprisingly, this algorithm shows a
phase transition: it optimizes any monotone pseudo-Boolean function in
quasilinear time if c < 1, but there are monotone functions for which the
algorithm needs exponential time if c is sufficiently large. But so far it was unclear whether
the threshold between the two regimes is at c = 1.
In this paper we show how Moser's entropy compression argument can be adapted
to this situation; that is, we show that a long runtime would allow us to
encode the random steps of the algorithm with fewer bits than their entropy.
Thus there exists a c_0 > 1 such that for all c < c_0 the
(1+1)-Evolutionary Algorithm with rate c/n finds the optimum in O(n log^2 n) steps in expectation.
Comment: 14 pages, no figures
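For reference, a minimal sketch of the hillclimber the abstract discusses, the (1+1)-EA with standard bit mutation at rate c/n; OneMax is the textbook monotone function used here, and the step budget is an arbitrary safeguard of this sketch.

```python
import random

def one_plus_one_ea(f, n, c, max_steps, rng=random.Random(1)):
    """(1+1)-EA with mutation rate c/n: flip each bit independently with
    probability c/n; keep the offspring if it is at least as fit."""
    x = [rng.randrange(2) for _ in range(n)]
    for step in range(max_steps):
        if sum(x) == n:  # for strictly monotone functions the optimum is all-ones
            return step
        y = [bit ^ (rng.random() < c / n) for bit in x]
        if f(y) >= f(x):
            x = y
    return max_steps

# OneMax, the simplest monotone function: fitness = number of one-bits.
steps = one_plus_one_ea(sum, n=50, c=0.8, max_steps=200_000)
```

With c = 0.8 < 1 this run sits in the quasilinear regime described above; the hard monotone instances only bite for larger c.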
Hardest Monotone Functions for Evolutionary Algorithms
The study of hardest and easiest fitness landscapes is an active area of
research. Recently, Kaufmann, Larcher, Lengler and Zou conjectured that for the
self-adjusting (1,λ)-EA, Adversarial Dynamic BinVal (ADBV) is the
hardest dynamic monotone function to optimize. We introduce the function
Switching Dynamic BinVal (SDBV), which coincides with ADBV whenever the number
of remaining zeros in the search point is strictly less than √n, where n
denotes the dimension of the search space. We show, using a combinatorial
argument, that for the (1+1)-EA with any mutation rate p, SDBV is
drift-minimizing among the class of dynamic monotone functions. Our
construction provides the first explicit example of an instance of the
partially-ordered evolutionary algorithm (PO-EA) model with parameterized
pessimism introduced by Colin, Doerr and F\'erey, building on work of Jansen.
We further show that the (1+1)-EA optimizes SDBV in Θ(n^{3/2})
generations. Our simulations demonstrate matching runtimes for both the static and
the self-adjusting (1,λ)- and (1+λ)-EA. We further show, using an
example of fixed dimension, that drift-minimization does not equal maximal
runtime.
OneMax is not the Easiest Function for Fitness Improvements
We study the (1 : s+1) success rule for controlling the population size of
the (1,λ)-EA. It was shown by Hevia Fajardo and Sudholt that this
parameter control mechanism can run into problems for large s if the fitness
landscape is too easy. They conjectured that this problem is worst for the
OneMax benchmark, since in some well-established sense OneMax is known to be
the easiest fitness landscape. In this paper we disprove this conjecture and
show that OneMax is not the easiest fitness landscape with respect to finding
improving steps.
As a consequence, we show that there exist s and ε > 0 such that
the self-adjusting (1,λ)-EA with the (1 : s+1)-rule optimizes OneMax
efficiently when started with εn zero-bits, but does not find the
optimum in polynomial time on Dynamic BinVal. Hence, we show that there are
landscapes where the problem of the (1 : s+1)-rule for controlling the
population size of the (1,λ)-EA is more severe than for OneMax.
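Dynamic BinVal, the hard instance in the statement above, is commonly defined by re-drawing a uniformly random permutation of the binary weights in every generation. The sketch below assumes that standard definition (check the paper for the exact variant used there).

```python
import random

def dynamic_binval(n, rng=random.Random(2)):
    """Returns (next_generation, f): next_generation() re-draws the weight
    permutation; f evaluates BinVal under the current weights."""
    perm = list(range(n))

    def next_generation():
        rng.shuffle(perm)  # a fresh random weight assignment each generation

    def f(x):
        # The bit at position perm[i] carries binary weight 2^(n-1-i).
        return sum(x[perm[i]] << (n - 1 - i) for i in range(n))

    return next_generation, f
```

Every instantaneous f is monotone (flipping any zero to one increases the value), so such dynamic functions stay inside the dynamic monotone framework of these papers.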
Self-adjusting Population Sizes for the (1,λ)-EA on Monotone Functions
We study the (1,λ)-EA with mutation rate c/n for c ≤ 1, where
the population size λ is adaptively controlled with the (1 : s+1)-success rule.
Recently, Hevia Fajardo and Sudholt have shown that this setup with c = 1 is
efficient on OneMax for s < 1, but inefficient if s ≥ 18. Surprisingly,
the hardest part is not close to the optimum, but rather at linear distance. We
show that this behavior is not specific to OneMax. If s is small, then the
algorithm is efficient on all monotone functions, and if s is large, then it
needs superpolynomial time on all monotone functions. In the former case, for
c < 1 we show an O(n) upper bound for the number of generations and O(n log n)
for the number of function evaluations; for c = 1 we show O(n log n)
generations and O(n^2 log log n) evaluations. We also show formally that
optimization is always fast, regardless of s, if the algorithm starts in
proximity of the optimum. All results also hold in a dynamic environment where
the fitness function changes in each generation.
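The (1 : s+1)-success rule can be sketched as follows: λ shrinks on an improving generation and grows otherwise, so it stays balanced when roughly one generation in s+1 succeeds. The update factor F, the rounding of λ, and the OneMax test bed are assumptions of this sketch, not the paper's exact setup.

```python
import random

def self_adjusting_ea(n, s, F=1.5, c=1.0, max_gens=50_000, rng=random.Random(3)):
    """Self-adjusting (1,lambda)-EA on OneMax with the (1 : s+1)-success rule."""
    x = [rng.randrange(2) for _ in range(n)]
    lam = 1.0
    for gen in range(1, max_gens + 1):
        offspring = [[bit ^ (rng.random() < c / n) for bit in x]
                     for _ in range(max(1, round(lam)))]
        best = max(offspring, key=sum)
        success = sum(best) > sum(x)
        x = best  # comma selection: the parent is always replaced
        # (1 : s+1)-rule: shrink lambda on success, grow it on failure.
        lam = max(1.0, lam / F) if success else lam * F ** (1 / s)
        if sum(x) == n:
            return gen
    return max_gens
```

Because of comma selection the parent can be lost when λ is small, which is exactly why large s (slow growth of λ after failures) becomes dangerous.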
A Computational View on Natural Evolution: On the Rigorous Analysis of the Speed of Adaptation
Inspired by Darwin’s ideas, Turing (1948) proposed evolutionary search as an automated problem-solving approach. Mimicking natural evolution, evolutionary algorithms evolve a set of solutions through the repeated application of the evolutionary operators (mutation, recombination and selection). Evolutionary algorithms belong to the family of black-box algorithms, which are general-purpose optimisation tools. They are typically used when no
good specific algorithm is known for the problem at hand and they have been reported to be surprisingly effective (Eiben and Smith, 2015; Sarker et al., 2002).
Interestingly, although evolutionary algorithms are heavily inspired by natural evolution, their study has deviated from the study of evolution by the population genetics community.
We believe that this is a missed opportunity and that both fields can benefit from an interdisciplinary collaboration. The question of how long it takes for a natural population to evolve complex adaptations has fascinated researchers for decades. We will argue that this is an equivalent research question to the runtime analysis of algorithms.
By making use of the methods and techniques used in both fields, we will derive plenty of meaningful results for both communities, proving that this interdisciplinary approach is
effective and relevant. We will apply the tools used in the theoretical analysis of evolutionary algorithms to quantify the complexity of adaptive walks on many landscapes, illustrating how the structure of the fitness landscape and the parameter conditions can impose limits to adaptation. Furthermore, as geneticists use diffusion theory to track the change in the allele frequencies of a population, we will develop a brand new model to analyse the dynamics of
evolutionary algorithms. Our model, based on stochastic differential equations, will allow us not only to describe the expected behaviour, but also to measure how much the process might deviate from that expectation.
Automatic software generation and improvement through search based techniques
Writing software is a difficult and expensive task. Its automation is hence very valuable. Search algorithms have been successfully used to tackle many software engineering problems. Unfortunately, for some problems the traditional techniques have been of only limited scope, and search algorithms have not been used yet. We hence propose a novel framework that is based on a co-evolution of programs and test cases to tackle these difficult problems. This framework can be used to tackle software engineering tasks such as Automatic Refinement, Fault Correction and Improving Non-functional Criteria. These tasks are very difficult, and their automation in the literature has been limited. To get a better understanding of how search algorithms work, a theoretical foundation is needed; that would help to gain better insight into search-based software engineering. We provide first theoretical analyses for search-based software testing, which is one of the main components of our co-evolutionary framework. This thesis makes the important contribution of presenting a novel framework, and we then study its application to three difficult software engineering problems. In this thesis we also make the important contribution of defining a first theoretical foundation.
EThOS - Electronic Theses Online Service, United Kingdom
Theoretical foundations of artificial immune systems
Artificial immune systems (AIS) are a special class of biologically inspired algorithms, which are based on the immune system of vertebrates. The field constitutes a relatively new and emerging area of research in Computational Intelligence that has achieved various promising results in different areas of application, e.g., learning, classification, anomaly detection, and (function) optimization. An increasing and often stated problem of the field is the lack of a theoretical basis for AIS as most work so far only concentrated on the direct application of immune principles.
In this thesis, we concentrate on optimization applications of AIS. It can easily be recognized that with respect to this application area, the work done previously mainly covers convergence analysis. To the best of our knowledge this thesis constitutes the first rigorous runtime analyses of immune-inspired operators and thus adds substantially to the demanded theoretical foundation of AIS. We consider two very common aspects of AIS. On the one hand, we provide a theoretical analysis for different hypermutation operators frequently employed in AIS. On the other hand, we examine a popular diversity mechanism named aging. We compare our findings with corresponding results from the analysis of other nature-inspired randomized search heuristics, in particular evolutionary algorithms. Moreover, we focus on the practical implications of our theoretical results in order to bridge the gap between theory and practice. Therefore, we derive guidelines for parameter settings and point out typical situations where certain concepts seem promising. These analyses contribute to the understanding of how AIS actually work and in which applications they outperform other randomized search heuristics.