11 research outputs found

    On the Runtime of Randomized Local Search and Simple Evolutionary Algorithms for Dynamic Makespan Scheduling

    Evolutionary algorithms have been frequently used for dynamic optimization problems. With this paper, we contribute to the theoretical understanding of this research area. We present the first computational complexity analysis of evolutionary algorithms for a dynamic variant of a classical combinatorial optimization problem, namely makespan scheduling. We study the model of a strong adversary which is allowed to change one job at regular intervals. Furthermore, we investigate the setting of random changes. Our results show that randomized local search and a simple evolutionary algorithm are very effective in dynamically tracking changes made to the problem instance. Comment: Conference version appears at IJCAI 201
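
As a concrete illustration of the setting, here is a minimal sketch of randomized local search on the two-machine makespan problem with one job changing at regular intervals. The representation (one bit per job selecting its machine) and all parameter values are our own illustrative assumptions, not taken from the paper:

```python
import random

def makespan(x, p):
    """Makespan on two machines: the load of the fuller machine."""
    load1 = sum(pi for xi, pi in zip(x, p) if xi == 1)
    return max(load1, sum(p) - load1)

def rls_step(x, p):
    """One step of randomized local search: flip one uniformly random
    bit (move one job to the other machine), accept if the makespan
    does not get worse."""
    y = x[:]
    i = random.randrange(len(y))
    y[i] = 1 - y[i]
    return y if makespan(y, p) <= makespan(x, p) else x

# Toy run with a periodic change to one job (the "dynamic" part).
random.seed(0)
p = [random.randint(1, 10) for _ in range(8)]   # processing times
x = [random.randint(0, 1) for _ in range(8)]    # machine assignment
for t in range(200):
    if t % 50 == 49:                            # adversary-style change
        p[random.randrange(len(p))] = random.randint(1, 10)
    x = rls_step(x, p)
print(makespan(x, p))
```

The (1+1)-EA variant analysed alongside RLS would differ only in the mutation step, flipping each bit independently instead of exactly one.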

    When Does Hillclimbing Fail on Monotone Functions: An entropy compression argument

    Hillclimbing is an essential part of any optimization algorithm. An important benchmark for hillclimbing algorithms on pseudo-Boolean functions $f: \{0,1\}^n \to \mathbb{R}$ is the class of (strictly) monotone functions, on which a surprising number of hillclimbers fail to be efficient. For example, the $(1+1)$-Evolutionary Algorithm is a standard hillclimber which flips each bit independently with probability $c/n$ in each round. Perhaps surprisingly, this algorithm shows a phase transition: it optimizes any monotone pseudo-Boolean function in quasilinear time if $c<1$, but there are monotone functions for which the algorithm needs exponential time if $c>2.2$. So far, however, it was unclear whether the threshold is at $c=1$. In this paper we show how Moser's entropy compression argument can be adapted to this situation; that is, we show that a long runtime would allow us to encode the random steps of the algorithm with fewer bits than their entropy. Thus there exists a $c_0 > 1$ such that for all $0<c\le c_0$ the $(1+1)$-Evolutionary Algorithm with rate $c/n$ finds the optimum in $O(n \log^2 n)$ steps in expectation. Comment: 14 pages, no figure
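
For reference, the $(1+1)$-Evolutionary Algorithm with mutation rate $c/n$ can be sketched as follows. This is a minimal illustrative version; OneMax (counting the one-bits) stands in here for a generic monotone function:

```python
import random

def one_plus_one_ea(f, n, c=1.0, max_iters=100_000, rng=random):
    """(1+1)-EA: flip each bit independently with probability c/n,
    keep the offspring if its fitness is at least as good.
    Returns the number of steps to reach the all-ones optimum
    (the optimum of any strictly monotone function), or None."""
    x = [rng.randint(0, 1) for _ in range(n)]
    for t in range(max_iters):
        y = [b ^ (rng.random() < c / n) for b in x]  # standard bit mutation
        if f(y) >= f(x):
            x = y
        if sum(x) == n:
            return t + 1
    return None

random.seed(1)
print(one_plus_one_ea(sum, 50, c=0.9))  # sum == OneMax fitness
```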

    Hardest Monotone Functions for Evolutionary Algorithms

    The study of hardest and easiest fitness landscapes is an active area of research. Recently, Kaufmann, Larcher, Lengler and Zou conjectured that for the self-adjusting $(1,\lambda)$-EA, Adversarial Dynamic BinVal (ADBV) is the hardest dynamic monotone function to optimize. We introduce the function Switching Dynamic BinVal (SDBV), which coincides with ADBV whenever the number of remaining zeros in the search point is strictly less than $n/2$, where $n$ denotes the dimension of the search space. We show, using a combinatorial argument, that for the $(1+1)$-EA with any mutation rate $p \in [0,1]$, SDBV is drift-minimizing among the class of dynamic monotone functions. Our construction provides the first explicit example of an instance of the partially-ordered evolutionary algorithm (PO-EA) model with parameterized pessimism introduced by Colin, Doerr and Férey, building on work of Jansen. We further show that the $(1+1)$-EA optimizes SDBV in $\Theta(n^{3/2})$ generations. Our simulations demonstrate matching runtimes for both the static and self-adjusting $(1,\lambda)$- and $(1+\lambda)$-EA. We further show, using an example of fixed dimension, that drift-minimization does not equal maximal runtime.
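
For intuition, the Dynamic BinVal family re-weights the bits in every generation. Below is a minimal sketch of the plain, uniformly random variant (ADBV and SDBV instead choose the weight ordering adversarially, which is not modelled here):

```python
import random

def dynamic_binval(x, rng=random):
    """Dynamic BinVal: binary-value fitness under a fresh random
    ordering of the bit weights on every call (i.e. every generation).
    More one-bits can never hurt, so the function is dynamic monotone."""
    perm = list(range(len(x)))
    rng.shuffle(perm)
    return sum(2 ** perm[i] for i, bit in enumerate(x) if bit)

rng = random.Random(4)
print(dynamic_binval([1, 0, 1, 1, 0], rng))
```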

    OneMax is not the Easiest Function for Fitness Improvements

    We study the $(1:s+1)$ success rule for controlling the population size of the $(1,\lambda)$-EA. It was shown by Hevia Fajardo and Sudholt that this parameter control mechanism can run into problems for large $s$ if the fitness landscape is too easy. They conjectured that this problem is worst for the OneMax benchmark, since in some well-established sense OneMax is known to be the easiest fitness landscape. In this paper we disprove this conjecture and show that OneMax is not the easiest fitness landscape with respect to finding improving steps. As a consequence, we show that there exist $s$ and $\varepsilon$ such that the self-adjusting $(1,\lambda)$-EA with the $(1:s+1)$-rule optimizes OneMax efficiently when started with $\varepsilon n$ zero-bits, but does not find the optimum in polynomial time on Dynamic BinVal. Hence, we show that there are landscapes where the problem of the $(1:s+1)$-rule for controlling the population size of the $(1,\lambda)$-EA is more severe than for OneMax.
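
The $(1:s+1)$-rule itself is simple to state: shrink the population size after a successful generation, grow it after an unsuccessful one, balanced so that a success rate of $1/(s+1)$ keeps $\lambda$ roughly constant. A minimal sketch (the update factor $F$ and the lower bound are illustrative assumptions, not values from the paper):

```python
def update_lambda(lmbda, success, s=1.0, F=1.5, lambda_min=1.0):
    """(1:s+1)-success rule for the offspring population size:
    divide lambda by F on success, multiply by F**(1/s) on failure,
    keeping lambda at least lambda_min."""
    if success:
        return max(lambda_min, lmbda / F)
    return lmbda * F ** (1.0 / s)
```

With large $s$, failures barely grow $\lambda$ relative to how sharply successes shrink it, which is what causes trouble on easy landscapes.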

    Self-adjusting Population Sizes for the $(1,\lambda)$-EA on Monotone Functions

    We study the $(1,\lambda)$-EA with mutation rate $c/n$ for $c\le 1$, where the population size is adaptively controlled with the $(1:s+1)$-success rule. Recently, Hevia Fajardo and Sudholt have shown that this setup with $c=1$ is efficient on OneMax for $s<1$, but inefficient if $s \ge 18$. Surprisingly, the hardest part is not close to the optimum, but rather at linear distance. We show that this behavior is not specific to OneMax. If $s$ is small, then the algorithm is efficient on all monotone functions, and if $s$ is large, then it needs superpolynomial time on all monotone functions. In the former case, for $c<1$ we show an $O(n)$ upper bound on the number of generations and $O(n\log n)$ on the number of function evaluations, and for $c=1$ we show $O(n\log n)$ generations and $O(n^2\log\log n)$ evaluations. We also show formally that optimization is always fast, regardless of $s$, if the algorithm starts in proximity of the optimum. All results also hold in a dynamic environment where the fitness function changes in each generation.
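
Putting the pieces together, a toy version of this self-adjusting $(1,\lambda)$-EA on OneMax might look as follows. The update factor, the seed, and the rounding of $\lambda$ are illustrative assumptions; only the overall scheme (comma selection plus the $(1:s+1)$-rule) follows the setup described above:

```python
import random

def self_adjusting_one_comma_lambda(n, s=0.5, c=1.0, F=1.5,
                                    max_gens=100_000):
    """(1,lambda)-EA on OneMax with the (1:s+1)-rule controlling lambda.
    Comma selection: the best of the lambda offspring always replaces
    the parent, even if it is worse. Returns generations to optimum."""
    rng = random.Random(2)
    x = [rng.randint(0, 1) for _ in range(n)]
    lam = 1.0
    for g in range(max_gens):
        offspring = [[b ^ (rng.random() < c / n) for b in x]
                     for _ in range(max(1, round(lam)))]
        best = max(offspring, key=sum)          # sum == OneMax fitness
        success = sum(best) > sum(x)
        x = best                                # comma selection
        lam = max(1.0, lam / F) if success else lam * F ** (1.0 / s)
        if sum(x) == n:
            return g + 1
    return None

print(self_adjusting_one_comma_lambda(30))
```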

    A Computational View on Natural Evolution: On the Rigorous Analysis of the Speed of Adaptation

    Inspired by Darwin’s ideas, Turing (1948) proposed evolutionary search as an automated problem-solving approach. Mimicking natural evolution, evolutionary algorithms evolve a set of solutions through the repeated application of the evolutionary operators (mutation, recombination and selection). Evolutionary algorithms belong to the family of black-box algorithms, which are general-purpose optimisation tools. They are typically used when no good specific algorithm is known for the problem at hand, and they have been reported to be surprisingly effective (Eiben and Smith, 2015; Sarker et al., 2002). Interestingly, although evolutionary algorithms are heavily inspired by natural evolution, their study has deviated from the study of evolution by the population genetics community. We believe that this is a missed opportunity and that both fields can benefit from an interdisciplinary collaboration. The question of how long it takes for a natural population to evolve complex adaptations has fascinated researchers for decades. We will argue that this is a research question equivalent to the runtime analysis of algorithms. By making use of the methods and techniques of both fields, we will derive many meaningful results for both communities, showing that this interdisciplinary approach is effective and relevant. We will apply the tools used in the theoretical analysis of evolutionary algorithms to quantify the complexity of adaptive walks on many landscapes, illustrating how the structure of the fitness landscape and the parameter conditions can impose limits on adaptation. Furthermore, as geneticists use diffusion theory to track the change in the allele frequencies of a population, we will develop a new model to analyse the dynamics of evolutionary algorithms. Our model, based on stochastic differential equations, will allow us to describe not only the expected behaviour, but also to measure how much the process might deviate from that expectation.
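
As a generic illustration of the SDE machinery mentioned above (this is a standard Euler-Maruyama discretisation of dX = b(X)dt + sigma(X)dW, not the thesis's actual model):

```python
import math
import random

def euler_maruyama(b, sigma, x0, dt=0.01, steps=1000, rng=random):
    """Euler-Maruyama discretisation of dX = b(X)dt + sigma(X)dW:
    the drift b captures the expected behaviour, the diffusion sigma
    the random fluctuation around that expectation."""
    x = x0
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment
        x = x + b(x) * dt + sigma(x) * dw
    return x

# Ornstein-Uhlenbeck toy: pulled toward 0, with constant noise.
random.seed(3)
print(euler_maruyama(lambda x: -x, lambda x: 0.2, x0=1.0))
```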

    Automatic software generation and improvement through search based techniques

    Writing software is a difficult and expensive task, so its automation is very valuable. Search algorithms have been successfully used to tackle many software engineering problems. Unfortunately, for some problems the traditional techniques have been of only limited scope, and search algorithms have not been used yet. We hence propose a novel framework, based on a co-evolution of programs and test cases, to tackle these difficult problems. This framework can be used to tackle software engineering tasks such as Automatic Refinement, Fault Correction and Improving Non-functional Criteria. These tasks are very difficult, and their automation in the literature has been limited. To get a better understanding of how search algorithms work, a theoretical foundation is needed; it would help to gain better insight into search-based software engineering. We provide first theoretical analyses for search-based software testing, which is one of the main components of our co-evolutionary framework. This thesis makes the important contribution of presenting a novel framework, whose application to three difficult software engineering problems we then study. It also makes the important contribution of defining a first theoretical foundation.

    Theoretical foundations of artificial immune systems

    Artificial immune systems (AIS) are a special class of biologically inspired algorithms, which are based on the immune system of vertebrates. The field constitutes a relatively new and emerging area of research in Computational Intelligence that has achieved various promising results in different areas of application, e.g., learning, classification, anomaly detection, and (function) optimization. An increasing and often stated problem of the field is the lack of a theoretical basis for AIS, as most work so far has concentrated only on the direct application of immune principles. In this thesis, we concentrate on optimization applications of AIS. It can easily be recognized that, with respect to this application area, the previous work mainly covers convergence analysis. To the best of our knowledge this thesis constitutes the first rigorous runtime analyses of immune-inspired operators and thus adds substantially to the demanded theoretical foundation of AIS. We consider two very common aspects of AIS. On the one hand, we provide a theoretical analysis for different hypermutation operators frequently employed in AIS. On the other hand, we examine a popular diversity mechanism named aging. We compare our findings with corresponding results from the analysis of other nature-inspired randomized search heuristics, in particular evolutionary algorithms. Moreover, we focus on the practical implications of our theoretical results in order to bridge the gap between theory and practice. Therefore, we derive guidelines for parameter settings and point out typical situations where certain concepts seem promising. These analyses contribute to the understanding of how AIS actually work and in which applications they excel over other randomized search heuristics.
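
One hypermutation variant commonly analysed in the AIS theory literature is contiguous somatic hypermutation, which flips a whole random block of bits rather than independent positions. A minimal sketch; the wrap-around behaviour and the length distribution are illustrative assumptions:

```python
import random

def contiguous_hypermutation(x, rng=random):
    """Contiguous somatic hypermutation (B-cell style AIS operator):
    choose a random start position and a random block length, and flip
    every bit in that contiguous block, wrapping around the end."""
    n = len(x)
    y = x[:]
    start = rng.randrange(n)
    length = rng.randint(1, n)
    for k in range(length):
        i = (start + k) % n
        y[i] = 1 - y[i]
    return y
```

Compared with standard bit mutation, this operator makes large correlated jumps, which is one reason its runtime behaviour differs from that of evolutionary algorithms.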