
    On the Runtime of Randomized Local Search and Simple Evolutionary Algorithms for Dynamic Makespan Scheduling

    Evolutionary algorithms have been frequently used for dynamic optimization problems. With this paper, we contribute to the theoretical understanding of this research area. We present the first computational complexity analysis of evolutionary algorithms for a dynamic variant of a classical combinatorial optimization problem, namely makespan scheduling. We study the model of a strong adversary that is allowed to change one job at regular intervals. Furthermore, we investigate the setting of random changes. Our results show that randomized local search and a simple evolutionary algorithm are very effective in dynamically tracking changes made to the problem instance.
    Comment: Conference version appears at IJCAI 201
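    The algorithms analyzed here are standard randomized local search (RLS) and a simple (1+1) EA on a bit-string encoding of schedules. The sketch below is a minimal illustration of that setup, assuming two machines and hypothetical parameters (20 jobs, processing times in [1, 100], one job changed every 500 iterations); it is not the paper's exact model or analysis.

```python
import random

def makespan(assignment, jobs):
    """Makespan of a two-machine schedule: the load of the fuller machine."""
    load0 = sum(p for p, m in zip(jobs, assignment) if m == 0)
    load1 = sum(p for p, m in zip(jobs, assignment) if m == 1)
    return max(load0, load1)

def rls_step(assignment, jobs):
    """Randomized local search: move one random job to the other machine,
    keep the change if the makespan does not get worse."""
    candidate = assignment[:]
    i = random.randrange(len(jobs))
    candidate[i] = 1 - candidate[i]
    return candidate if makespan(candidate, jobs) <= makespan(assignment, jobs) else assignment

def ea_step(assignment, jobs):
    """(1+1) EA: flip each job's machine independently with probability 1/n,
    keep the offspring if the makespan does not get worse."""
    n = len(jobs)
    candidate = [1 - m if random.random() < 1.0 / n else m for m in assignment]
    return candidate if makespan(candidate, jobs) <= makespan(assignment, jobs) else assignment

# Dynamic setting (illustrative): one job's processing time changes at regular
# intervals while the search keeps running from its current solution.
jobs = [random.randint(1, 100) for _ in range(20)]
x = [random.randint(0, 1) for _ in jobs]
for t in range(1, 10_001):
    if t % 500 == 0:
        jobs[random.randrange(len(jobs))] = random.randint(1, 100)
    x = rls_step(x, jobs)      # or ea_step(x, jobs)
print("final makespan:", makespan(x, jobs))
```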

    Runtime Analysis for Self-adaptive Mutation Rates

    We propose and analyze a self-adaptive version of the $(1,\lambda)$ evolutionary algorithm in which the current mutation rate is part of the individual and thus also subject to mutation. A rigorous runtime analysis on the OneMax benchmark function reveals that a simple local mutation scheme for the rate leads to an expected optimization time (number of fitness evaluations) of $O(n\lambda/\log\lambda + n\log n)$ when $\lambda$ is at least $C\ln n$ for some constant $C > 0$. For all values of $\lambda \ge C\ln n$, this performance is asymptotically best possible among all $\lambda$-parallel mutation-based unbiased black-box algorithms. Our result shows that self-adaptation in evolutionary computation can find complex optimal parameter settings on the fly. At the same time, it proves that a relatively complicated self-adjusting scheme for the mutation rate proposed by Doerr, Gießen, Witt, and Yang (GECCO 2017) can be replaced by our simple endogenous scheme. On the technical side, the paper contributes new tools for the analysis of two-dimensional drift processes arising in the analysis of dynamic parameter choices in EAs, including bounds on occupation probabilities in processes with non-constant drift.
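    A minimal sketch of the endogenous scheme described above, assuming details the abstract does not fix (initial rate 2, rate clamped to an illustrative range [2, n/4], ties among offspring broken arbitrarily): each offspring first halves or doubles the inherited rate, then flips each bit with probability rate/n, and the best offspring together with its rate becomes the next parent.

```python
import random

def onemax(x):
    """OneMax fitness: the number of one-bits."""
    return sum(x)

def self_adaptive_one_comma_lambda(n, lam, max_evals=500_000):
    """Sketch of a (1,lambda) EA whose mutation rate is part of the individual:
    every offspring locally mutates the inherited rate (halve or double it),
    then flips each bit with probability rate/n; comma selection keeps the
    best offspring along with the rate that produced it."""
    parent = [random.randint(0, 1) for _ in range(n)]
    rate = 2.0                       # illustrative initial value; rate/n is the bit-flip probability
    evals = 0
    while onemax(parent) < n and evals < max_evals:
        best, best_fit, best_rate = None, -1, rate
        for _ in range(lam):
            # local mutation of the rate itself, clamped to an illustrative range
            r = rate * (0.5 if random.random() < 0.5 else 2.0)
            r = max(2.0, min(n / 4.0, r))
            child = [1 - b if random.random() < r / n else b for b in parent]
            evals += 1
            fit = onemax(child)
            if fit > best_fit:
                best, best_fit, best_rate = child, fit, r
        parent, rate = best, best_rate   # comma selection: the best offspring replaces the parent
    return evals

# Example run (hypothetical sizes): n = 200, lambda on the order of ln(n).
# print(self_adaptive_one_comma_lambda(n=200, lam=12))
```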

    A simple model of unbounded evolutionary versatility as a largest-scale trend in organismal evolution

    The idea that there are any large-scale trends in the evolution of biological organisms is highly controversial. It is commonly believed, for example, that there is a large-scale trend in evolution towards increasing complexity, but empirical and theoretical arguments undermine this belief. Natural selection results in organisms that are well adapted to their local environments, but it is not clear how local adaptation can produce a global trend. In this paper, I present a simple computational model in which local adaptation to a randomly changing environment results in a global trend towards increasing evolutionary versatility. In this model, for evolutionary versatility to increase without bound, the environment must be highly dynamic. The model also shows that unbounded evolutionary versatility implies an accelerating evolutionary pace. I believe that an unbounded increase in evolutionary versatility is a large-scale trend in evolution. I discuss some of the testable predictions about organismal evolution that are suggested by the model.

    The Right Mutation Strength for Multi-Valued Decision Variables

    Bit strings are the most common representation in evolutionary computation. They are ideal for modeling binary decision variables, but less useful for variables taking more values. With very little theoretical work existing on how to use evolutionary algorithms for such optimization problems, we study the runtime of simple evolutionary algorithms on some OneMax-like functions defined over $\Omega = \{0, 1, \dots, r-1\}^n$. More precisely, we consider a variety of problem classes requesting the component-wise minimization of the distance to an unknown target vector $z \in \Omega$. For such problems, we see a crucial difference in how the standard-bit mutation operator is extended to these multi-valued domains. While it is natural to select each position of the solution vector to be changed independently with probability $1/n$, there are various ways to then change such a position. If we change each selected position to a random value different from the original one, we obtain an expected runtime of $\Theta(nr\log n)$. If we change each selected position by either $+1$ or $-1$ (random choice), the optimization time reduces to $\Theta(nr + n\log n)$. If we use a random mutation strength $i \in \{0,1,\ldots,r-1\}^n$ with probability inversely proportional to $i$ and change the selected position by either $+i$ or $-i$ (random choice), then the optimization time becomes $\Theta(n\log(r)(\log(n)+\log(r)))$, bringing down the dependence on $r$ from linear to polylogarithmic. One of our results depends on a new variant of the lower bounding multiplicative drift theorem.
    Comment: an extended abstract of this work is to appear at GECCO 201
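    The three mutation variants compared above differ only in how a selected position is altered. A minimal sketch, assuming values are clamped to {0, ..., r-1} at the boundary (the abstract does not specify boundary handling) and reading the "inversely proportional" strength distribution as harmonic over {1, ..., r-1}, since 1/i is undefined at i = 0:

```python
import random

def mutate_uniform(x, r):
    """Variant 1: set each selected position to a uniformly random different value."""
    n = len(x)
    return [random.choice([v for v in range(r) if v != xi])
            if random.random() < 1.0 / n else xi
            for xi in x]

def mutate_unit_step(x, r):
    """Variant 2: shift each selected position by +1 or -1 (random choice)."""
    n = len(x)
    return [min(r - 1, max(0, xi + random.choice([-1, 1])))
            if random.random() < 1.0 / n else xi
            for xi in x]

def mutate_harmonic(x, r):
    """Variant 3: shift each selected position by +-i, with the strength i drawn
    with probability proportional to 1/i (harmonic distribution over 1..r-1)."""
    n = len(x)
    strengths = list(range(1, r))
    weights = [1.0 / i for i in strengths]
    y = list(x)
    for j in range(n):
        if random.random() < 1.0 / n:
            i = random.choices(strengths, weights=weights)[0]
            y[j] = min(r - 1, max(0, y[j] + random.choice([-i, i])))
    return y

# Example (hypothetical sizes): r = 16 values per component, n = 10 components.
# x = [random.randrange(16) for _ in range(10)]
# print(mutate_harmonic(x, 16))
```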
