
    Runtime Analysis for Self-adaptive Mutation Rates

    We propose and analyze a self-adaptive version of the $(1,\lambda)$ evolutionary algorithm in which the current mutation rate is part of the individual and thus also subject to mutation. A rigorous runtime analysis on the OneMax benchmark function reveals that a simple local mutation scheme for the rate leads to an expected optimization time (number of fitness evaluations) of $O(n\lambda/\log\lambda + n\log n)$ when $\lambda$ is at least $C\ln n$ for some constant $C > 0$. For all values of $\lambda \ge C\ln n$, this performance is asymptotically best possible among all $\lambda$-parallel mutation-based unbiased black-box algorithms. Our result shows that self-adaptation in evolutionary computation can find complex optimal parameter settings on the fly. At the same time, it proves that a relatively complicated self-adjusting scheme for the mutation rate proposed by Doerr, Gießen, Witt, and Yang (GECCO 2017) can be replaced by our simple endogenous scheme. On the technical side, the paper contributes new tools for the analysis of two-dimensional drift processes arising in the analysis of dynamic parameter choices in EAs, including bounds on occupation probabilities in processes with non-constant drift.
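
    To make the mechanism concrete, here is a minimal Python sketch of a self-adaptive $(1,\lambda)$ EA on OneMax in the spirit of this abstract: each offspring first perturbs the inherited mutation rate (here: halving or doubling it with equal probability), then mutates the bitstring at the new rate, and the best offspring, rate included, replaces the parent. The initial rate, the halve/double factor, and the clamping bounds are illustrative assumptions, not the exact scheme analysed in the paper.

        import random

        def one_max(x):
            return sum(x)

        def self_adaptive_ea(n, lam, max_evals=10**6):
            """(1,lambda) EA whose mutation rate is encoded in the individual."""
            x = [random.randrange(2) for _ in range(n)]
            rate = 2.0 / n                              # initial rate (illustrative)
            evals = 0
            while one_max(x) < n and evals < max_evals:
                best_y, best_r, best_f = None, None, -1
                for _ in range(lam):
                    # local mutation of the rate: halve or double with prob. 1/2
                    r = rate * 2.0 if random.random() < 0.5 else rate / 2.0
                    r = min(max(r, 1.0 / n), 0.25)      # clamp (illustrative bounds)
                    y = [1 - b if random.random() < r else b for b in x]
                    f = one_max(y)
                    evals += 1
                    if f > best_f:
                        best_y, best_r, best_f = y, r, f
                x, rate = best_y, best_r                # comma selection: no elitism
            return evals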

    Self-adaptation of mutation rates in non-elitist populations

    The runtime of evolutionary algorithms (EAs) depends critically on their parameter settings, which are often problem-specific. Automated schemes for parameter tuning have been developed to alleviate the high costs of manual parameter tuning. Experimental results indicate that self-adaptation, where parameter settings are encoded in the genomes of individuals, can be effective in continuous optimisation. However, results in discrete optimisation have been less conclusive. Furthermore, a rigorous runtime analysis that explains how self-adaptation can lead to asymptotic speedups has been missing. This paper provides the first such analysis for discrete, population-based EAs. We apply level-based analysis to show how a self-adaptive EA is capable of fine-tuning its mutation rate, leading to exponential speedups over EAs using fixed mutation rates.
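
    Structurally, a population-based self-adaptive EA differs from the single-parent sketch above mainly in maintaining several parents and selecting non-elitistically among the offspring. The sketch below is a generic, hedged variant: each individual carries its own rate, offspring perturb the inherited rate multiplicatively before mutating the bits, and survival is by truncation (comma) selection. The perturbation factor A, the rate bounds, and the population sizes are illustrative, not the parameterisation analysed in the paper.

        import random

        def one_max(x):
            return sum(x)

        def self_adaptive_population_ea(f, n, mu=20, lam=100, A=1.2, gens=10**5):
            """Non-elitist EA in which each individual carries its own mutation rate."""
            pop = [([random.randrange(2) for _ in range(n)], 2.0 / n)
                   for _ in range(mu)]
            for _ in range(gens):
                offspring = []
                for _ in range(lam):
                    x, r = random.choice(pop)           # uniform parent selection
                    # self-adaptation: perturb the inherited rate before using it
                    r = r * A if random.random() < 0.5 else r / A
                    r = min(max(r, 1.0 / n), 0.5)       # clamp (illustrative bounds)
                    y = [1 - b if random.random() < r else b for b in x]
                    offspring.append((y, r))
                # comma selection: all parents die, the best mu offspring survive
                pop = sorted(offspring, key=lambda ind: f(ind[0]), reverse=True)[:mu]
                if f(pop[0][0]) == n:                   # assumes the optimum value is n
                    break
            return pop[0]

        # e.g.: best_x, best_rate = self_adaptive_population_ea(one_max, 100)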

    Optimal Parameter Choices Through Self-Adjustment: Applying the 1/5-th Rule in Discrete Settings

    While evolutionary algorithms are known to be very successful for a broad range of applications, the algorithm designer is often left with many algorithmic choices, for example, the size of the population, the mutation rates, and the crossover rates of the algorithm. These parameters are known to have a crucial influence on the optimization time, and thus need to be chosen carefully, a task that often requires substantial efforts. Moreover, the optimal parameters can change during the optimization process. It is therefore of great interest to design mechanisms that dynamically choose best-possible parameters. An example of such an update mechanism is the one-fifth success rule for step-size adaptation in evolution strategies. While in continuous domains this principle is well understood also from a mathematical point of view, no comparable theory is available for problems in discrete domains. In this work we show that the one-fifth success rule can be effective also in discrete settings. We regard the $(1+(\lambda,\lambda))$ GA proposed in [Doerr/Doerr/Ebel: From black-box complexity to designing new genetic algorithms, TCS 2015]. We prove that if its population size is chosen according to the one-fifth success rule then the expected optimization time on OneMax is linear. This is better than what any static population size $\lambda$ can achieve and is asymptotically optimal also among all adaptive parameter choices. Comment: This is the full version of a paper that is to appear at GECCO 2015.
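
    For illustration, the following Python sketch shows the $(1+(\lambda,\lambda))$ GA on OneMax with its population size $\lambda$ driven by a one-fifth success rule: $\lambda$ is divided by an update factor F after an improving iteration and multiplied by F^{1/4} otherwise, so that $\lambda$ stays put on average exactly at a success rate of one fifth. The concrete value F = 1.5 and the cap $\lambda \le n$ are illustrative choices, not prescribed by the abstract.

        import random

        def one_max(x):
            return sum(x)

        def one_plus_ll_ga(n, F=1.5, max_evals=10**6):
            """(1+(lambda,lambda)) GA on OneMax, lambda driven by a 1/5-th rule."""
            x = [random.randrange(2) for _ in range(n)]
            fx, lam, evals = one_max(x), 1.0, 1
            while fx < n and evals < max_evals:
                L = max(1, round(lam))
                ell = sum(random.random() < L / n for _ in range(n))  # ~ Bin(n, L/n)
                # mutation phase: L offspring, each flipping exactly ell random bits
                xp, fxp = None, -1
                for _ in range(L):
                    y = x[:]
                    for i in random.sample(range(n), ell):
                        y[i] = 1 - y[i]
                    fy = one_max(y); evals += 1
                    if fy > fxp:
                        xp, fxp = y, fy
                # crossover phase: biased crossover of x and the best mutant xp
                yb, fyb = xp, fxp
                for _ in range(L):
                    y = [xp[i] if random.random() < 1.0 / L else x[i] for i in range(n)]
                    fy = one_max(y); evals += 1
                    if fy > fyb:
                        yb, fyb = y, fy
                # one-fifth success rule: shrink lambda on success, grow it slowly otherwise
                lam = max(1.0, lam / F) if fyb > fx else min(float(n), lam * F ** 0.25)
                if fyb >= fx:
                    x, fx = yb, fyb
            return evals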

    Self-adaptation in non-elitist evolutionary algorithms on discrete problems with unknown structure

    A key challenge in making effective use of evolutionary algorithms is to choose appropriate settings for their parameters. However, the appropriate parameter setting generally depends on the structure of the optimisation problem, which is often unknown to the user. Non-deterministic parameter control mechanisms adjust parameters using information obtained from the evolutionary process. Self-adaptation, where parameter settings are encoded in the chromosomes of individuals and evolve through mutation and crossover, is a popular parameter control mechanism in evolution strategies. However, there is little theoretical evidence that self-adaptation is effective, and self-adaptation has largely been ignored by the discrete evolutionary computation community. Here we show through a theoretical runtime analysis that a non-elitist, discrete evolutionary algorithm which self-adapts its mutation rate not only outperforms EAs which use static mutation rates on LeadingOnes, but also improves asymptotically on an EA using a state-of-the-art control mechanism. The structure of this problem depends on a parameter $k$, which is a priori unknown to the algorithm, and which is needed to appropriately set a fixed mutation rate. The self-adaptive EA achieves the same asymptotic runtime as if this parameter was known to the algorithm beforehand, which is an asymptotic speedup for this problem compared to all other EAs previously studied. An experimental study of how the mutation rates evolve shows that they respond adequately to a diverse range of problem structures. These results suggest that self-adaptation should be adopted more broadly as a parameter control mechanism in discrete, non-elitist evolutionary algorithms. Comment: To appear in IEEE Transactions on Evolutionary Computation.

    Towards a Theory-Guided Benchmarking Suite for Discrete Black-Box Optimization Heuristics: Profiling $(1+\lambda)$ EA Variants on OneMax and LeadingOnes

    Theoretical and empirical research on evolutionary computation methods complement each other by providing two fundamentally different approaches towards a better understanding of black-box optimization heuristics. In discrete optimization, both streams developed rather independently of each other, but we observe today an increasing interest in reconciling these two sub-branches. In continuous optimization, the COCO (COmparing Continuous Optimisers) benchmarking suite has established itself as an important platform that theoreticians and practitioners use to exchange research ideas and questions. No widely accepted equivalent exists in the research domain of discrete black-box optimization. Marking an important step towards filling this gap, we adjust the COCO software to pseudo-Boolean optimization problems, and obtain from this a benchmarking environment that allows a fine-grained empirical analysis of discrete black-box heuristics. In this documentation we demonstrate how this test bed can be used to profile the performance of evolutionary algorithms. More concretely, we study the optimization behavior of several $(1+\lambda)$ EA variants on the two benchmark problems OneMax and LeadingOnes. This comparison motivates a refined analysis for the optimization time of the $(1+\lambda)$ EA on LeadingOnes.
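
    A toy version of such a profiling experiment can be written in a few lines; the sketch below (plain Python, not the adjusted COCO tooling the paper describes) defines the two benchmark problems, runs a standard-bit-mutation $(1+\lambda)$ EA for several values of $\lambda$, and records the number of fitness evaluations until the optimum is found. The mutation strength chi and the run counts are illustrative.

        import random

        def one_max(x):
            return sum(x)

        def leading_ones(x):
            count = 0
            for b in x:
                if not b:
                    break
                count += 1
            return count

        def one_plus_lambda_ea(f, n, lam, chi=1.0, max_evals=10**7):
            """(1+lambda) EA with standard bit mutation at rate chi/n."""
            x = [random.randrange(2) for _ in range(n)]
            fx, evals = f(x), 1
            while fx < n and evals < max_evals:
                best, fbest = x, fx
                for _ in range(lam):
                    y = [1 - b if random.random() < chi / n else b for b in x]
                    fy = f(y); evals += 1
                    if fy >= fbest:
                        best, fbest = y, fy
                x, fx = best, fbest     # elitism: the parent survives unless beaten
            return evals

        # profile a few variants on both benchmarks
        for f in (one_max, leading_ones):
            for lam in (1, 2, 8, 32):
                runs = [one_plus_lambda_ea(f, 100, lam) for _ in range(10)]
                print(f.__name__, "lambda =", lam, "mean evals:", sum(runs) / len(runs))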

    Runtime Analysis of the $(1+(\lambda,\lambda))$ Genetic Algorithm on Random Satisfiable 3-CNF Formulas

    The $(1+(\lambda,\lambda))$ genetic algorithm, first proposed at GECCO 2013, showed a surprisingly good performance on some optimization problems. The theoretical analysis so far was restricted to the OneMax test function, where this GA profited from the perfect fitness-distance correlation. In this work, we conduct a rigorous runtime analysis of this GA on random 3-SAT instances in the planted solution model having at least logarithmic average degree, which are known to have a weaker fitness-distance correlation. We prove that this GA with a fixed, not too large population size again obtains runtimes better than $\Theta(n\log n)$, which is a lower bound for most evolutionary algorithms on pseudo-Boolean problems with a unique optimum. However, the self-adjusting version of the GA risks reaching population sizes at which the intermediate selection of the GA, due to the weaker fitness-distance correlation, is not able to distinguish a profitable offspring from others. We show that this problem can be overcome by equipping the self-adjusting GA with an upper limit for the population size. Apart from sparse instances, this limit can be chosen in a way that the asymptotic performance does not worsen compared to the idealistic OneMax case. Overall, this work shows that the $(1+(\lambda,\lambda))$ GA can provably have a good performance on combinatorial search and optimization problems also in the presence of a weaker fitness-distance correlation. Comment: An extended abstract of this report will appear in the proceedings of the 2017 Genetic and Evolutionary Computation Conference (GECCO 2017).
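
    The experimental setting can be reproduced in spirit as follows: the sketch below draws a random 3-CNF formula in the planted solution model by rejection sampling (only clauses satisfied by a hidden planted assignment are kept) and uses the number of satisfied clauses as fitness. On top of a self-adjusting GA such as the one sketched earlier, the paper's fix then amounts to a single cap on the population size, e.g. lam = min(lam * F ** 0.25, lam_max). The literal encoding and the clause density are illustrative assumptions.

        import random

        def planted_3cnf(n, m, planted):
            """m random 3-clauses, each satisfied by the planted assignment.
            A literal is a pair (variable index, negated?)."""
            clauses = []
            while len(clauses) < m:
                variables = random.sample(range(n), 3)
                clause = [(v, random.random() < 0.5) for v in variables]
                if any((planted[v] == 1) != neg for v, neg in clause):
                    clauses.append(clause)      # rejection sampling: keep if satisfied
            return clauses

        def sat_fitness(x, clauses):
            """Number of clauses satisfied by assignment x (MAX-3SAT fitness)."""
            return sum(any((x[v] == 1) != neg for v, neg in clause)
                       for clause in clauses)

        n = 200
        planted = [random.randrange(2) for _ in range(n)]
        clauses = planted_3cnf(n, m=30 * n, planted=planted)
        print(sat_fitness(planted, clauses))    # = 30 * n: the planted optimum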

    An Evolutionary Algorithm for the Estimation of Threshold Vector Error Correction Models

    We develop an evolutionary algorithm to estimate Threshold Vector Error Correction models (TVECM) with more than two cointegrated variables. Since disregarding a threshold in cointegration models renders standard approaches to the estimation of the cointegration vectors inefficient, a TVECM necessitates a simultaneous estimation of the cointegration vector(s) and the threshold. When only two cointegrated variables are considered, this is commonly achieved by a grid search. However, a grid search quickly becomes computationally infeasible if more than two variables are cointegrated. Therefore, the likelihood function has to be maximized using heuristic approaches. Depending on the precise problem structure, the evolutionary approach developed in the present paper for this purpose saves 90 to 99 per cent of the computation time of a grid search. Keywords: evolutionary strategy, genetic algorithm, TVECM
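
    The optimization core of such an approach can be sketched independently of the econometric details. The fragment below is a minimal (mu+lambda)-style evolution strategy over a real-valued parameter vector, standing in for the stacked cointegration vector(s) and threshold(s); it assumes a hypothetical user-supplied function loglik that concentrates out the remaining TVECM parameters and returns the profile log-likelihood, i.e. exactly the quantity a grid search would tabulate. All names, population sizes, and the step-size schedule are illustrative, not the operators developed in the paper.

        import random

        def evolve(loglik, dim, mu=20, lam=60, sigma0=0.5, gens=200):
            """Minimal (mu+lambda) evolution strategy maximizing loglik."""
            parents = [[random.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(mu)]
            scored = sorted(((loglik(p), p) for p in parents), reverse=True)
            sigma = sigma0
            for _ in range(gens):
                offspring = []
                for _ in range(lam):
                    base = random.choice(scored)[1]
                    child = [v + random.gauss(0.0, sigma) for v in base]
                    offspring.append((loglik(child), child))
                # plus-selection: keep the mu best of parents and offspring
                scored = sorted(scored + offspring, reverse=True)[:mu]
                sigma *= 0.98          # simple deterministic step-size decay
            return scored[0]           # (best log-likelihood, best parameters)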