
    Standard Steady State Genetic Algorithms Can Hillclimb Faster than Mutation-only Evolutionary Algorithms

    Explaining to what extent the real power of genetic algorithms lies in the ability of crossover to recombine individuals into higher-quality solutions is an important problem in evolutionary computation. In this paper we show how the interplay between mutation and crossover can make genetic algorithms hillclimb faster than their mutation-only counterparts. We devise a Markov chain framework that allows us to rigorously prove an upper bound on the runtime of standard steady-state genetic algorithms hillclimbing the OneMax function. The bound establishes that steady-state genetic algorithms are 25% faster than all standard-bit-mutation-only evolutionary algorithms with static mutation rate, up to lower-order terms, for moderate population sizes. The analysis also suggests that larger populations may be faster than populations of size 2. We present a lower bound for a greedy (2+1) GA that matches the upper bound for populations larger than 2, rigorously proving that 2 individuals cannot outperform larger population sizes under greedy selection and greedy crossover, up to lower-order terms. In complementary experiments the best population size is greater than 2, and the greedy genetic algorithms are faster than standard ones, further suggesting that the derived lower bound also holds for the standard steady-state (2+1) GA.
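    To make the analysed setting concrete, here is a minimal Python sketch of a steady-state (2+1) GA hillclimbing OneMax. The specific choices (crossover applied in every step, uniform crossover, mutation rate 1/n, replace-worst survival) are illustrative assumptions rather than the paper's exact scheme.

    import random

    def onemax(x):
        # Fitness: number of one-bits.
        return sum(x)

    def steady_state_ga(n, seed=0):
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(2)]
        evals = 2
        while max(onemax(p) for p in pop) < n:
            p1, p2 = pop
            # Uniform crossover of the two parents ...
            child = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]
            # ... followed by standard bit mutation with rate 1/n.
            child = [1 - b if rng.random() < 1.0 / n else b for b in child]
            evals += 1
            # Replace the worst individual if the child is at least as good.
            worst = min(range(2), key=lambda i: onemax(pop[i]))
            if onemax(child) >= onemax(pop[worst]):
                pop[worst] = child
        return evals

    The interplay the paper studies is visible even in this toy loop: crossover can merge the one-bits collected independently by the two parents, which is what lets the combined algorithm beat mutation-only hillclimbers on OneMax.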

    On the Combined Impact of Population Size and Sub-problem Selection in MOEA/D

    This paper aims to understand and improve the working principles of decomposition-based multi-objective evolutionary algorithms. We review the design of the well-established MOEA/D framework to support the smooth integration of different strategies for sub-problem selection, while emphasizing the role of the population size and of the number of offspring created at each generation. By conducting a comprehensive empirical analysis on a wide range of multi- and many-objective combinatorial NK landscapes, we provide new insights into the combined effect of those parameters on the anytime performance of the underlying search process. In particular, we show that even a simple strategy that selects sub-problems uniformly at random outperforms existing, more sophisticated strategies. We also study the sensitivity of such strategies with respect to the ruggedness and the objective space dimension of the target problem. Comment: European Conference on Evolutionary Computation in Combinatorial Optimization, Apr 2020, Seville, Spain
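    The random sub-problem selection strategy found to perform well is easy to state in code. The sketch below, with hypothetical names and a weighted-sum scalarization on a toy bi-objective bit-string problem, is only a skeleton of the decomposition idea; the real MOEA/D framework adds Tchebycheff aggregation, mating neighbourhoods, and further machinery that is omitted here.

    import random

    def moead_random_selection(f, n_vars, n_sub=10, budget=1000, seed=0):
        rng = random.Random(seed)
        # Evenly spread weight vectors for a bi-objective (maximization) problem.
        weights = [(i / (n_sub - 1), 1 - i / (n_sub - 1)) for i in range(n_sub)]
        pop = [[rng.randint(0, 1) for _ in range(n_vars)] for _ in range(n_sub)]

        def scalar(fit, w):
            # Weighted-sum aggregation of the two objectives.
            return w[0] * fit[0] + w[1] * fit[1]

        for _ in range(budget):
            i = rng.randrange(n_sub)  # sub-problem chosen uniformly at random
            child = [1 - b if rng.random() < 1.0 / n_vars else b for b in pop[i]]
            cf = f(child)
            # Let the child replace the solutions of neighbouring sub-problems.
            for j in (i - 1, i, i + 1):
                if 0 <= j < n_sub and scalar(cf, weights[j]) >= scalar(f(pop[j]), weights[j]):
                    pop[j] = child
        return pop

    # Toy usage: one objective counts ones, the other counts zeros.
    # pop = moead_random_selection(lambda x: (sum(x), len(x) - sum(x)), n_vars=30)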

    The (1+(λ,λ)) Genetic Algorithm for Permutations

    The (1+(λ,λ)) genetic algorithm is a prime example of an evolutionary algorithm that was developed based on insights from theoretical findings. This algorithm uses crossover, and it was shown to asymptotically outperform all mutation-based evolutionary algorithms even on simple problems like OneMax. Subsequently it was studied on a number of other problems, but all of these were pseudo-Boolean. We aim at improving this situation by proposing an adaptation of the (1+(λ,λ)) genetic algorithm to permutation-based problems. Such an adaptation is required because permutations are noticeably different from bit strings in some key aspects, such as the number of possible mutations and their mutual dependence. We also present the first runtime analysis of this algorithm on a permutation-based problem called Ham, whose properties resemble those of OneMax. On this problem, where the simple mutation-based algorithms have a running time of Θ(n² log n) for problem size n, the (1+(λ,λ)) genetic algorithm finds the optimum in O(n²) fitness queries. We augment this analysis with experiments, which show that this algorithm is also fast in practice. Comment: This contribution is a slightly extended version of the paper accepted to the GECCO 2020 workshop on permutation-based problems
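    For readers unfamiliar with the algorithm, the following Python sketch shows the classic pseudo-Boolean (1+(λ,λ)) GA with the standard parameters: mutation rate p = λ/n and crossover bias c = 1/λ. The paper's contribution is an analogous scheme whose mutation and crossover operate on permutations instead of bit strings; this sketch only conveys the two-phase structure.

    import random

    def one_plus_lambda_lambda_ga(f, n, lam=4, budget=100_000, seed=0):
        # Assumes a OneMax-like maximization problem with optimum value n.
        rng = random.Random(seed)
        x = [rng.randint(0, 1) for _ in range(n)]
        while f(x) < n and budget > 0:
            # Mutation phase: flip the same number ell of bits in lam offspring.
            ell = sum(rng.random() < lam / n for _ in range(n))  # ell ~ Bin(n, lam/n)
            mutants = []
            for _ in range(lam):
                y = x[:]
                for i in rng.sample(range(n), ell):
                    y[i] = 1 - y[i]
                mutants.append(y)
            x_prime = max(mutants, key=f)
            # Crossover phase: biased uniform crossover, taking each gene
            # from the mutant winner with probability 1/lam.
            children = [[a if rng.random() < 1.0 / lam else b for a, b in zip(x_prime, x)]
                        for _ in range(lam)]
            best = max(children, key=f)
            if f(best) >= f(x):  # elitist replacement
                x = best
            budget -= 2 * lam
        return x

    # Usage on OneMax itself: one_plus_lambda_lambda_ga(sum, n=100)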

    Benchmarking a (μ+λ) Genetic Algorithm with Configurable Crossover Probability

    We investigate a family of (μ+λ) Genetic Algorithms (GAs) which create offspring either by mutation or by recombining two randomly chosen parents. By scaling the crossover probability, we can thus interpolate from a fully mutation-only algorithm towards a fully crossover-based GA. We analyze, by empirical means, how the performance depends on the interplay of population size and crossover probability. Our comparison on 25 pseudo-Boolean optimization problems reveals an advantage of crossover-based configurations on several easy optimization tasks, whereas the picture for more complex optimization problems is rather mixed. Moreover, we observe that the "fast" mutation scheme with its power-law distributed mutation strengths outperforms standard bit mutation on complex optimization tasks when it is combined with crossover, but performs worse in the absence of crossover. We then take a closer look at the surprisingly good performance of the crossover-based (μ+λ) GAs on the well-known LeadingOnes benchmark problem. We observe that the optimal crossover probability increases with increasing population size μ. At the same time, it decreases with increasing problem dimension, indicating that the advantages of crossover are not visible in the asymptotic view classically applied in runtime analysis. We therefore argue that a mathematical investigation for fixed dimensions might help us observe effects which are not visible when focusing exclusively on asymptotic performance bounds.
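    The offspring-creation rule described above is simple enough to sketch directly. In the Python below, each offspring is produced by uniform crossover of two random parents with probability pc and by standard bit mutation otherwise; the helper shows how the "fast" power-law mutation strength could be sampled if one wanted to swap it in. Names and defaults are illustrative, not the benchmarked configuration.

    import random

    def fast_mutation_strength(n, beta=1.5, rng=random):
        # "Fast" mutation: draw a strength k in {1, ..., n//2} with Pr[k] ~ k^(-beta).
        ks = list(range(1, n // 2 + 1))
        return rng.choices(ks, weights=[k ** -beta for k in ks])[0]

    def mu_plus_lambda_ga(f, n, mu=10, lam=10, pc=0.5, generations=1000, seed=0):
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(mu)]
        for _ in range(generations):
            offspring = []
            for _ in range(lam):
                if rng.random() < pc:
                    # Crossover path: uniform crossover of two distinct parents.
                    p1, p2 = rng.sample(pop, 2)
                    child = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]
                else:
                    # Mutation path: standard bit mutation with rate 1/n.
                    child = [1 - b if rng.random() < 1.0 / n else b
                             for b in pop[rng.randrange(mu)]]
                offspring.append(child)
            # Plus selection: the best mu of parents and offspring survive.
            pop = sorted(pop + offspring, key=f, reverse=True)[:mu]
        return max(pop, key=f)

    Setting pc = 0 recovers the mutation-only (μ+λ) EA and pc = 1 the fully crossover-based GA, which is exactly the interpolation the benchmark study scales through.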

    Automated Configuration of Genetic Algorithms by Tuning for Anytime Performance

    Finding the best configuration of an algorithm's hyperparameters for a given optimization problem is an important task in evolutionary computation. In this work we compare the results of four different hyperparameter tuning approaches for a family of genetic algorithms on 25 diverse pseudo-Boolean optimization problems. More precisely, we compare previously obtained results from a grid search with those obtained from three automated configuration techniques: iterated racing, mixed-integer parallel efficient global optimization, and mixed-integer evolution strategies. Using two different cost metrics, expected running time and the area under the empirical cumulative distribution function curve, we find that in several cases the best configurations with respect to expected running time are obtained when using the area under the empirical cumulative distribution function curve as the cost metric during the configuration process. Our results suggest that even when interested in expected running time performance, it might be preferable to use anytime performance measures for the configuration task. We also observe that tuning for expected running time is much more sensitive to the budget that is allocated to the target algorithms.
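    The two cost metrics compared here can be made concrete with a small sketch. The functions below are simplified, assumed forms: expected running time (ERT) charges unsuccessful runs the full budget, while the area-under-the-ECDF score rewards configurations that hit the target early; the study itself follows IOHprofiler-style conventions (log-scaled budgets, multiple targets) that this sketch omits.

    def expected_running_time(evals, successes, budget):
        # ERT: evaluations spent across all runs, divided by the number of successes.
        total = sum(e if ok else budget for e, ok in zip(evals, successes))
        hits = sum(successes)
        return total / hits if hits else float("inf")

    def ecdf_auc(evals, successes, budget):
        # Fraction of runs that have hit the target by time t, averaged over t.
        runs = len(evals)
        hit_frac = [sum(1 for e, ok in zip(evals, successes) if ok and e <= t) / runs
                    for t in range(1, budget + 1)]
        return sum(hit_frac) / budget

    # Example: three runs, the third never reaches the target within 1000 evaluations.
    # expected_running_time([120, 300, 1000], [True, True, False], 1000)  -> 710.0
    # ecdf_auc([120, 300, 1000], [True, True, False], 1000)               -> about 0.53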

    Artificial immune systems can find arbitrarily good approximations for the NP-hard number partitioning problem

    Typical artificial immune system (AIS) operators such as hypermutations with mutation potential and ageing make it possible to efficiently overcome local optima from which evolutionary algorithms (EAs) struggle to escape. Such behaviour has been shown for artificial example functions constructed especially to exhibit difficulties that EAs may encounter during the optimisation process. However, no evidence is available indicating that these two operators behave similarly also on more realistic problems. In this paper we perform an analysis for the standard NP-hard Partition problem from combinatorial optimisation and rigorously show that hypermutations and ageing allow AISs to efficiently escape from local optima where standard EAs require exponential time. As a result we prove that while EAs and random local search (RLS) may get trapped on 4/3 approximations, AISs find arbitrarily good approximate solutions of ratio (1+ε) within n ε^(-(2/ε)-1) (1-ε)^(-2) e^3 2^(2/ε) + 2n^3 2^(2/ε) + 2n^3 function evaluations in expectation. This expectation is polynomial in the problem size and exponential only in 1/ε.
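    The two operators named above are easy to confuse with plain mutation, so a sketch may help. The Python below is a simplified, assumed rendering: hypermutation with mutation potential flips up to n distinct bits one after another and stops at the first constructive (strictly improving) mutation, while ageing retires individuals that survive too long, forcing a partial restart. The operators analysed in the paper differ in details such as age inheritance and the exact population update.

    import random

    def hypermutation_fcm(x, f, rng):
        # Flip up to len(x) distinct bits, stopping at the first strict improvement.
        base, y = f(x), x[:]
        for i in rng.sample(range(len(x)), len(x)):
            y[i] = 1 - y[i]
            if f(y) > base:  # first constructive mutation
                return y
        return y  # no improvement found: all flips executed

    def ais_with_ageing(f, n, mu=3, max_age=50, generations=1000, seed=0):
        rng = random.Random(seed)
        pop = [([rng.randint(0, 1) for _ in range(n)], 0) for _ in range(mu)]
        for _ in range(generations):
            parent = pop[rng.randrange(mu)][0]
            child = hypermutation_fcm(parent, f, rng)
            # Ageing: everyone grows older; too-old individuals are removed.
            pop = [(ind, age + 1) for ind, age in pop if age + 1 <= max_age]
            pop.append((child, 0))
            while len(pop) < mu:  # refill with random immigrants after a die-off
                pop.append(([rng.randint(0, 1) for _ in range(n)], 0))
            pop = sorted(pop, key=lambda t: f(t[0]), reverse=True)[:mu]
        return max(pop, key=lambda t: f(t[0]))[0]

    The escape mechanism is visible in the sketch: once the population stagnates on a local optimum, ageing eventually removes it wholesale and the random refill restarts the search elsewhere, while hypermutation can jump over wide fitness valleys in a single operator application.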