    The Right Mutation Strength for Multi-Valued Decision Variables

    The most common representation in evolutionary computation is the bit string. This is ideal for modelling binary decision variables, but less useful for variables taking more values. With very little theoretical work existing on how to use evolutionary algorithms for such optimization problems, we study the run time of simple evolutionary algorithms on some OneMax-like functions defined over $\Omega = \{0, 1, \dots, r-1\}^n$. More precisely, we regard a variety of problem classes requesting the component-wise minimization of the distance to an unknown target vector $z \in \Omega$. For such problems we see a crucial difference in how we extend the standard-bit mutation operator to these multi-valued domains. While it is natural to select each position of the solution vector to be changed independently with probability $1/n$, there are various ways to then change such a position. If we change each selected position to a random value different from the original one, we obtain an expected run time of $\Theta(nr \log n)$. If we change each selected position by either $+1$ or $-1$ (random choice), the optimization time reduces to $\Theta(nr + n \log n)$. If we use a random mutation strength $i \in \{0, 1, \ldots, r-1\}^n$ with probability inversely proportional to $i$ and change the selected position by either $+i$ or $-i$ (random choice), then the optimization time becomes $\Theta(n \log(r)(\log(n) + \log(r)))$, bringing down the dependence on $r$ from linear to polylogarithmic. One of our results depends on a new variant of the lower bounding multiplicative drift theorem. Comment: an extended abstract of this work is to appear at GECCO 201
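
    As an illustration of the three mutation operators compared in this abstract, the following Python sketch applies them to a solution vector over {0, ..., r-1}. It is a minimal sketch, not the authors' code: the function names, the clamping at the domain borders, and the exact range of the harmonic mutation strength (here 1 to r-1) are assumptions made for readability.

    import random

    def mutate_uniform(x, r):
        """Change each position w.p. 1/n to a uniformly random *different* value."""
        n = len(x)
        y = list(x)
        for j in range(n):
            if random.random() < 1.0 / n:
                y[j] = random.choice([v for v in range(r) if v != x[j]])
        return y

    def mutate_unit_step(x, r):
        """Change each position w.p. 1/n by +1 or -1 (clamped to the domain)."""
        n = len(x)
        y = list(x)
        for j in range(n):
            if random.random() < 1.0 / n:
                y[j] = min(r - 1, max(0, x[j] + random.choice([-1, +1])))
        return y

    def mutate_harmonic(x, r):
        """Change each position w.p. 1/n by +-i, with strength i drawn with probability proportional to 1/i."""
        n = len(x)
        strengths = list(range(1, r))
        weights = [1.0 / i for i in strengths]
        y = list(x)
        for j in range(n):
            if random.random() < 1.0 / n:
                i = random.choices(strengths, weights=weights)[0]
                y[j] = min(r - 1, max(0, x[j] + random.choice([-i, +i])))
        return y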

    Runtime Analysis of the $(1+(\lambda,\lambda))$ Genetic Algorithm on Random Satisfiable 3-CNF Formulas

    The $(1+(\lambda,\lambda))$ genetic algorithm, first proposed at GECCO 2013, showed a surprisingly good performance on some optimization problems. The theoretical analysis so far was restricted to the OneMax test function, where this GA profited from the perfect fitness-distance correlation. In this work, we conduct a rigorous runtime analysis of this GA on random 3-SAT instances in the planted solution model having at least logarithmic average degree, which are known to have a weaker fitness-distance correlation. We prove that this GA with a fixed, not too large population size again obtains runtimes better than $\Theta(n \log n)$, which is a lower bound for most evolutionary algorithms on pseudo-Boolean problems with a unique optimum. However, the self-adjusting version of the GA risks reaching population sizes at which the intermediate selection of the GA, due to the weaker fitness-distance correlation, is not able to distinguish a profitable offspring from others. We show that this problem can be overcome by equipping the self-adjusting GA with an upper limit for the population size. Apart from sparse instances, this limit can be chosen in a way that the asymptotic performance does not worsen compared to the idealistic OneMax case. Overall, this work shows that the $(1+(\lambda,\lambda))$ GA can provably have a good performance on combinatorial search and optimization problems also in the presence of a weaker fitness-distance correlation. Comment: An extended abstract of this report will appear in the proceedings of the 2017 Genetic and Evolutionary Computation Conference (GECCO 2017)
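
    For readers unfamiliar with the algorithm, the sketch below shows the usual structure of the $(1+(\lambda,\lambda))$ GA (mutation phase, biased crossover phase, one-fifth-rule self-adjustment of $\lambda$) together with an upper limit on the population size as discussed in the abstract. It is a hedged illustration on a generic pseudo-Boolean fitness function, not the paper's exact 3-CNF setup: the cap lam_max, the update factor F, and all function names are assumptions.

    import random

    def one_plus_ll_ga(fitness, n, max_evals=100_000, lam_max=None, F=1.5):
        """Sketch of the (1+(lambda,lambda)) GA with a capped, self-adjusting lambda."""
        if lam_max is None:
            lam_max = n                      # assumed cap; the paper derives a problem-dependent limit
        x = [random.randint(0, 1) for _ in range(n)]
        fx = fitness(x)
        lam, evals = 1.0, 1
        while evals < max_evals:
            k = max(1, round(lam))
            # Mutation phase: flip ell ~ Bin(n, lambda/n) random bits in each of k offspring.
            ell = sum(random.random() < k / n for _ in range(n))
            best_mut, best_mut_f = x, float("-inf")
            for _ in range(k):
                y = list(x)
                for j in random.sample(range(n), ell):
                    y[j] = 1 - y[j]
                fy = fitness(y)
                evals += 1
                if fy > best_mut_f:
                    best_mut, best_mut_f = y, fy
            # Crossover phase: take each bit from the best mutant with probability 1/lambda.
            best_cross, best_cross_f = x, fx
            for _ in range(k):
                z = [best_mut[j] if random.random() < 1.0 / k else x[j] for j in range(n)]
                fz = fitness(z)
                evals += 1
                if fz > best_cross_f:
                    best_cross, best_cross_f = z, fz
            # Elitist replacement and one-fifth-rule adjustment of lambda, capped at lam_max.
            if best_cross_f > fx:
                x, fx = best_cross, best_cross_f
                lam = max(1.0, lam / F)
            else:
                lam = min(lam_max, lam * F ** 0.25)
        return x, fx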

    From Understanding Genetic Drift to a Smart-Restart Parameter-less Compact Genetic Algorithm

    One of the key difficulties in using estimation-of-distribution algorithms is choosing the population size(s) appropriately: too small values lead to genetic drift, which can cause enormous difficulties. In the regime with no genetic drift, however, the runtime is often roughly proportional to the population size, which renders large population sizes inefficient. Based on a recent quantitative analysis of which population sizes lead to genetic drift, we propose a parameter-less version of the compact genetic algorithm that automatically finds a suitable population size without spending too much time in situations unfavorable due to genetic drift. We prove a mathematical runtime guarantee for this algorithm and conduct an extensive experimental analysis on four classic benchmark problems, both without and with additive centered Gaussian posterior noise. The former shows that under a natural assumption, our algorithm has a performance very similar to the one obtainable from the best problem-specific population size. The latter confirms that missing the right population size in the original cGA can be detrimental and that previous theory-based suggestions for the population size can be far away from the right values; it also shows that both our algorithm and a previously proposed parameter-less variant of the cGA based on parallel runs avoid such pitfalls. Comparing the two parameter-less approaches, ours profits from its ability to abort runs that are likely to be stuck in a genetic drift situation. Comment: 4 figures. Extended version of a paper appearing at GECCO 202
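
    The following sketch illustrates the general idea of such a parameter-less scheme: run the compact GA with a hypothetical population size K, abort the run once a budget tied to K is used up (so as not to linger in a genetic-drift regime), and restart with a doubled K. This is only an assumed reading of the abstract, not the paper's algorithm: the per-run budget budget_factor * K * K, the doubling schedule, and the known target fitness used as a stopping criterion are illustrative choices.

    import random

    def cga_run(fitness, n, K, max_iters, target):
        """One compact-GA run with hypothetical population size K and an iteration budget."""
        p = [0.5] * n                                  # frequency vector
        best, best_f = None, float("-inf")
        for _ in range(max_iters):
            x = [int(random.random() < pi) for pi in p]
            y = [int(random.random() < pi) for pi in p]
            fx, fy = fitness(x), fitness(y)
            if fy > fx:                                # let x denote the winner
                x, y, fx = y, x, fy
            if fx > best_f:
                best, best_f = x, fx
            if best_f >= target:
                return best, best_f
            for j in range(n):                         # shift frequencies toward the winner
                if x[j] != y[j]:
                    p[j] += 1.0 / K if x[j] == 1 else -1.0 / K
                    p[j] = min(1.0 - 1.0 / n, max(1.0 / n, p[j]))   # usual frequency borders
        return best, best_f

    def smart_restart_cga(fitness, n, target, K0=8, budget_factor=16):
        """Double K until some run reaches the target fitness within its budget."""
        K = K0
        while True:
            best, best_f = cga_run(fitness, n, K, max_iters=budget_factor * K * K, target=target)
            if best_f >= target:
                return best
            K *= 2                                     # restart with a larger population size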

    Runtime analysis of crowding mechanisms for multimodal optimisation

    Many real-world optimisation problems lead to multimodal domains and require the identification of multiple optima. Crowding methods have been developed to maintain population diversity, to investigate many peaks in parallel and to reduce genetic drift. We present the first rigorous runtime analyses of probabilistic crowding and generalised crowding, embedded in a $(\mu+1)$ EA. In probabilistic crowding the offspring compete with their parent in a fitness-proportional selection. Generalised crowding decreases the fitness of the inferior solution by a scaling factor during selection. We consider the bimodal function TwoMax and introduce a novel and natural notion for functions with bounded gradients. For a broad range of such functions we prove that probabilistic crowding needs exponential time, with overwhelming probability, to find solutions significantly closer to any global optimum than those found by random search. Even when the fitness function is scaled exponentially, probabilistic crowding still fails badly. Only if the base of the exponential is linear in the problem size does probabilistic crowding become efficient on TwoMax. A similar threshold behaviour holds for generalised crowding on TwoMax with respect to the scaling factor. Our theoretical results are accompanied by experiments for TwoMax showing that the threshold behaviours also apply to the best fitness found.
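
    As a concrete illustration of the selection rule analysed here, the sketch below embeds probabilistic crowding in a $(\mu+1)$ EA on TwoMax: the offspring duels its parent and wins with probability proportional to its fitness. Function and parameter names are illustrative assumptions; generalised crowding, as described in the abstract, would instead scale the fitness of the inferior competitor by a factor before the duel.

    import random

    def twomax(x):
        """TwoMax: the better of the number of ones and the number of zeros."""
        ones = sum(x)
        return max(ones, len(x) - ones)

    def mu_plus_one_prob_crowding(n, mu=10, generations=10_000):
        """(mu+1) EA in which the offspring replaces its parent via probabilistic crowding."""
        pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(mu)]
        for _ in range(generations):
            i = random.randrange(mu)
            parent = pop[i]
            # Standard bit mutation of the chosen parent.
            child = [1 - b if random.random() < 1.0 / n else b for b in parent]
            fp, fc = twomax(parent), twomax(child)
            # Probabilistic crowding: fitness-proportional duel between parent and child.
            if random.random() < fc / (fc + fp):
                pop[i] = child
        return max(pop, key=twomax)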

    Markov Chain Analysis of Evolution Strategies on a Linear Constraint Optimization Problem

    This paper analyses a $(1,\lambda)$-Evolution Strategy, a randomised comparison-based adaptive search algorithm, on a simple constrained optimisation problem. The algorithm uses resampling to handle the constraint and optimises a linear function with a linear constraint. Two cases are investigated: first the case where the step-size is constant, and second the case where the step-size is adapted using path length control. We exhibit for each case a Markov chain whose stability analysis would allow us to deduce the divergence of the algorithm depending on its internal parameters. We show divergence at a constant rate when the step-size is constant. We sketch that with step-size adaptation geometric divergence takes place. Our results complement previous studies where stability was assumed. Comment: Amir Hussain; Zhigang Zeng; Nian Zhang. IEEE Congress on Evolutionary Computation, Jul 2014, Beijing, China
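
    To make the setting concrete, here is a minimal sketch of a $(1,\lambda)$-ES with constant step-size on a linear function with one linear constraint, handling the constraint by resampling infeasible offspring. The specific objective, constraint, and parameter values are placeholders rather than the paper's exact model, and the path-length-control (step-size adaptation) variant is omitted.

    import random

    def sample_feasible(x, sigma, feasible):
        """Resample an isotropic Gaussian offspring until it satisfies the constraint."""
        while True:
            y = [xi + sigma * random.gauss(0.0, 1.0) for xi in x]
            if feasible(y):
                return y

    def one_comma_lambda_es(dim=10, lam=10, sigma=1.0, iterations=1000):
        f = lambda z: z[0]                    # linear objective, to be maximized
        feasible = lambda z: z[1] <= 0.0      # linear constraint
        x = [0.0] * dim
        for _ in range(iterations):
            offspring = [sample_feasible(x, sigma, feasible) for _ in range(lam)]
            x = max(offspring, key=f)         # comma selection: the parent is always discarded
        return x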