
    Runtime Analysis of the $(1+(\lambda,\lambda))$ Genetic Algorithm on Random Satisfiable 3-CNF Formulas

    The $(1+(\lambda,\lambda))$ genetic algorithm, first proposed at GECCO 2013, showed a surprisingly good performance on some optimization problems. The theoretical analysis so far was restricted to the OneMax test function, where this GA profited from the perfect fitness-distance correlation. In this work, we conduct a rigorous runtime analysis of this GA on random 3-SAT instances in the planted solution model having at least logarithmic average degree, which are known to have a weaker fitness-distance correlation. We prove that this GA with fixed not too large population size again obtains runtimes better than $\Theta(n \log n)$, which is a lower bound for most evolutionary algorithms on pseudo-Boolean problems with unique optimum. However, the self-adjusting version of the GA risks reaching population sizes at which the intermediate selection of the GA, due to the weaker fitness-distance correlation, is not able to distinguish a profitable offspring from others. We show that this problem can be overcome by equipping the self-adjusting GA with an upper limit for the population size. Apart from sparse instances, this limit can be chosen in a way that the asymptotic performance does not worsen compared to the idealistic OneMax case. Overall, this work shows that the $(1+(\lambda,\lambda))$ GA can provably have a good performance on combinatorial search and optimization problems also in the presence of a weaker fitness-distance correlation. Comment: An extended abstract of this report will appear in the proceedings of the 2017 Genetic and Evolutionary Computation Conference (GECCO 2017).
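
    As background, the following is a minimal Python sketch of the $(1+(\lambda,\lambda))$ GA with a fixed population size $\lambda$, mutation rate $\lambda/n$ and crossover bias $1/\lambda$; the self-adjusting variant and the 3-SAT fitness studied in the paper are omitted, and the function names and the OneMax usage example are illustrative assumptions, not taken from the paper.

```python
import random

def one_plus_lambda_lambda_ga(fitness, n, lam, max_evals=100_000):
    """Minimal (1+(lambda,lambda)) GA sketch with a static population size lam.

    Uses the standard parameterisation: mutation rate p = lam/n, crossover
    bias c = 1/lam. The self-adjusting population size is not implemented.
    """
    x = [random.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    evals = 1
    while evals < max_evals and fx < n:  # assumes the optimum has fitness n (as for OneMax)
        # Mutation phase: draw ell ~ Bin(n, lam/n), flip exactly ell bits in each of lam offspring.
        ell = sum(random.random() < lam / n for _ in range(n))
        mutants = []
        for _ in range(lam):
            y = x[:]
            for i in random.sample(range(n), ell):
                y[i] = 1 - y[i]
            mutants.append((fitness(y), y))
            evals += 1
        fy, y = max(mutants, key=lambda t: t[0])  # intermediate selection of the best mutant
        # Crossover phase: biased uniform crossover between the parent x and the best mutant y.
        best_fz, best_z = fy, y  # in this sketch the best mutant also competes
        for _ in range(lam):
            z = [yi if random.random() < 1 / lam else xi for xi, yi in zip(x, y)]
            fz = fitness(z)
            evals += 1
            if fz > best_fz:
                best_fz, best_z = fz, z
        if best_fz >= fx:  # elitist replacement of the parent
            x, fx = best_z, best_fz
    return x, fx

# Illustrative use on OneMax (the number of ones), the idealistic benchmark mentioned above.
if __name__ == "__main__":
    best, value = one_plus_lambda_lambda_ga(fitness=sum, n=100, lam=8)
    print(value)
```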

    Geometric semantic genetic programming for recursive boolean programs

    Geometric Semantic Genetic Programming (GSGP) induces a unimodal fitness landscape for any problem that consists in finding a function fitting given input/output examples. Most of the work around GSGP to date has focused on real-world applications and on improving the originally proposed search operators, rather than on broadening its theoretical framework to new domains. We extend GSGP to recursive programs, a notoriously challenging domain with highly discontinuous fitness landscapes. We focus on programs that map variable-length Boolean lists to Boolean values, and design search operators that are provably efficient in the training phase and attain perfect generalization. Computational experiments complement the theory and demonstrate the superiority of the new operators to the conventional ones. This work provides new insights into the relations between program syntax and semantics, search operators and fitness landscapes, also for more general recursive domains.
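
    The operators for recursive programs introduced in the paper are not reproduced here; as background, the sketch below shows the standard GSGP crossover and mutation for ordinary Boolean programs, where offspring are constructed so that their semantics is a pointwise combination of the parents' semantics. The representation of programs as Python callables and all names are illustrative assumptions.

```python
import random

def gsgp_crossover(p1, p2, mask):
    """Standard Boolean GSGP crossover: for every input x the offspring returns
    (p1(x) and mask(x)) or (p2(x) and not mask(x)), where mask is a randomly
    generated Boolean program."""
    return lambda x: (p1(x) and mask(x)) or (p2(x) and not mask(x))

def gsgp_mutation(p, minterm):
    """Standard Boolean GSGP mutation: force the output towards 1 (or 0) on the
    inputs covered by a random minterm."""
    if random.random() < 0.5:
        return lambda x: p(x) or minterm(x)
    return lambda x: p(x) and not minterm(x)

# Illustrative use with hand-written parents and mask over 3-bit inputs.
if __name__ == "__main__":
    parent1 = lambda x: bool(x[0] ^ x[1])
    parent2 = lambda x: bool(x[1] ^ x[2])
    mask = lambda x: bool(x[0] and not x[2])
    child = gsgp_crossover(parent1, parent2, mask)
    print([child(x) for x in [(0, 1, 0), (1, 1, 1)]])
```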

    A comparison of crossover control mechanisms within single-point selection hyper-heuristics using HyFlex

    Hyper-heuristics are search methodologies which operate at a higher level of abstraction than traditional search and optimisation techniques. Rather than operating on a search space of solutions directly, a hyper-heuristic searches a space of low-level heuristics or heuristic components. An iterative selection hyper-heuristic operates on a single solution, selecting and applying a low-level heuristic at each step before deciding whether to accept the resulting solution. Crossover low-level heuristics are often included in modern selection hyper-heuristic frameworks; however, as they require multiple solutions to operate, a strategy is required to manage the potential solutions used as input. In this paper we investigate the use of crossover control schemes within two existing selection hyper-heuristics and observe the difference in performance when the method for managing potential solutions for crossover is modified. Firstly, we use the crossover control scheme of AdapHH, the winner of an international competition in heuristic search, in a Modified Choice Function - All Moves selection hyper-heuristic. Secondly, we replace the crossover control scheme within AdapHH with another method taken from the literature. We observe that the performance of selection hyper-heuristics using crossover low-level heuristics is not independent of the choice of strategy for managing input solutions to these operators.
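
    For reference, a single-point selection hyper-heuristic follows a simple select-apply-accept loop; the sketch below uses a naive strategy (a small memory of previously accepted solutions) to supply the second input required by crossover low-level heuristics. This stands in for, and is much simpler than, the AdapHH and Modified Choice Function schemes compared in the paper; all names and parameters are illustrative assumptions.

```python
import random

def selection_hyper_heuristic(initial, mutation_llhs, crossover_llhs, fitness,
                              iterations=10_000, memory_size=5, crossover_rate=0.3):
    """Generic single-point selection hyper-heuristic sketch (minimisation).

    Each step selects a low-level heuristic (here uniformly at random), applies it,
    and accepts the candidate if it is no worse. Crossover heuristics take a second
    solution drawn from a small memory of previously accepted solutions.
    """
    current, f_current = initial, fitness(initial)
    memory = [current]
    for _ in range(iterations):
        if crossover_llhs and random.random() < crossover_rate:
            heuristic = random.choice(crossover_llhs)
            candidate = heuristic(current, random.choice(memory))
        else:
            heuristic = random.choice(mutation_llhs)
            candidate = heuristic(current)
        f_candidate = fitness(candidate)
        if f_candidate <= f_current:  # accept non-worsening moves
            current, f_current = candidate, f_candidate
            memory = (memory + [current])[-memory_size:]
    return current, f_current

# Illustrative use: minimise the number of ones in a bit list.
if __name__ == "__main__":
    def flip_one_bit(s):
        i = random.randrange(len(s))
        return s[:i] + [s[i] ^ 1] + s[i + 1:]

    def uniform_crossover(a, b):
        return [random.choice(pair) for pair in zip(a, b)]

    start = [random.randint(0, 1) for _ in range(50)]
    best, value = selection_hyper_heuristic(start, [flip_one_bit], [uniform_crossover], fitness=sum)
    print(value)
```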

    Benchmarking a $(\mu+\lambda)$ Genetic Algorithm with Configurable Crossover Probability

    We investigate a family of $(\mu+\lambda)$ Genetic Algorithms (GAs) which creates offspring either from mutation or by recombining two randomly chosen parents. By scaling the crossover probability, we can thus interpolate from a fully mutation-only algorithm towards a fully crossover-based GA. We analyze, by empirical means, how the performance depends on the interplay of population size and the crossover probability. Our comparison on 25 pseudo-Boolean optimization problems reveals an advantage of crossover-based configurations on several easy optimization tasks, whereas the picture for more complex optimization problems is rather mixed. Moreover, we observe that the ``fast'' mutation scheme with its power-law distributed mutation strengths outperforms standard bit mutation on complex optimization tasks when it is combined with crossover, but performs worse in the absence of crossover. We then take a closer look at the surprisingly good performance of the crossover-based $(\mu+\lambda)$ GAs on the well-known LeadingOnes benchmark problem. We observe that the optimal crossover probability increases with increasing population size $\mu$. At the same time, it decreases with increasing problem dimension, indicating that the advantages of crossover are not visible in the asymptotic view classically applied in runtime analysis. We therefore argue that a mathematical investigation for fixed dimensions might help us observe effects which are not visible when focusing exclusively on asymptotic performance bounds.
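
    A minimal sketch of such a $(\mu+\lambda)$ GA is given below, assuming uniform crossover of two randomly chosen parents and standard bit mutation; the paper's ``fast'' power-law mutation and benchmark suite are not reproduced, and all names and the LeadingOnes usage example are illustrative.

```python
import random

def mu_plus_lambda_ga(fitness, n, mu, lam, crossover_prob, generations=1_000):
    """(mu+lambda) GA sketch: each offspring is created either by standard bit
    mutation of a random parent or, with probability crossover_prob, by uniform
    crossover of two randomly chosen parents (requires mu >= 2). Plus-selection
    keeps the best mu individuals out of parents and offspring."""
    population = [[random.randint(0, 1) for _ in range(n)] for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            if random.random() < crossover_prob:
                p1, p2 = random.sample(population, 2)
                child = [random.choice(bits) for bits in zip(p1, p2)]  # uniform crossover
            else:
                parent = random.choice(population)
                child = [b ^ (random.random() < 1 / n) for b in parent]  # standard bit mutation
            offspring.append(child)
        population = sorted(population + offspring, key=fitness, reverse=True)[:mu]
    return max(population, key=fitness)

# Illustrative use on LeadingOnes (length of the longest prefix of ones).
def leading_ones(x):
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

if __name__ == "__main__":
    best = mu_plus_lambda_ga(leading_ones, n=64, mu=10, lam=10, crossover_prob=0.5)
    print(leading_ones(best))
```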

    Black-Box Complexity of the Binary Value Function

    The binary value function, or BinVal, has appeared in several studies in theory of evolutionary computation as one of the extreme examples of linear pseudo-Boolean functions. Its unbiased black-box complexity was previously shown to be at most $\lceil \log_2 n \rceil + 2$, where $n$ is the problem size. We augment it with an upper bound of $\log_2 n + 2.42141558 - o(1)$, which is more precise for many values of $n$. We also present a lower bound of $\log_2 n + 1.1186406 - o(1)$. Additionally, we prove that BinVal is an easiest function among all unimodal pseudo-Boolean functions, at least for unbiased algorithms. Comment: 24 pages, one figure. An extended two-page abstract of this work will appear in proceedings of the Genetic and Evolutionary Computation Conference, GECCO'1
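
    For concreteness, BinVal interprets a bit string as a binary number, i.e. it is the linear function whose bit weights are the powers of two; a short sketch follows (taking the first bit as the most significant, one common convention).

```python
def binval(x):
    """BinVal: the bit string read as a binary number, sum of 2^(n-1-i) * x[i].

    The first bit is treated as the most significant here; the reverse ordering
    is equally common in the literature.
    """
    value = 0
    for bit in x:
        value = (value << 1) | bit
    return value

assert binval([1, 0, 1, 1]) == 11  # 1011 in binary = 8 + 2 + 1
```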