
    New insights on neutral binary representations for evolutionary optimization

    This paper studies a family of redundant binary representations NNg(l, k), which are based on the mathematical formulation of error control codes, in particular linear block codes, which are used to add redundancy and neutrality to the representations. The analysis of the properties of uniformity, connectivity, synonymity, locality and topology of the NNg(l, k) representations is presented, as well as the way a (1+1)-ES can be modeled using Markov chains and applied to NK fitness landscapes with adjacent neighborhood. The results show that it is possible to design synonymously redundant representations that allow an increase of the connectivity between phenotypes. For easy problems, synonymously redundant NNg(l, k) representations with high locality, where high connectivity is not required, are the most suitable for an efficient evolutionary search. On the contrary, for difficult problems, NNg(l, k) representations with low locality, intermediate to high connectivity and intermediate synonymity are the best ones. These results lead to the conclusion that the NNg(l, k) representations that perform best on NK fitness landscapes with adjacent neighborhood do not exhibit extreme values of any of the properties commonly considered in the evolutionary computation literature. This conclusion is contrary to what one would expect from the recommendations in the literature, and it may help explain the current difficulty of formulating redundant representations that prove successful in evolutionary computation.
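    For intuition, here is a minimal sketch (in Python) of how a linear block code can induce a synonymously redundant genotype-to-phenotype mapping: each genotype is decoded to the nearest codeword, and that codeword's message is read off as the phenotype, so several genotypes map to the same phenotype. This is not the paper's NNg(l, k) construction; the (7,4) generator matrix, the nearest-codeword decoding and the bit lengths are illustrative assumptions.

```python
import itertools
import numpy as np

# Illustrative (7,4) generator matrix over GF(2); any linear block code would do.
G = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
], dtype=int)

def encode(pheno_bits):
    """Map a length-4 phenotype to its length-7 codeword (mod-2 matrix product)."""
    return (np.array(pheno_bits) @ G) % 2

# Precompute the codebook: codeword -> phenotype.
codebook = {tuple(encode(p)): p for p in itertools.product([0, 1], repeat=4)}

def decode(genotype):
    """Phenotype of a redundant genotype = message of the nearest codeword (Hamming distance)."""
    genotype = np.array(genotype)
    best = min(codebook, key=lambda cw: int(np.sum(genotype != np.array(cw))))
    return codebook[best]

# Every 7-bit genotype decodes to some 4-bit phenotype, so 2^7 genotypes
# redundantly cover 2^4 phenotypes; genotypes decoding to the same codeword
# are synonyms in the sense discussed above.
if __name__ == "__main__":
    g = [1, 0, 1, 1, 0, 0, 1]
    print("genotype", g, "-> phenotype", decode(g))
```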

    A survey of techniques for characterising fitness landscapes and some possible ways forward

    Real-world optimisation problems are often very complex. Metaheuristics have been successful in solving many of these problems, but the difficulty of choosing the best approach can be a huge challenge for practitioners. One response to this dilemma is to use fitness landscape analysis to better understand problems before deciding on approaches to solving them. However, despite extensive research on fitness landscape analysis and a large number of developed techniques, very few techniques are used in practice. This could be because fitness landscape analysis in itself can be complex. In an attempt to make fitness landscape analysis techniques accessible, this paper provides an overview of techniques from the 1980s to the present. Attributes that are important for practical implementation are highlighted, and ways of adapting techniques to be more feasible or appropriate are suggested. The survey reveals the wide range of factors that can influence problem difficulty, emphasising the need for a shift in focus away from predicting problem hardness towards measuring characteristics. It is hoped that this survey will spark renewed interest in the field of understanding complex optimisation problems and ultimately lead to better decision making on the use of appropriate metaheuristics.
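    As a concrete example of the kind of technique such a survey covers, below is a minimal sketch of random-walk autocorrelation, a classic ruggedness measure: the fitness values along a random one-bit-flip walk are correlated at a given lag. The walk length, lag and OneMax test function are assumptions chosen only for illustration.

```python
import random

def random_walk_autocorrelation(fitness, n, steps=2000, lag=1, seed=0):
    """Estimate the lag-k autocorrelation of fitness values along a random
    one-bit-flip walk on {0,1}^n (a standard ruggedness measure)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    values = []
    for _ in range(steps):
        values.append(fitness(x))
        i = rng.randrange(n)          # flip one uniformly chosen bit
        x[i] ^= 1
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    cov = sum((values[t] - mean) * (values[t + lag] - mean)
              for t in range(len(values) - lag)) / (len(values) - lag)
    return cov / var if var > 0 else 1.0

if __name__ == "__main__":
    # Smooth example landscape (OneMax): autocorrelation close to 1.
    onemax = lambda x: sum(x)
    print(random_walk_autocorrelation(onemax, n=50))
    # An uncorrelated (random) landscape would give a value near 0.
```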

    Explaining Adaptation in Genetic Algorithms With Uniform Crossover: The Hyperclimbing Hypothesis

    The hyperclimbing hypothesis is a hypothetical explanation for adaptation in genetic algorithms with uniform crossover (UGAs). Hyperclimbing is an intuitive, general-purpose, non-local search heuristic applicable to discrete product spaces with rugged or stochastic cost functions. The strength of this heuristic lies in its insusceptibility to local optima when the cost function is deterministic, and its tolerance for noise when the cost function is stochastic. Hyperclimbing works by decimating a search space, i.e. by iteratively fixing the values of small numbers of variables. The hyperclimbing hypothesis holds that UGAs work by implementing efficient hyperclimbing. Proof of concept for this hypothesis comes from the use of a novel analytic technique involving the exploitation of algorithmic symmetry. We have also obtained experimental results showing that a simple tweak inspired by the hyperclimbing hypothesis dramatically improves the performance of a UGA on large, random instances of MAX-3SAT and the Sherrington-Kirkpatrick spin glass problem.
    Comment: 22 pages, 5 figures
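    To make the decimation idea concrete, here is a minimal sketch of a hyperclimbing-style search that repeatedly estimates, by sampling, which unfixed variable setting gives the best average cost and fixes it. This is a paraphrase of the general heuristic described above, not the authors' analysis or their UGA tweak; the sample size and the noisy test cost are assumptions.

```python
import random

def hyperclimb(cost, n, samples=200, fix_per_round=1, seed=0):
    """Decimation-style search: iteratively fix the values of a few variables,
    choosing the variable/value whose conditional sample mean of the cost is best.
    `cost` takes a full assignment (list of 0/1) and may be stochastic."""
    rng = random.Random(seed)
    fixed = {}                                  # index -> fixed value
    free = set(range(n))

    def sample_mean(idx, val):
        total = 0.0
        for _ in range(samples):
            x = [fixed.get(i, rng.randint(0, 1)) for i in range(n)]
            x[idx] = val
            total += cost(x)
        return total / samples

    while free:
        for _ in range(min(fix_per_round, len(free))):
            # Pick the (variable, value) pair with the lowest estimated mean cost.
            idx, val = min(((i, v) for i in free for v in (0, 1)),
                           key=lambda iv: sample_mean(*iv))
            fixed[idx] = val
            free.discard(idx)
    return [fixed[i] for i in range(n)]

if __name__ == "__main__":
    # Noisy OneMax-style cost (to be minimised); the noise model is an assumption.
    noisy = lambda x: (len(x) - sum(x)) + random.gauss(0, 1)
    print(hyperclimb(noisy, n=20))
```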

    Self adaptation in evolutionary algorithms

    Evolutionary Algorithms are search algorithms based on the Darwinian metaphor of “Natural Selection”. Typically these algorithms maintain a population of individual solutions, each of which has a fitness attached to it that in some way reflects the quality of the solution. The search proceeds via the iterative generation, evaluation and possible incorporation of new individuals based on the current population, using a number of parameterised genetic operators. In this thesis the phenomenon of Self Adaptation of the genetic operators is investigated. A new framework for classifying adaptive algorithms is proposed, based on the scope of the adaptation and on the nature of the transition function guiding the search through the space of possible configurations of the algorithm. Mechanisms are investigated for achieving the self adaptation of recombination and mutation operators within a genetic algorithm, and means of combining them are investigated. These are shown to produce significantly better results than any of the combinations of fixed operators tested, across a range of problem types. These new operators reduce the need for the designer of an algorithm to select appropriate choices of operators and parameters, thus aiding the implementation of genetic algorithms. The nature of the evolving search strategies is investigated and explained in terms of the known properties of the landscapes used, and it is suggested how observations of evolving strategies on unknown landscapes may be used to categorise them and guide further changes in other facets of the genetic algorithm. This work provides a contribution towards the study of adaptation in Evolutionary Algorithms, and towards the design of robust search algorithms for “real world” problems.
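    One widely used self-adaptation mechanism, in the spirit of those investigated here, attaches a mutation rate to each individual and perturbs that rate before applying it, so that good rates hitchhike with good solutions. The sketch below illustrates the idea; the lognormal perturbation, population size and OneMax test problem are assumptions rather than the thesis's exact operators.

```python
import math
import random

rng = random.Random(1)

def mutate(individual):
    """Self-adaptive bit-flip mutation: the genome carries its own mutation rate,
    which is perturbed lognormally and then used to flip bits."""
    bits, rate = individual
    rate = min(0.5, max(1.0 / len(bits), rate * math.exp(rng.gauss(0, 0.2))))
    child = [b ^ (rng.random() < rate) for b in bits]
    return (child, rate)

def evolve(fitness, n=50, pop_size=20, generations=200):
    pop = [([rng.randint(0, 1) for _ in range(n)], 1.0 / n) for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [mutate(rng.choice(pop)) for _ in range(pop_size)]
        # (mu + lambda)-style truncation selection on fitness.
        pop = sorted(pop + offspring, key=lambda ind: fitness(ind[0]), reverse=True)[:pop_size]
    return pop[0]

if __name__ == "__main__":
    best_bits, best_rate = evolve(fitness=sum)       # OneMax as a stand-in problem
    print(sum(best_bits), "ones; evolved mutation rate", round(best_rate, 4))
```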

    Landscapes and Effective Fitness

    The concept of a fitness landscape arose in theoretical biology, while that of effective fitness has its origin in evolutionary computation. Both have emerged as useful conceptual tools with which to understand the dynamics of evolutionary processes, especially in the presence of complex genotype-phenotype relations. In this contribution we attempt to provide a unified treatment of these two approaches, discussing their advantages and disadvantages in the context of some simple models. We also discuss how fitness and effective fitness change under various transformations of the configuration space of the underlying genetic model, concentrating on coarse-graining transformations and on a particular coordinate transformation that provides an appropriate basis for illuminating the structure and consequences of recombination.
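    For concreteness, one standard way of relating the two notions (the notation below follows the broader effective-fitness literature and is an assumption, not necessarily this paper's): effective fitness is defined so that the full selection-mutation-recombination dynamics of genotype proportions takes the form of pure selection.

```latex
% Selection-only dynamics with ordinary (reproductive) fitness f:
%   P(i, t+1) = \frac{f(i)}{\bar f(t)} \, P(i, t)
% Effective fitness f_{\mathrm{eff}} absorbs the operator effects (mutation,
% recombination) so that the full dynamics takes the same form:
\[
  P(i, t+1) \;=\; \frac{f_{\mathrm{eff}}(i, t)}{\bar f(t)}\, P(i, t),
  \qquad
  \bar f(t) \;=\; \sum_j f(j)\, P(j, t).
\]
```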

    On the limitations of the univariate marginal distribution algorithm to deception and where bivariate EDAs might help

    We introduce a new benchmark problem called Deceptive Leading Blocks (DLB) to rigorously study the runtime of the Univariate Marginal Distribution Algorithm (UMDA) in the presence of epistasis and deception. We show that simple Evolutionary Algorithms (EAs) outperform the UMDA unless the selective pressure μ/λ is extremely high, where μ and λ are the parent and offspring population sizes, respectively. More precisely, we show that the UMDA with a parent population size of μ = Ω(log n) has an expected runtime of e^{Ω(μ)} on the DLB problem assuming any selective pressure μ/λ ≥ 14/1000, as opposed to the expected runtime of O(nλ log λ + n³) for the non-elitist (μ,λ) EA with μ/λ ≤ 1/e. These results illustrate inherent limitations of univariate EDAs against deception and epistasis, which are common characteristics of real-world problems. In contrast, empirical evidence reveals the efficiency of the bivariate MIMIC algorithm on the DLB problem. Our results suggest that one should consider EDAs with more complex probabilistic models when optimising problems with some degree of epistasis and deception.
    Comment: To appear in the 15th ACM/SIGEVO Workshop on Foundations of Genetic Algorithms (FOGA XV), Potsdam, Germany
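    For reference, a minimal sketch of the UMDA loop discussed above: sample λ individuals from a product of per-bit marginals, keep the μ fittest, and refit the marginals (with the usual margin restriction). The parameter values and the OneMax stand-in objective are assumptions; DLB itself is defined in the paper and not reproduced here.

```python
import random

rng = random.Random(0)

def umda(fitness, n, lam=100, mu=20, generations=300):
    """Univariate Marginal Distribution Algorithm: keep one frequency per bit,
    sample a population from the product distribution, and refit the
    frequencies to the mu best of the lam sampled individuals."""
    p = [0.5] * n                                   # marginal probabilities
    lower, upper = 1.0 / n, 1.0 - 1.0 / n           # standard margin restriction
    best = None
    for _ in range(generations):
        pop = [[int(rng.random() < p[i]) for i in range(n)] for _ in range(lam)]
        pop.sort(key=fitness, reverse=True)
        if best is None or fitness(pop[0]) > fitness(best):
            best = pop[0]
        selected = pop[:mu]
        for i in range(n):
            freq = sum(x[i] for x in selected) / mu
            p[i] = min(upper, max(lower, freq))
        # The selective pressure is mu/lam, the quantity the runtime bounds above depend on.
    return best

if __name__ == "__main__":
    # OneMax as an illustrative (non-deceptive) benchmark.
    print(sum(umda(fitness=sum, n=100)))
```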

    An analysis of combinatorial search spaces for a class of NP-hard problems

    Given a finite but very large set of states X and a real-valued objective function ƒ defined on X, combinatorial optimization refers to the problem of finding elements of X that maximize (or minimize) ƒ. Many combinatorial search algorithms employ some perturbation operator to hill-climb in the search space. Such perturbative local search algorithms are state of the art for many classes of NP-hard combinatorial optimization problems such as maximum k-satisfiability, scheduling, and problems of graph theory. In this thesis we analyze combinatorial search spaces by expanding the objective function into a (sparse) series of basis functions. While most analyses of the distribution of function values in the search space must rely on empirical sampling, the basis function expansion allows us to directly study the distribution of function values across regions of states for combinatorial problems without the need for sampling. We concentrate on objective functions that can be expressed as bounded pseudo-Boolean functions, which are NP-hard to solve in general. We use the basis expansion to construct a polynomial-time algorithm for exactly computing constant-degree moments of the objective function ƒ over arbitrarily large regions of the search space. On functions with restricted codomains, these moments are related to the true distribution by a system of linear equations. Given low moments supplied by our algorithm, we construct bounds on the true distribution of ƒ over regions of the space using a linear programming approach. A straightforward relaxation allows us to efficiently approximate the distribution and hence quickly estimate the count of states in a given region that have certain values under the objective function. The analysis is also useful for characterizing properties of specific combinatorial problems. For instance, by connecting search space analysis to the theory of inapproximability, we prove that the bound specified by Grover's maximum principle for the Max-Ek-Lin-2 problem is sharp. Moreover, we use the framework to prove that certain configurations are forbidden in regions of the Max-3-Sat search space, supplying the first theoretical confirmation of empirical results by others. Finally, we show that theoretical results can be used to drive the design of algorithms in a principled manner by using the search space analysis developed in this thesis in algorithmic applications. First, information obtained from our moment-retrieving algorithm can be used to direct a hill-climbing search across plateaus in the Max-k-Sat search space. Second, the analysis can be used to control the mutation rate of a (1+1) evolutionary algorithm on bounded pseudo-Boolean functions so that the expected fitness of the offspring of each search point is maximized. For these applications, knowledge of the search space structure supplied by the analysis translates to significant gains in the performance of search.
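    To illustrate the basis-function idea on a small scale, the sketch below expands a toy pseudo-Boolean function in the Walsh basis and computes the exact mean of ƒ over a hyperplane region (a set of states with some bits fixed) directly from the coefficients, checking it against brute force. The toy function is an assumption, and this shows only the first moment, not the thesis's general constant-degree moment algorithm.

```python
import itertools

def walsh_coefficients(f, n):
    """Exact Walsh/Fourier coefficients of f over {0,1}^n by enumeration:
    f(x) = sum_S w[S] * (-1)^(sum of x[i] for i in S)."""
    coeffs = {}
    points = list(itertools.product([0, 1], repeat=n))
    for S in itertools.chain.from_iterable(
            itertools.combinations(range(n), k) for k in range(n + 1)):
        w = sum(f(x) * (-1) ** sum(x[i] for i in S) for x in points) / 2 ** n
        coeffs[S] = w
    return coeffs

def region_mean(coeffs, fixed):
    """Mean of f over the hyperplane region {x : x[i] = fixed[i] for i in fixed},
    free bits uniform: only coefficients supported on the fixed positions survive."""
    return sum(w * (-1) ** sum(fixed[i] for i in S)
               for S, w in coeffs.items() if all(i in fixed for i in S))

if __name__ == "__main__":
    n = 4
    f = lambda x: 3 * x[0] * x[1] - 2 * x[2] + x[3] + 1   # toy pseudo-Boolean function
    coeffs = walsh_coefficients(f, n)

    fixed = {0: 1, 2: 0}                                   # region: x0 = 1, x2 = 0
    brute = [f(x) for x in itertools.product([0, 1], repeat=n)
             if x[0] == 1 and x[2] == 0]
    print(region_mean(coeffs, fixed), sum(brute) / len(brute))   # identical values
```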