How Crossover Speeds Up Building-Block Assembly in Genetic Algorithms
We re-investigate a fundamental question: how effective is crossover in Genetic Algorithms in combining building blocks of good solutions? Although this has been discussed controversially for decades, we are still lacking a rigorous and intuitive answer. We provide such answers for royal road functions and OneMax, where every bit is a building block. For the latter we show that using crossover makes every (\mu+\lambda) Genetic Algorithm at least twice as fast as the fastest evolutionary algorithm using only standard bit mutation, up to small-order terms and for moderate \mu and \lambda. Crossover is beneficial because it effectively turns fitness-neutral mutations into improvements by combining the right building blocks at a later stage. Compared to mutation-based evolutionary algorithms, this makes multi-bit mutations more useful. Introducing crossover changes the optimal mutation rate on OneMax from 1/n to (1+\sqrt{5})/2 \cdot 1/n \approx 1.618/n. This holds both for uniform crossover and k-point crossover. Experiments and statistical tests confirm that our findings apply to a broad class of building-block functions.
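The setting the abstract analyzes can be sketched as a toy (\mu+\lambda) Genetic Algorithm on OneMax with uniform crossover and mutation rate c/n for c = (1+\sqrt{5})/2. This is a minimal illustration, not the paper's implementation; the population sizes, stopping criterion, and helper names are assumptions.

```python
import random

def onemax(x):
    # OneMax counts the one-bits; every bit is a building block.
    return sum(x)

def mutate(x, rate):
    # Standard bit mutation: flip each bit independently with probability rate.
    return [b ^ (random.random() < rate) for b in x]

def uniform_crossover(x, y):
    # Each position is inherited from either parent with probability 1/2.
    return [random.choice((a, b)) for a, b in zip(x, y)]

def mu_plus_lambda_ga(n, mu=2, lam=2, max_evals=100_000, seed=0):
    random.seed(seed)
    c = (1 + 5 ** 0.5) / 2  # ~1.618, the optimal constant from the abstract
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(mu)]
    evals = mu
    while max(map(onemax, pop)) < n and evals < max_evals:
        offspring = []
        for _ in range(lam):
            p1, p2 = random.sample(pop, 2)
            offspring.append(mutate(uniform_crossover(p1, p2), c / n))
            evals += 1
        # Elitist (mu+lambda) selection: keep the mu best of parents+offspring.
        pop = sorted(pop + offspring, key=onemax, reverse=True)[:mu]
    return max(map(onemax, pop)), evals
```

Crossover of two distinct parents can recombine building blocks that arose in fitness-neutral mutations, which is the mechanism the abstract credits for the speed-up.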
Evolutionary Dynamics in a Simple Model of Self-Assembly
We investigate the evolutionary dynamics of an idealised model for the robust
self-assembly of two-dimensional structures called polyominoes. The model
includes rules that encode interactions between sets of square tiles that drive
the self-assembly process. The relationship between the model's rule set and
its resulting self-assembled structure can be viewed as a genotype-phenotype
map and incorporated into a genetic algorithm. The rule sets evolve under
selection for specified target structures. The corresponding, complex fitness
landscape generates rich evolutionary dynamics as a function of parameters such
as the population size, search space size, mutation rate, and method of
recombination. Furthermore, these systems are simple enough that in some cases
the associated model genome space can be completely characterised, shedding
light on how the evolutionary dynamics depends on the detailed structure of the
fitness landscape. Finally, we apply the model to study the emergence of the
preference for dihedral over cyclic symmetry observed for homomeric protein
tetramers.
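The core idea above, a genotype-phenotype map plugged into a genetic algorithm, can be illustrated with a toy stand-in. The map below is hypothetical and far simpler than real polyomino self-assembly; it only mimics the redundancy of such maps (many genotypes yield one phenotype). All names and parameters here are assumptions, not from the paper.

```python
import random

def phenotype(genotype):
    # Hypothetical stand-in map: the sorted tuple of tile-interface labels.
    # Many label orderings map to the same phenotype, as in redundant
    # genotype-phenotype maps.
    return tuple(sorted(genotype))

def fitness(genotype, target):
    # Fraction of positions at which the phenotype matches the target structure.
    return sum(a == b for a, b in zip(phenotype(genotype), target)) / len(target)

def mutate(genotype, rate, labels=4):
    # Point mutation: resample each label independently with probability rate.
    return [random.randrange(labels) if random.random() < rate else g
            for g in genotype]

def evolve(target, pop_size=20, mut_rate=0.1, generations=300, seed=1):
    # Truncation selection on the toy map, evolving toward a target phenotype.
    random.seed(seed)
    n = len(target)
    pop = [[random.randrange(4) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, target), reverse=True)
        if fitness(pop[0], target) == 1.0:
            break
        parents = pop[:pop_size // 2]
        pop = [mutate(random.choice(parents), mut_rate) for _ in range(pop_size)]
    return max(fitness(g, target) for g in pop)
```

Replacing `phenotype` with an actual assembly procedure (growing a polyomino from tile-interaction rules) is all that would change structurally: selection only ever sees the phenotype.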
Theory and practice of population diversity in evolutionary computation
Divergence of character is a cornerstone of natural evolution. On the contrary, evolutionary optimization processes are plagued by an endemic lack of population diversity: all candidate solutions eventually crowd the very same areas in the search space. The problem is usually labeled with the oxymoron “premature convergence” and has very different consequences in different applications, almost all of them deleterious. At the same time, case studies from theoretical runtime analyses irrefutably demonstrate the benefits of diversity. This tutorial will give an introduction into the area of “diversity promotion”: we will define the term “diversity” in the context of Evolutionary Computation, showing how practitioners tried, with mixed results, to promote it. Then, we will analyze the benefits brought by population diversity in specific contexts, namely global exploration and enhancing the power of crossover. To this end, we will survey recent results from rigorous runtime analysis on selected problems. The presented analyses rigorously quantify the performance of evolutionary algorithms in the light of population diversity, laying the foundation for a rigorous understanding of how search dynamics are affected by the presence or absence of diversity and the introduction of diversity mechanisms.
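Two of the ingredients discussed above, measuring diversity and promoting it, can be sketched concretely. The average pairwise Hamming distance is one common diversity measure, and duplicate-avoiding plus-selection is one of the simplest promotion mechanisms; both are illustrative choices here, not the only ones the tutorial covers.

```python
import itertools

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def population_diversity(pop):
    # Average pairwise Hamming distance: a common population diversity measure.
    pairs = list(itertools.combinations(pop, 2))
    return sum(hamming(x, y) for x, y in pairs) / len(pairs)

def select_avoiding_duplicates(pop, offspring, mu, key):
    # Plus-selection that keeps the best mu *distinct* genotypes, a minimal
    # diversity-promotion mechanism that prevents the population from
    # collapsing onto copies of one individual.
    kept = []
    for cand in sorted(pop + offspring, key=key, reverse=True):
        if cand not in kept:
            kept.append(cand)
        if len(kept) == mu:
            return kept
    # Fewer than mu distinct genotypes exist: pad with the best duplicates.
    for cand in sorted(pop + offspring, key=key, reverse=True):
        if len(kept) == mu:
            break
        kept.append(cand)
    return kept
```

Runtime analyses of crossover-based algorithms often rely on exactly this kind of mechanism: without distinct parents, crossover has nothing to recombine.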
Better Fixed-Arity Unbiased Black-Box Algorithms
In their GECCO'12 paper, Doerr and Doerr proved that the k-ary unbiased black-box complexity of OneMax on n bits is O(n/k) for 2 \le k \le \log_2 n. We propose an alternative strategy for achieving this unbiased black-box complexity when 3 \le k \le \log_2 n. While it is based on the same idea of block-wise optimization, it uses k-ary unbiased operators in a different way. For each block of size 2^{k-1}-1 we set up, in O(k) queries, a virtual coordinate system, which enables us to use an arbitrary unrestricted algorithm to optimize this block. This is possible because this coordinate system introduces a bijection between unrestricted queries and a subset of k-ary unbiased operators. We note that this technique does not depend on OneMax being solved and can be used in more general contexts.
This together constitutes an algorithm which is conceptually simpler than the one by Doerr and Doerr, and at the same time achieves better constant factors in the asymptotic notation. Our algorithm works in (2+o(1)) \cdot n/(k-1) queries, where the o(1) term relates to k. Our experimental evaluation of this algorithm shows its efficiency already for 3 \le k \le 6.
Comment: An extended abstract will appear at GECCO'18.
Intrinsically Evolvable Artificial Neural Networks
Dedicated hardware implementations of neural networks promise to provide faster, lower power operation when compared to software implementations executing on processors. Unfortunately, most custom hardware implementations do not support intrinsic training of these networks on-chip. The training is typically done using offline software simulations and the obtained network is synthesized and targeted to the hardware offline. The FPGA design presented here facilitates on-chip intrinsic training of artificial neural networks. Block-based neural networks (BbNN), the type of artificial neural networks implemented here, are grid-based networks of neuron blocks. These networks are trained using genetic algorithms to simultaneously optimize the network structure and the internal synaptic parameters. The design supports online structure and parameter updates, and is an intrinsically evolvable BbNN platform supporting functional-level hardware evolution. Functional-level evolvable hardware (EHW) uses evolutionary algorithms to evolve interconnections and internal parameters of functional modules in reconfigurable computing (RC) systems such as FPGAs. Functional modules can be any hardware modules such as multipliers, adders, and trigonometric functions. In the implementation presented, the functional module is a neuron block. The designed platform is suitable for applications in dynamic environments, and can be adapted and retrained online. The online training capability has been demonstrated using a case study. A performance characterization model for RC implementations of BbNNs has also been presented.
Black-Box Complexity of the Binary Value Function
The binary value function, or BinVal, has appeared in several studies in the theory of evolutionary computation as one of the extreme examples of linear pseudo-Boolean functions. Its unbiased black-box complexity was previously shown to be at most \lceil \log_2 n \rceil + 2, where n is the problem size. We augment it with an upper bound of \log_2 n + 2.42141... + o(1), which is more precise for many values of n. We also present a lower bound of \log_2 n + 1.1186 + o(1). Additionally, we prove that BinVal is an easiest function among all unimodal pseudo-Boolean functions, at least for unbiased algorithms.
Comment: 24 pages, one figure. An extended two-page abstract of this work will appear in the proceedings of the Genetic and Evolutionary Computation Conference, GECCO'19.
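Why BinVal is "extreme" among linear functions is easy to see in code: the function value encodes the whole bit string, so an *unrestricted* algorithm solves it in two queries, and only the unbiasedness restriction pushes the complexity up to the logarithmic bounds above. The XOR-shifted instance class below is a standard construction in this area, used here as an illustrative assumption.

```python
def binval(x):
    # Bit i carries weight 2^i (least-significant-first; weight conventions vary).
    return sum(b << i for i, b in enumerate(x))

def instance(z):
    # XOR-shifted instance f_z(x) = BinVal(x XOR z); its optimum is the
    # bitwise complement of the hidden shift z.
    return lambda x: binval([a ^ b for a, b in zip(x, z)])

def unrestricted_solver(n, oracle):
    v = oracle([0] * n)                   # first query: v equals BinVal(z)
    z = [(v >> i) & 1 for i in range(n)]  # one value decodes every bit of z
    return [1 - b for b in z]             # second query x satisfies x XOR z = 1^n
```

Unbiased variation operators cannot exploit the decoded value positionally, which is why the unbiased complexity stays logarithmic in n rather than constant.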
Elitist Schema Overlays: A Multi-Parent Genetic Operator
Genetic Algorithms are programs inspired by natural evolution used to solve difficult problems in Mathematics and Computer Science. The theoretical foundations of Genetic Algorithms, the schema theorem and the building-block hypothesis, state that the success of Genetic Algorithms stems from the propagation of fit genetic subsequences. Multi-parent operators were shown to increase the performance of Genetic Algorithms by increasing the disruptivity of genetic operations. Disruptive genetic operators help prevent suboptimal genetic sequences from propagating into future generations, which leads to an improved fitness for the population over time. In this paper we explore the use of a novel multi-parent genetic operator, the elitist schema overlay, which propagates the matching segments in the genetic sequences of the elite subpopulation to bias the global search towards the best known solutions. We investigate the parameters that drive the behavior of elitist schema overlays to determine the most successful model, and we compare this to successful multi-parent and traditional genetic operators from the literature.
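The operator described above, propagating the matching segments of the elite subpopulation, can be sketched as follows. Reading "matching segments" as positions where all elites agree, and the optional application rate, are assumptions of this sketch rather than details from the paper.

```python
import random

def elitist_schema(elites):
    # Positions where every elite genome agrees form the schema; all other
    # positions are wildcards, represented by None.
    return [genes[0] if len(set(genes)) == 1 else None
            for genes in zip(*elites)]

def apply_overlay(individual, schema, rate=1.0):
    # Overlay the fixed schema positions onto an individual, biasing search
    # toward the best known solutions. A hypothetical rate < 1 would apply
    # each schema position only probabilistically, keeping some disruptivity.
    return [s if s is not None and random.random() < rate else b
            for s, b in zip(schema, individual)]
```

Wildcard positions are left untouched, so mutation and crossover can still explore there while the agreed-upon building blocks are preserved.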
Optimal Parameter Choices Through Self-Adjustment: Applying the 1/5-th Rule in Discrete Settings
While evolutionary algorithms are known to be very successful for a broad
range of applications, the algorithm designer is often left with many
algorithmic choices, for example, the size of the population, the mutation
rates, and the crossover rates of the algorithm. These parameters are known to
have a crucial influence on the optimization time, and thus need to be chosen
carefully, a task that often requires substantial efforts. Moreover, the
optimal parameters can change during the optimization process. It is therefore
of great interest to design mechanisms that dynamically choose best-possible
parameters. An example for such an update mechanism is the one-fifth success
rule for step-size adaptation in evolution strategies. While in continuous
domains this principle is well understood also from a mathematical point of
view, no comparable theory is available for problems in discrete domains.
In this work we show that the one-fifth success rule can be effective also in
discrete settings. We regard the (1+(\lambda,\lambda))~GA proposed in
[Doerr/Doerr/Ebel: From black-box complexity to designing new genetic
algorithms, TCS 2015]. We prove that if its population size is chosen according
to the one-fifth success rule then the expected optimization time on
\textsc{OneMax} is linear. This is better than what \emph{any} static
population size can achieve and is asymptotically optimal also among
all adaptive parameter choices.
Comment: This is the full version of a paper that is to appear at GECCO 2015.
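The one-fifth success rule in a discrete setting can be sketched on a simpler algorithm than the paper's: below it self-adjusts the offspring count of a (1+\lambda) EA on OneMax, shrinking the parameter after a successful generation and growing it by the fourth root of the update factor after a failure, so that a success rate of roughly one in five keeps it stable. This is a simplified stand-in, not the (1+(\lambda,\lambda))~GA analyzed in the paper; the factor F and the caps are assumptions.

```python
import random

def onemax(x):
    return sum(x)

def one_fifth_adaptive_ea(n, F=1.5, max_evals=200_000, seed=0):
    random.seed(seed)
    parent = [random.randint(0, 1) for _ in range(n)]
    lam, evals = 1.0, 0
    while onemax(parent) < n and evals < max_evals:
        k = max(1, round(lam))
        # Generate k offspring by standard bit mutation; keep the best.
        best = max(
            ([b ^ (random.random() < 1 / n) for b in parent] for _ in range(k)),
            key=onemax,
        )
        evals += k
        if onemax(best) > onemax(parent):
            parent, lam = best, max(1.0, lam / F)   # success: shrink lam
        else:
            lam = min(float(n), lam * F ** 0.25)    # failure: grow lam slowly
    return onemax(parent), evals
```

The multiplicative update lets the parameter track the difficulty of the current search region: small while improvements are easy, large near the optimum where successes are rare.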