Computational Complexity Results for Genetic Programming and the Sorting Problem
Genetic Programming (GP) has found various applications. Understanding this
type of algorithm from a theoretical point of view is a challenging task. The
first results on the computational complexity of GP have been obtained for
problems with isolated program semantics. With this paper, we push forward the
computational complexity analysis of GP on a problem with dependent program
semantics. We study the well-known sorting problem in this context and analyze
rigorously how GP can deal with different measures of sortedness.
Comment: 12 pages
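The abstract does not name the specific measures of sortedness it analyzes, so the sketch below is illustrative only: a minimal Python example (with hypothetical function names) of two measures commonly used as fitness signals when evolving sorting programs, counting adjacent pairs already in order and elements already in their sorted position.

```python
# Illustrative sketch only: two common measures of sortedness that an
# evolutionary algorithm's fitness function might use; the paper's own
# measures are not named in the abstract above.

def sorted_adjacent_pairs(perm):
    """Number of adjacent pairs already in ascending order."""
    return sum(1 for i in range(len(perm) - 1) if perm[i] < perm[i + 1])

def correctly_placed(perm):
    """Number of elements already sitting at their position in the sorted sequence."""
    target = sorted(perm)
    return sum(1 for a, b in zip(perm, target) if a == b)

if __name__ == "__main__":
    p = [3, 1, 2, 5, 4]
    print(sorted_adjacent_pairs(p), correctly_placed(p))  # larger values mean "more sorted"
```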
Statistical and Computational Tradeoff in Genetic Algorithm-Based Estimation
When a Genetic Algorithm (GA), or a stochastic algorithm in general, is
employed in a statistical problem, the obtained result is affected by both
variability due to sampling, which refers to the fact that only a sample is
observed, and variability due to the stochastic elements of the algorithm. This
topic fits naturally into the framework of the statistical and computational
tradeoff question, which has become crucial in recent problems where
statisticians must carefully balance the statistical and the computational
parts of the analysis while taking resource or time constraints into account.
In the present work we analyze estimation problems tackled by GAs, for which
the variability of the estimates can be decomposed into these two sources,
under constraints expressed as cost functions related to both data acquisition
and the runtime of the algorithm. Simulation studies will be presented to
discuss the statistical and computational tradeoff question.
Comment: 17 pages, 5 figures
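As a minimal sketch of the decomposition described above, the following Python snippet uses a toy stochastic estimator standing in for a full GA; all names, cost knobs, and distributions are assumptions chosen purely for illustration. It estimates the two variance components via the law of total variance.

```python
# Minimal sketch (assumptions, not the paper's setup): decompose the variance
# of a stochastic estimator into sampling variability and algorithmic
# variability using the law of total variance:
#   Var(est) = Var_data(E_algo[est | data]) + E_data(Var_algo[est | data]).
import numpy as np

rng = np.random.default_rng(0)

def noisy_estimator(data, n_iters, rng):
    """Toy stand-in for a GA: a stochastic procedure whose precision grows
    with the computational budget n_iters (more iterations, less algorithmic noise)."""
    return data.mean() + rng.normal(scale=1.0 / np.sqrt(n_iters))

def decompose_variance(n_obs, n_iters, n_datasets=200, n_runs=50):
    per_dataset_mean, per_dataset_var = [], []
    for _ in range(n_datasets):
        data = rng.normal(loc=0.0, scale=1.0, size=n_obs)            # sampling variability
        runs = [noisy_estimator(data, n_iters, rng) for _ in range(n_runs)]
        per_dataset_mean.append(np.mean(runs))
        per_dataset_var.append(np.var(runs))
    sampling_part = np.var(per_dataset_mean)     # variability due to the observed sample
    algorithmic_part = np.mean(per_dataset_var)  # variability due to the algorithm
    return sampling_part, algorithmic_part

# Trade off data-acquisition cost (n_obs) against runtime cost (n_iters):
print(decompose_variance(n_obs=100, n_iters=10))
print(decompose_variance(n_obs=1000, n_iters=2))
```

Comparing the two printed pairs shows how spending the budget on more data shrinks the sampling component while spending it on more iterations shrinks the algorithmic one, which is the tradeoff the abstract refers to.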
Information content of colored motifs in complex networks
We study complex networks in which the nodes of the network are tagged with
different colors depending on the functionality of the nodes (colored graphs),
using information theory applied to the distribution of motifs in such
networks. We find that colored motifs can be viewed as the building blocks of
the networks (much more so than the uncolored structural motifs can be) and
that the relative frequency with which these motifs appear in the network can
be used to define the information content of the network. This information is
defined in such a way that a network with random coloration (but keeping the
relative number of nodes with different colors the same) has zero color
information content. Thus, colored motif information captures the
exceptionality of coloring in the motifs that is maintained via selection. We
study the motif information content of the C. elegans brain as well as the
evolution of colored motif information in networks that reflect the interaction
between instructions in genomes of digital life organisms. While we find that
colored motif information appears to capture essential functionality in the C.
elegans brain (where the color assignment of nodes is straightforward), it is
not obvious whether the colored motif information content always increases
during evolution, as would be expected from a measure that captures network
complexity. For a single choice of color assignment of instructions in the
digital life form Avida, we find instead that the colored motif information
content increases or decreases during evolution, depending on how the genomes
are organized, and could therefore be an interesting tool for dissecting
genomic rearrangements.
Comment: 21 pages, 8 figures, to appear in Artificial Life
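The paper's exact definition is not given in the abstract; the following simplified Python sketch assumes triangles as the motif and a permutation null model that preserves the number of nodes of each color, and illustrates one way an information content of this kind can be computed so that a randomly colored network scores near zero. All function names and the toy graph are hypothetical.

```python
# Simplified sketch (an assumption, not the paper's exact definition): measure the
# colored-motif information of a graph as the KL divergence between the observed
# distribution of motif color patterns and the distribution obtained by randomly
# re-coloring the nodes while keeping the count of each color fixed.
import itertools
import math
import random

def triangle_color_patterns(edges, colors):
    """Color patterns (sorted color triples) of all triangles in an undirected graph."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    patterns = []
    for u, v, w in itertools.combinations(adj, 3):
        if v in adj[u] and w in adj[u] and w in adj[v]:
            patterns.append(tuple(sorted((colors[u], colors[v], colors[w]))))
    return patterns

def distribution(patterns):
    counts = {}
    for p in patterns:
        counts[p] = counts.get(p, 0) + 1
    total = len(patterns)
    return {p: c / total for p, c in counts.items()}

def colored_motif_information(edges, colors, n_shuffles=500, rng=None):
    rng = rng or random.Random(0)
    obs = distribution(triangle_color_patterns(edges, colors))
    # Null model: shuffle colors across nodes, preserving the color counts.
    nodes, palette = list(colors), list(colors.values())
    null_counts = {}
    for _ in range(n_shuffles):
        rng.shuffle(palette)
        shuffled = dict(zip(nodes, palette))
        for p in triangle_color_patterns(edges, shuffled):
            null_counts[p] = null_counts.get(p, 0) + 1
    total = sum(null_counts.values())
    null = {p: c / total for p, c in null_counts.items()}
    # KL divergence (in bits) of observed vs. null pattern distributions;
    # patterns never seen under the null are skipped in this sketch.
    return sum(q * math.log2(q / null[p]) for p, q in obs.items() if p in null)

# Toy example: a 4-node graph with two colors.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3)]
colors = {0: "A", 1: "A", 2: "B", 3: "B"}
print(colored_motif_information(edges, colors))
```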
A Computationally Efficient Limited Memory CMA-ES for Large Scale Optimization
We propose a computationally efficient limited memory Covariance Matrix
Adaptation Evolution Strategy for large scale optimization, which we call the
LM-CMA-ES. The LM-CMA-ES is a stochastic, derivative-free algorithm for the
numerical optimization of non-linear, non-convex problems in continuous
domains. Inspired by the limited memory BFGS method of Liu and
Nocedal (1989), the LM-CMA-ES samples candidate solutions according to a
covariance matrix reproduced from direction vectors selected during the
optimization process. The decomposition of the covariance matrix into Cholesky
factors allows the time and memory complexity of the sampling to be reduced to
O(mn), where m is the number of stored direction vectors and n is the number of
decision variables. When n is large (e.g., n > 1000), even relatively small
values of m (e.g., m = 5) are sufficient to efficiently solve fully
non-separable problems and to reduce the overall run-time.
Comment: Genetic and Evolutionary Computation Conference (GECCO'2014), 2014
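The abstract does not spell out the update coefficients, so the following Python sketch uses placeholder coefficients a_j and b_j; it only illustrates how sampling through a Cholesky factor that is reproduced from m stored direction vectors via rank-one updates costs O(mn) per candidate, rather than the O(n^2) of an explicitly stored factor.

```python
# Minimal sketch (assumptions; the exact coefficients are derived in the paper):
# sample a candidate as x = mean + sigma * A z, where the Cholesky factor A is
# never stored explicitly but reproduced from m direction vectors v_1..v_m via
# rank-one updates, so each sample costs O(m*n) time and O(m*n) memory.
import numpy as np

def sample_candidate(mean, sigma, directions, a, b, rng):
    """directions: list of m length-n vectors; a, b: per-vector scalar coefficients
    (placeholders standing in for the coefficients defined in the paper)."""
    n = mean.size
    x = rng.standard_normal(n)                      # z ~ N(0, I)
    for v_j, a_j, b_j in zip(directions, a, b):
        x = a_j * x + b_j * v_j * np.dot(v_j, x)    # rank-one Cholesky-style update, O(n)
    return mean + sigma * x

rng = np.random.default_rng(0)
n, m = 1000, 5
mean = np.zeros(n)
directions = [rng.standard_normal(n) / np.sqrt(n) for _ in range(m)]
a = np.full(m, 0.9)   # placeholder shrinkage coefficients
b = np.full(m, 0.3)   # placeholder rank-one weights
candidate = sample_candidate(mean, sigma=0.5, directions=directions, a=a, b=b, rng=rng)
```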