The Evolutionary Unfolding of Complexity
We analyze the population dynamics of a broad class of fitness functions that
exhibit epochal evolution---a dynamical behavior, commonly observed in both
natural and artificial evolutionary processes, in which long periods of stasis
in an evolving population are punctuated by sudden bursts of change. Our
approach---statistical dynamics---combines methods from both statistical
mechanics and dynamical systems theory in a way that offers an alternative to
current ``landscape'' models of evolutionary optimization. We describe the
population dynamics on the macroscopic level of fitness classes or phenotype
subbasins, while averaging out the genotypic variation that is consistent with
a macroscopic state. Metastability in epochal evolution occurs solely at the
macroscopic level of the fitness distribution. While a balance between
selection and mutation maintains a quasistationary distribution of fitness,
individuals diffuse randomly through selectively neutral subbasins in genotype
space. Sudden innovations occur when, through this diffusion, a genotypic
portal is discovered that connects to a new subbasin of higher fitness
genotypes. In this way, we identify innovations with the unfolding and
stabilization of a new dimension in the macroscopic state space. The
architectural view of subbasins and portals in genotype space clarifies how
frozen accidents and the resulting phenotypic constraints guide the evolution
to higher complexity.
Comment: 28 pages, 5 figures
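The subbasin-and-portal picture can be illustrated with a minimal simulation. This is only a sketch under my own assumptions, not the authors' model: a royal-road-style fitness (count of complete leading blocks of ones) stands in for the phenotype classes, and all sizes and rates below are arbitrary toy choices.

```python
import random

random.seed(1)

GEN_LEN, BLOCK, POP, MU = 20, 5, 100, 0.01  # toy sizes, my choices

def fitness(g):
    # Royal-road-style fitness: number of complete leading blocks of ones.
    f = 0
    for i in range(0, GEN_LEN, BLOCK):
        if all(g[i:i + BLOCK]):
            f += 1
        else:
            break
    return f

def step(pop):
    # Fitness-proportional selection (small floor so fitness-0 genotypes can
    # reproduce), then per-bit mutation: the neutral drift through subbasins.
    weights = [fitness(g) + 0.1 for g in pop]
    parents = random.choices(pop, weights=weights, k=POP)
    return [[b ^ (random.random() < MU) for b in g] for g in parents]

pop = [[0] * GEN_LEN for _ in range(POP)]
history = [max(fitness(g) for g in pop)]
for _ in range(500):
    pop = step(pop)
    history.append(max(fitness(g) for g in pop))

# `history` is a step function: long epochs of roughly constant best fitness,
# punctuated when drift discovers a portal to the next higher-fitness subbasin.
print(history[0], history[-1])
```

Plotting `history` shows the epochal staircase: the population is metastable at the level of the fitness distribution while genotypes wander neutrally within a block's subbasin.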
Should We Learn Probabilistic Models for Model Checking? A New Approach and An Empirical Study
Many automated system analysis techniques (e.g., model checking, model-based
testing) rely on first obtaining a model of the system under analysis. System
modeling is often done manually, which is widely considered a hindrance to
adopting model-based system analysis and development techniques. To overcome this
problem, researchers have proposed to automatically "learn" models based on
sample system executions and shown that the learned models can be useful
sometimes. There are however many questions to be answered. For instance, how
much shall we generalize from the observed samples and how fast would learning
converge? Or, would the analysis result based on the learned model be more
accurate than the estimation we could have obtained by sampling many system
executions within the same amount of time? In this work, we investigate
existing algorithms for learning probabilistic models for model checking,
propose an evolution-based approach for better controlling the degree of
generalization and conduct an empirical study in order to answer the questions.
One of our findings is that the effectiveness of learning may sometimes be
limited.
Comment: 15 pages, plus 2 reference pages; accepted by FASE 2017 at ETAPS
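The basic idea of learning a probabilistic model from sample executions can be sketched as follows. The 3-state chain, its probabilities, and the `reach_prob` helper are my illustrative inventions, not the paper's algorithms; the sketch just learns maximum-likelihood transition frequencies and runs a toy reachability check on the learned model.

```python
import random
from collections import Counter, defaultdict

random.seed(0)

# A hypothetical 3-state system we pretend we can only sample (the chain and
# its probabilities are my invention, not from the paper); state 2 is absorbing.
TRUE_P = {0: {0: 0.5, 1: 0.4, 2: 0.1},
          1: {0: 0.3, 2: 0.7},
          2: {2: 1.0}}

def sample_trace(n=20):
    s, trace = 0, [0]
    for _ in range(n):
        s, = random.choices(list(TRUE_P[s]), TRUE_P[s].values())
        trace.append(s)
    return trace

# "Learn" a DTMC by maximum-likelihood transition frequencies.
counts = defaultdict(Counter)
for _ in range(200):
    t = sample_trace()
    for a, b in zip(t, t[1:]):
        counts[a][b] += 1

learned = {s: {t: c / sum(cs.values()) for t, c in cs.items()}
           for s, cs in counts.items()}

def reach_prob(model, target, horizon):
    # A toy "model check": P(reach target within `horizon` steps), by
    # backward iteration over the learned transition probabilities.
    p = {s: float(s == target) for s in model}
    for _ in range(horizon):
        p = {s: (1.0 if s == target else
                 sum(q * p.get(u, 0.0) for u, q in model[s].items()))
             for s in model}
    return p[0]

print(learned[0], reach_prob(learned, 2, 10))
```

The paper's question is visible even in this toy: the analysis result on `learned` differs from what direct sampling of `TRUE_P` would estimate, and the gap depends on how many traces the learner saw.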
The convergence to equilibrium of neutral genetic models
This article is concerned with the long time behavior of neutral genetic
population models, with fixed population size. We design an explicit, finite,
exact, genealogical tree based representation of stationary populations that
holds both for finite and infinite types (or alleles) models. We then analyze
the decay to equilibrium of finite populations in terms of the convergence
to stationarity of their first common ancestor. We estimate the Lyapunov
exponent of the distribution flows with respect to the total variation norm. We
give bounds on these exponents only depending on the stability with respect to
mutation of a single individual; they are inversely proportional to the
population size parameter.
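The neutral dynamics studied here can be illustrated with a minimal Wright-Fisher simulation; this is only a sketch of the qualitative phenomenon (fixation of a single ancestral type), not the article's genealogical-tree construction or its Lyapunov-exponent estimates.

```python
import random

random.seed(2)

def generations_to_fixation(n):
    # Neutral Wright-Fisher model with fixed population size n:
    # each child copies a uniformly chosen parent, with no selection.
    pop = list(range(n))          # start with n distinct types (alleles)
    gens = 0
    while len(set(pop)) > 1:
        pop = [random.choice(pop) for _ in range(n)]
        gens += 1
    return gens

# Time until one type fixes (the population is descended from a single
# common ancestor) scales with n, consistent with convergence-rate
# exponents inversely proportional to the population size parameter.
times = [generations_to_fixation(20) for _ in range(3)]
print(times)
```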
Recommended from our members
Combinatorial optimization and metaheuristics
Today, combinatorial optimization is one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operations research, algorithm theory and computational complexity theory, and it sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. Interest in it keeps growing because a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems have polynomial-time (“efficient”) algorithms, while most of them are NP-hard, i.e. no polynomial-time algorithm is known for them, and none exists unless P = NP. In practice, this means one cannot guarantee that an exact solution will be found in reasonable time, and one has to settle for an approximate solution, ideally with known performance guarantees. Indeed, the goal of approximate methods is to find “quickly” (in reasonable run-times), with “high” probability, provably “good” solutions (with low error relative to the true optimum). In the last 20 years, a new class of algorithms, commonly called metaheuristics, has emerged, which combine heuristics in high-level frameworks aimed at efficiently and effectively exploring the search space. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyze their similarities and differences. The two significant forces of intensification and diversification, which largely determine the behavior of a metaheuristic, will be pointed out. The report concludes by exploring the importance of hybridization and integration methods.
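The intensification/diversification tension the report highlights can be seen in a minimal simulated annealing sketch, one classic metaheuristic. The objective function and every parameter below are illustrative choices of mine, not taken from the report.

```python
import math
import random

random.seed(3)

def cost(x):
    # A rugged 1-D toy objective (my choice) with many local minima.
    return x * x + 10 * math.sin(5 * x)

def simulated_annealing(steps=5000, T0=5.0):
    x = random.uniform(-10, 10)
    best = x
    for k in range(steps):
        # High temperature early: diversification (uphill moves accepted).
        # Low temperature late: intensification (greedy local refinement).
        T = T0 * (1 - k / steps) + 1e-9
        cand = x + random.gauss(0, 1)
        d = cost(cand) - cost(x)
        if d < 0 or random.random() < math.exp(-d / T):
            x = cand
        if cost(x) < cost(best):
            best = x
    return best

b = simulated_annealing()
print(b, cost(b))
```

A pure hill-climber on `cost` would stall in whichever local valley it starts in; the temperature schedule is what trades diversification for intensification over the run.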
Identification of cellular automata based on incomplete observations with bounded time gaps
In this paper, the problem of identifying cellular automata (CAs) is considered. We frame and solve this problem in the context of incomplete observations, i.e., prerecorded, incomplete configurations of the system at certain but unknown time stamps. We consider only 1-D, deterministic, two-state CAs. An identification method based on a genetic algorithm with individuals of variable length is proposed. The experimental results show that the proposed method is highly effective. In addition, connections between the dynamical properties of CAs (Lyapunov exponents and behavioral classes) and the performance of the identification algorithm are established and analyzed.
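A stripped-down version of the identification task can be sketched as follows. This simplifies the paper's setting considerably (complete, consecutive observations and a plain fixed-length GA, whereas the paper handles gapped, incomplete observations with variable-length individuals); the rule, ring size and GA parameters are my toy choices.

```python
import random

random.seed(4)

N = 32  # ring of 32 cells (my choice)

def step(cfg, rule):
    # One synchronous update of an elementary (1-D, two-state, radius-1) CA
    # with periodic boundaries; `rule` is the usual Wolfram rule number.
    return [(rule >> (4 * cfg[(i - 1) % N] + 2 * cfg[i] + cfg[(i + 1) % N])) & 1
            for i in range(N)]

TARGET = 110  # the rule we pretend not to know
cfg = [random.randint(0, 1) for _ in range(N)]
obs = [cfg]
for _ in range(9):
    cfg = step(cfg, TARGET)
    obs.append(cfg)

def fit(rule):
    # Fraction of observed transitions the candidate rule reproduces exactly.
    return sum(step(a, rule) == b for a, b in zip(obs, obs[1:])) / (len(obs) - 1)

pop = [random.randint(0, 255) for _ in range(40)]
for _ in range(60):
    pop.sort(key=fit, reverse=True)
    elite = pop[:10]
    # Offspring: flip one bit of an elite rule table (mutation only, no crossover).
    pop = elite + [random.choice(elite) ^ (1 << random.randrange(8))
                   for _ in range(30)]

best = max(pop, key=fit)
print(best, fit(best))
```

Because the search space is only 256 rule tables here, this toy converges quickly; the paper's contribution lies in making the idea work when observations are incomplete and the time gaps between them are unknown.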
Explaining Adaptation in Genetic Algorithms With Uniform Crossover: The Hyperclimbing Hypothesis
The hyperclimbing hypothesis is a hypothetical explanation for adaptation in
genetic algorithms with uniform crossover (UGAs). Hyperclimbing is an
intuitive, general-purpose, non-local search heuristic applicable to discrete
product spaces with rugged or stochastic cost functions. The strength of this
heuristic lies in its insusceptibility to local optima when the cost function is
deterministic, and its tolerance for noise when the cost function is
stochastic. Hyperclimbing works by decimating a search space, i.e. by
iteratively fixing the values of small numbers of variables. The hyperclimbing
hypothesis holds that UGAs work by implementing efficient hyperclimbing. Proof
of concept for this hypothesis comes from the use of a novel analytic technique
involving the exploitation of algorithmic symmetry. We have also obtained
experimental results that show that a simple tweak inspired by the
hyperclimbing hypothesis dramatically improves the performance of a UGA on
large, random instances of MAX-3SAT and the Sherrington-Kirkpatrick spin
glass problem.
Comment: 22 pages, 5 figures
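The decimation idea behind hyperclimbing can be sketched on a deliberately simple stochastic objective. This is not the authors' procedure or their UGA: the hidden pattern, the separable (non-epistatic) cost, and the sample counts are my assumptions, chosen only to show "fix a variable by averaging out the rest under noise".

```python
import random
import statistics

random.seed(5)

GEN_LEN = 16
HIDDEN = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # arbitrary target

def cost(x):
    # Stochastic objective (my toy): reward for matching the hidden pattern,
    # corrupted by Gaussian noise on every evaluation.
    return sum(a == b for a, b in zip(x, HIDDEN)) + random.gauss(0, 1)

def hyperclimb(samples=60):
    # Decimation: fix one variable at a time by estimating, from noisy
    # samples with the free variables randomized, which setting gives the
    # higher mean cost; the noise is averaged out rather than avoided.
    fixed = {}
    for i in range(GEN_LEN):
        means = []
        for v in (0, 1):
            vals = []
            for _ in range(samples):
                x = [fixed.get(j, random.randint(0, 1)) for j in range(GEN_LEN)]
                x[i] = v
                vals.append(cost(x))
            means.append(statistics.mean(vals))
        fixed[i] = 0 if means[0] > means[1] else 1
    return [fixed[j] for j in range(GEN_LEN)]

sol = hyperclimb()
print(sum(a == b for a, b in zip(sol, HIDDEN)), "of", GEN_LEN, "bits recovered")
```

The hypothesis proper concerns small *sets* of variables on rugged, epistatic landscapes; the one-variable-at-a-time version above is the simplest member of that family.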
Genetic algorithm dynamics on a rugged landscape
The genetic algorithm is an optimization procedure motivated by biological
evolution and is successfully applied to optimization problems in different
areas. A statistical mechanics model for its dynamics is proposed based on the
parent-child fitness correlation of the genetic operators, making it applicable
to general fitness landscapes. It is compared to a recent model based on a
maximum entropy ansatz. Finally it is applied to modeling the dynamics of a
genetic algorithm on the rugged fitness landscape of the NK model.
Comment: 10 pages RevTeX, 4 PostScript figures
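The NK landscape and the parent-child fitness correlation the model is built on can be sketched as follows; the parameter choices, the mutation-only operator, and the lazy table construction are mine, not the paper's.

```python
import random

random.seed(6)

N, K = 12, 3  # my parameter choices
# NK landscape: site i's contribution depends on itself and its K right
# neighbors (circular); contributions are i.i.d. uniforms, cached lazily.
tables = [{} for _ in range(N)]

def fitness(g):
    total = 0.0
    for i in range(N):
        key = tuple(g[(i + j) % N] for j in range(K + 1))
        total += tables[i].setdefault(key, random.random())
    return total / N

def mutate(g):
    i = random.randrange(N)
    return g[:i] + [1 - g[i]] + g[i + 1:]

# Empirical parent-child fitness correlation under one-bit mutation: the
# kind of operator statistic the proposed model builds on. A one-bit flip
# changes K + 1 of the N contributions, so roughly (N - K - 1) / N survives.
pairs = [(fitness(g), fitness(mutate(g)))
         for g in ([random.randint(0, 1) for _ in range(N)] for _ in range(500))]
mp = sum(p for p, _ in pairs) / len(pairs)
mc = sum(c for _, c in pairs) / len(pairs)
cov = sum((p - mp) * (c - mc) for p, c in pairs) / len(pairs)
vp = sum((p - mp) ** 2 for p, _ in pairs) / len(pairs)
vc = sum((c - mc) ** 2 for _, c in pairs) / len(pairs)
corr = cov / (vp * vc) ** 0.5
print(round(corr, 2))
```

Tuning K tunes ruggedness: K = 0 gives a smooth, fully correlated landscape, while K = N - 1 makes parent and child fitness essentially independent.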