The Evolutionary Unfolding of Complexity
We analyze the population dynamics of a broad class of fitness functions that
exhibit epochal evolution---a dynamical behavior, commonly observed in both
natural and artificial evolutionary processes, in which long periods of stasis
in an evolving population are punctuated by sudden bursts of change. Our
approach---statistical dynamics---combines methods from both statistical
mechanics and dynamical systems theory in a way that offers an alternative to
current ``landscape'' models of evolutionary optimization. We describe the
population dynamics on the macroscopic level of fitness classes or phenotype
subbasins, while averaging out the genotypic variation that is consistent with
a macroscopic state. Metastability in epochal evolution occurs solely at the
macroscopic level of the fitness distribution. While a balance between
selection and mutation maintains a quasistationary distribution of fitness,
individuals diffuse randomly through selectively neutral subbasins in genotype
space. Sudden innovations occur when, through this diffusion, a genotypic
portal is discovered that connects to a new subbasin of higher fitness
genotypes. In this way, we identify innovations with the unfolding and
stabilization of a new dimension in the macroscopic state space. The
architectural view of subbasins and portals in genotype space clarifies how
frozen accidents and the resulting phenotypic constraints guide the evolution
to higher complexity.

Comment: 28 pages, 5 figures
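The dynamics described above can be illustrated with a deliberately simplified simulation (this is a hypothetical sketch, not the authors' statistical-dynamics formalism): a population evolving under mutation and fitness-proportional selection on a block-structured fitness function, where all genotypes with the same number of completed blocks form a selectively neutral subbasin, and a "portal" is any mutation that completes a new block.

```python
import random

random.seed(0)
L, BLOCK = 20, 5              # genome length, block size (L // BLOCK fitness classes)
POP, MU = 100, 0.005          # population size, per-bit mutation rate

def fitness(g):
    # number of fully-set blocks of BLOCK consecutive ones
    return sum(all(g[i:i + BLOCK]) for i in range(0, L, BLOCK))

pop = [[0] * L for _ in range(POP)]
best_history = []
for gen in range(300):
    # fitness-proportional selection (+1 keeps all weights positive)
    weights = [fitness(g) + 1 for g in pop]
    pop = [random.choices(pop, weights)[0][:] for _ in range(POP)]
    # per-bit mutation: neutral diffusion within the current subbasin
    for g in pop:
        for i in range(L):
            if random.random() < MU:
                g[i] ^= 1
    best_history.append(max(fitness(g) for g in pop))
```

Plotting `best_history` for such a run typically shows the epochal signature: long plateaus at one fitness class, punctuated by sudden jumps when a portal genotype is discovered.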
Towards efficient multiobjective optimization: multiobjective statistical criterions
The use of Surrogate Based Optimization (SBO) is widely spread in engineering design to reduce the number of computationally expensive simulations. However, "real-world" problems often consist of multiple, conflicting objectives leading to a set of equivalent solutions (the Pareto front). The objectives are often aggregated into a single cost function to reduce the computational cost, though a better approach is to use multiobjective optimization methods to directly identify a set of Pareto-optimal solutions, which can be used by the designer to make more efficient design decisions (instead of making those decisions upfront). Most of the work in multiobjective optimization is focused on MultiObjective Evolutionary Algorithms (MOEAs). While MOEAs are well-suited to handle large, intractable design spaces, they typically require thousands of expensive simulations, which is prohibitively expensive for the problems under study. Therefore, the use of surrogate models in multiobjective optimization, denoted as MultiObjective Surrogate-Based Optimization (MOSBO), may prove to be even more worthwhile than SBO methods to expedite the optimization process. In this paper, the authors propose the Efficient Multiobjective Optimization (EMO) algorithm which uses Kriging models and multiobjective versions of the expected improvement and probability of improvement criteria to identify the Pareto front with a minimal number of expensive simulations. The EMO algorithm is applied on multiple standard benchmark problems and compared against the well-known NSGA-II and SPEA2 multiobjective optimization methods with promising results.
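For orientation, the standard single-objective forms of the two criteria the paper generalizes have closed-form expressions. The sketch below assumes a Kriging model supplies a predictive mean `mu` and standard deviation `sigma` at a candidate point, with `f_best` the incumbent best observed value (minimization); the multiobjective versions in the paper are more involved and are not reproduced here.

```python
import math

def norm_pdf(z):
    # standard normal density
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    # standard normal cumulative distribution via erf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, f_best):
    # closed-form EI for a Gaussian prediction (minimization)
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)   # deterministic prediction
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm_cdf(z) + sigma * norm_pdf(z)

def probability_of_improvement(mu, sigma, f_best):
    # probability that the prediction beats the incumbent
    if sigma <= 0.0:
        return 1.0 if mu < f_best else 0.0
    return norm_cdf((f_best - mu) / sigma)

# a point predicted one standard deviation below the incumbent
print(expected_improvement(mu=0.0, sigma=1.0, f_best=1.0))   # ≈ 1.083
```

EI rewards both a low predicted mean and a high predictive uncertainty, which is what makes it useful for balancing exploitation and exploration with very few expensive simulations.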
RGFGA: An efficient representation and crossover for grouping genetic algorithms
There is substantial research into genetic algorithms that are used to group large numbers of
objects into mutually exclusive subsets based upon some fitness function. However, nearly all
methods involve degeneracy to some degree.
We introduce a new representation for grouping genetic algorithms, the restricted growth function
genetic algorithm, that effectively removes all degeneracy, resulting in a more
efficient search. A new crossover operator is also described that exploits a
measure of similarity between chromosomes in a population. Using several
synthetic datasets, we compare the performance of our representation and
crossover with another well-known state-of-the-art GA method, a strawman
optimisation method and a well-established statistical clustering algorithm,
with encouraging results.
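The degeneracy problem and the restricted-growth-function remedy can be made concrete with a small sketch (illustrative code, not the authors' implementation): a grouping chromosome assigns each object a group label, but permuting the labels yields the same partition, so many chromosomes encode one solution. Relabeling groups in order of first appearance produces the unique restricted growth function (RGF) form, in which the first label is 0 and each label is at most one above the running maximum.

```python
def to_rgf(chromosome):
    # canonicalize a grouping chromosome: relabel groups by first appearance
    relabel, rgf = {}, []
    for label in chromosome:
        if label not in relabel:
            relabel[label] = len(relabel)   # next unused canonical label
        rgf.append(relabel[label])
    return rgf

def is_rgf(seq):
    # valid RGF: starts at 0, each value at most one above the running maximum
    return seq[0] == 0 and all(seq[i] <= max(seq[:i]) + 1
                               for i in range(1, len(seq)))

# two label-permuted chromosomes encode the same partition of five objects
print(to_rgf([2, 2, 0, 1, 0]))   # [0, 0, 1, 2, 1]
print(to_rgf([1, 1, 2, 0, 2]))   # [0, 0, 1, 2, 1]
```

Because every partition has exactly one RGF representation, the GA's search space shrinks to the set of distinct partitions, which is what yields the more efficient search claimed in the abstract.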
How to shift bias: Lessons from the Baldwin effect
An inductive learning algorithm takes a set of data as input and generates a hypothesis as
output. A set of data is typically consistent with an infinite number of hypotheses;
therefore, there must be factors other than the data that determine the output of the
learning algorithm. In machine learning, these other factors are called the bias of the
learner. Classical learning algorithms have a fixed bias, implicit in their design. Recently
developed learning algorithms dynamically adjust their bias as they search for a
hypothesis. Algorithms that shift bias in this manner are not as well understood as
classical algorithms. In this paper, we show that the Baldwin effect has implications for
the design and analysis of bias shifting algorithms. The Baldwin effect was proposed in
1896 to explain how phenomena that might appear to require Lamarckian evolution
(inheritance of acquired characteristics) can arise from purely Darwinian evolution.
Hinton and Nowlan presented a computational model of the Baldwin effect in 1987. We
explore a variation on their model, which we constructed explicitly to illustrate the lessons
that the Baldwin effect has for research in bias shifting algorithms. The main lesson is that
it appears that a good strategy for shift of bias in a learning algorithm is to begin with a
weak bias and gradually shift to a strong bias.
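The flavor of the Hinton and Nowlan (1987) setup can be conveyed in a few lines (a minimal sketch with assumed parameter values, not their exact model): each gene is 0, 1, or '?' (learnable); an individual "learns" by randomly guessing its '?' positions for a fixed number of trials, and finding the single good phenotype (all ones) earlier yields higher fitness. A gene fixed at 0 makes the target unreachable by learning, while many '?' genes make it reachable but costly; selection then gradually replaces '?' alleles with correct fixed ones.

```python
import random

random.seed(1)
N, TRIALS = 10, 200   # genome length and learning-trial budget (assumed values)

def fitness(genotype):
    # a fixed wrong allele (0) makes the all-ones target unreachable
    if 0 in genotype:
        return 1.0
    unknowns = genotype.count('?')
    for trial in range(TRIALS):
        # one learning trial: guess every plastic position at random
        if all(random.getrandbits(1) for _ in range(unknowns)):
            # found the target: reward earlier discovery more strongly
            return 1.0 + (N - 1) * (TRIALS - trial) / TRIALS
    return 1.0

# all correct fixed alleles: no learning needed, maximal fitness
print(fitness([1] * N))   # 10.0
```

A fully plastic individual (`['?'] * N`) can also reach maximal fitness, but only if it guesses the target on the first trial; on average it pays a learning cost, which is the selective pressure that drives the shift from weak (plastic) to strong (fixed) bias.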
Parameter Sensitivity Analysis of Social Spider Algorithm
Social Spider Algorithm (SSA) is a recently proposed general-purpose
real-parameter metaheuristic designed to solve global numerical optimization
problems. This work systematically benchmarks SSA on a suite of 11 functions
with different control parameters. We conduct parameter sensitivity analysis of
SSA using advanced non-parametric statistical tests to draw statistically
significant conclusions about the best-performing parameter settings. These
conclusions can be adopted in future work to reduce the effort of parameter
tuning. In addition, we perform a success rate test to reveal the impact of the
control parameters on the convergence speed of the algorithm.
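To illustrate the kind of non-parametric comparison such an analysis relies on (a generic sketch; the paper's specific test procedure is not reproduced here), the Mann-Whitney U statistic compares final objective values from repeated runs under two parameter settings without assuming normality:

```python
def rank_sum_u(a, b):
    # U statistic: number of (a_i, b_j) pairs with a_i < b_j (+0.5 per tie)
    u = 0.0
    for x in a:
        for y in b:
            if x < y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# hypothetical final best-fitness values from repeated runs (minimization)
setting_a = [0.12, 0.10, 0.15, 0.11, 0.13]
setting_b = [0.30, 0.25, 0.28, 0.33, 0.27]

u = rank_sum_u(setting_a, setting_b)
print(u, len(setting_a) * len(setting_b))   # 25.0 25
```

A U value at (or near) its maximum `len(a) * len(b)` indicates that setting A beats setting B in essentially every pairwise comparison; in practice one would convert U to a p-value (e.g. with `scipy.stats.mannwhitneyu`) before claiming statistical significance.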
Fast calculation of multiobjective probability of improvement and expected improvement criteria for Pareto optimization
The use of surrogate based optimization (SBO) is widely spread in engineering design to reduce the number of computationally expensive simulations. However, "real-world" problems often consist of multiple, conflicting objectives leading to a set of competitive solutions (the Pareto front). The objectives are often aggregated into a single cost function to reduce the computational cost, though a better approach is to use multiobjective optimization methods to directly identify a set of Pareto-optimal solutions, which can be used by the designer to make more efficient design decisions (instead of weighting and aggregating the costs upfront). Most of the work in multiobjective optimization is focused on multiobjective evolutionary algorithms (MOEAs). While MOEAs are well-suited to handle large, intractable design spaces, they typically require thousands of expensive simulations, which is prohibitively expensive for the problems under study. Therefore, the use of surrogate models in multiobjective optimization, denoted as multiobjective surrogate-based optimization, may prove to be even more worthwhile than SBO methods to expedite the optimization of computationally expensive systems. In this paper, the authors propose the efficient multiobjective optimization (EMO) algorithm which uses Kriging models and multiobjective versions of the probability of improvement and expected improvement criteria to identify the Pareto front with a minimal number of expensive simulations. The EMO algorithm is applied on multiple standard benchmark problems and compared against the well-known NSGA-II, SPEA2 and SMS-EMOA multiobjective optimization methods.
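The multiobjective criteria above integrate improvement over the region of objective space not dominated by the current Pareto front, so a basic prerequisite is extracting the non-dominated subset of observed objective vectors. A minimal sketch (illustrative, not the paper's fast calculation) for minimization:

```python
def dominates(p, q):
    # p dominates q: no worse in every objective, strictly better in at least one
    return (all(pi <= qi for pi, qi in zip(p, q))
            and any(pi < qi for pi, qi in zip(p, q)))

def pareto_front(points):
    # keep every point that no other point dominates
    return [p for p in points if not any(dominates(q, p) for q in points)]

observed = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
print(pareto_front(observed))   # [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (2.5, 2.5)]
```

This brute-force filter is O(n²) in the number of observed points; the "fast calculation" in the title concerns efficiently integrating the improvement criteria over the non-dominated region, for which this naive dominance check is only the starting point.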