Particle algorithms for optimization on binary spaces
We discuss a unified approach to stochastic optimization of pseudo-Boolean
objective functions based on particle methods, including the cross-entropy
method and simulated annealing as special cases. We point out the need for
auxiliary sampling distributions, that is, parametric families on binary spaces,
which are able to reproduce complex dependency structures, and illustrate their
usefulness in our numerical experiments. We provide numerical evidence that
particle-driven optimization algorithms based on parametric families yield
superior results on strongly multi-modal optimization problems, while local
search heuristics outperform them on easier problems.
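As intuition for the particle-style approach, here is a minimal sketch of the cross-entropy method on a binary space using the simplest parametric family, a product of independent Bernoulli marginals. Function names, the smoothing constant 0.7, and all parameter values are illustrative, not taken from the paper:

```python
import random

def cross_entropy_optimize(f, n, pop=100, elite=10, iters=50, seed=0):
    """Minimal cross-entropy method for maximizing a pseudo-Boolean
    function f over {0,1}^n with independent Bernoulli marginals."""
    rng = random.Random(seed)
    p = [0.5] * n                          # marginal probabilities
    best_x, best_f = None, float("-inf")
    for _ in range(iters):
        # sample a population from the current parametric model
        X = [[1 if rng.random() < p[j] else 0 for j in range(n)]
             for _ in range(pop)]
        X.sort(key=f)
        top = X[-elite:]                   # elite samples
        for j in range(n):                 # smoothed parameter update
            mean_j = sum(x[j] for x in top) / elite
            p[j] = 0.7 * mean_j + 0.3 * p[j]
        if f(top[-1]) > best_f:
            best_f, best_x = f(top[-1]), top[-1][:]
    return best_x, best_f

# toy pseudo-Boolean objective: OneMax
best_x, best_f = cross_entropy_optimize(sum, n=20)
```

The independent-Bernoulli family cannot capture the complex dependency structures the abstract argues for; richer parametric families would replace the per-coordinate marginals `p`.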
Problem Understanding through Landscape Theory
In order to understand the structure of a problem we need to measure some features of the problem. Some examples of measures suggested in the past are autocorrelation and fitness-distance correlation. Landscape theory, developed in recent years in the field of combinatorial optimization, provides mathematical expressions to efficiently compute statistics on optimization problems. In this paper we discuss how we can use landscape theory in the context of problem understanding and present two software tools that can be used to efficiently compute the mentioned measures. Ministerio de Economía y Competitividad (TIN2011-28194).
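The measures mentioned can be estimated from sampled solutions. A minimal sketch (not the authors' tools) of fitness-distance correlation on bit strings, where a known global optimum is assumed:

```python
import random

def pearson(a, b):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def fitness_distance_correlation(samples, fitness, optimum):
    """Correlation between fitness and Hamming distance to the optimum."""
    d = [sum(xi != oi for xi, oi in zip(x, optimum)) for x in samples]
    f = [fitness(x) for x in samples]
    return pearson(f, d)

rng = random.Random(0)
optimum = [1] * 16
samples = [[rng.randrange(2) for _ in range(16)] for _ in range(200)]
fdc = fitness_distance_correlation(samples, sum, optimum)
# For OneMax, fitness = n - distance, so the estimated FDC is exactly -1.
```

A strongly negative FDC (for maximization) suggests fitness guides search toward the optimum; values near zero indicate a harder, less informative landscape.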
Analysis of Noisy Evolutionary Optimization When Sampling Fails
In noisy evolutionary optimization, sampling is a common strategy to deal
with noise. By the sampling strategy, the fitness of a solution is evaluated
multiple times (called \emph{sample size}) independently, and its true fitness
is then approximated by the average of these evaluations. Previous studies on
sampling are mainly empirical. In this paper, we first investigate the effect
of sample size from a theoretical perspective. By analyzing the (1+1)-EA on the
noisy LeadingOnes problem, we show that as the sample size increases, the
running time can reduce from exponential to polynomial, but then return to
exponential. This suggests that a proper sample size is crucial in practice.
Then, we investigate what strategies can work when sampling with any fixed
sample size fails. By two illustrative examples, we prove that using parent or
offspring populations can be better. Finally, we construct an artificial noisy
example to show that when using neither sampling nor populations is effective,
adaptive sampling (i.e., sampling with an adaptive sample size) can work. This,
for the first time, provides theoretical support for the use of adaptive
sampling.
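The sampling strategy described above can be sketched as follows; the one-bit prior noise model and all names are illustrative, not necessarily the paper's exact setting:

```python
import random

def leading_ones(x):
    """Number of leading one-bits (the LeadingOnes fitness)."""
    k = 0
    for b in x:
        if b != 1:
            break
        k += 1
    return k

def noisy_eval(x, noise_prob, rng):
    """One-bit prior noise: with probability noise_prob a uniformly
    chosen bit is flipped before the fitness is computed."""
    y = list(x)
    if rng.random() < noise_prob:
        y[rng.randrange(len(y))] ^= 1
    return leading_ones(y)

def sampled_fitness(x, m, noise_prob, rng):
    """The sampling strategy: average of m independent noisy evaluations."""
    return sum(noisy_eval(x, noise_prob, rng) for _ in range(m)) / m

rng = random.Random(0)
x = [1, 1, 0, 1, 1]
exact = leading_ones(x)                            # noise-free value: 2
avg = sampled_fitness(x, m=50, noise_prob=0.3, rng=rng)
```

In a (1+1)-EA the offspring would be accepted when its sampled fitness is at least the parent's; the analysis summarized above shows that the choice of the sample size m decides between polynomial and exponential running time.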
Unifying a Geometric Framework of Evolutionary Algorithms and Elementary Landscapes Theory
Evolutionary algorithms (EAs) are randomised general-purpose strategies, inspired by natural evolution, often used for finding (near) optimal solutions to problems in combinatorial optimisation. Over the last 50 years, many theoretical approaches in evolutionary computation have been developed to analyse the performance of EAs, design EAs or measure problem difficulty via fitness landscape analysis. An open challenge is to formally explain why a general class of EAs perform better, or worse, than others on a class of combinatorial problems across representations. However, the lack of a general unified theory of EAs and fitness landscapes, across problems and representations, makes it harder to characterise pairs of general classes of EAs and combinatorial problems where good performance can be guaranteed provably. This thesis explores a unification between a geometric framework of EAs and elementary landscapes theory, not tied to a specific representation or problem, with complementary strengths in the analysis of population-based EAs and combinatorial landscapes. This unification organises around three essential aspects: search space structure induced by crossovers, search behaviour of population-based EAs and structure of fitness landscapes. First, this thesis builds a crossover classification to systematically compare crossovers in the geometric framework and elementary landscapes theory, revealing a shared general subclass of crossovers: geometric recombination P-structures, which covers well-known crossovers. The crossover classification is then extended to a general framework for axiomatically analysing the population behaviour induced by crossover classes on associated EAs. This shows that the shared general class of all EAs using geometric recombination P-structures, but no mutation, always performs the same abstract form of convex evolutionary search.
Finally, this thesis characterises a class of globally convex combinatorial landscapes shared by the geometric framework and elementary landscapes theory: abstract convex elementary landscapes. It is formally explained why geometric recombination P-structure EAs can be expected to outperform random search on abstract convex elementary landscapes related to low-order graph Laplacian eigenvalues. Altogether, this thesis paves a way towards a general unified theory of EAs and combinatorial fitness landscapes.
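One concrete instance of the geometric view: offspring of uniform crossover on bit strings always lie on a shortest Hamming path between the parents, i.e. the distances to the two parents sum to the parents' distance. A small self-contained check of this geometricity property (an illustration, not the thesis's formalism):

```python
import random

def hamming(a, b):
    """Hamming distance between two equal-length bit strings."""
    return sum(u != v for u, v in zip(a, b))

def uniform_crossover(p1, p2, rng):
    """Each offspring bit is copied from a uniformly chosen parent."""
    return [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]

rng = random.Random(0)
p1 = [rng.randrange(2) for _ in range(32)]
p2 = [rng.randrange(2) for _ in range(32)]
child = uniform_crossover(p1, p2, rng)
# Geometricity: the child lies "between" the parents in the Hamming metric,
# because each child bit agrees with at least one parent.
assert hamming(p1, child) + hamming(child, p2) == hamming(p1, p2)
```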
Elementary landscape decomposition of the 0-1 unconstrained quadratic optimization
Journal of Heuristics, 19(4), pp. 711-728. Landscapes' theory provides a formal framework in which combinatorial optimization problems can be theoretically characterized as a sum of a special kind of landscape called elementary landscape. The elementary landscape decomposition of a combinatorial optimization problem is a useful tool for understanding the problem. Such a decomposition provides additional knowledge on the problem that can be exploited to explain the behavior of some existing algorithms when they are applied to the problem, or to create new search methods for the problem. In this paper we analyze the 0-1 Unconstrained Quadratic Optimization problem from the point of view of landscapes' theory. We prove that the problem can be written as the sum of two elementary components and we give the exact expressions for these components. We use the landscape decomposition to compute autocorrelation measures of the problem, and show some practical applications of the decomposition. Spanish Ministry of Science and Innovation and FEDER under contract TIN2008-06491-C04-01 (the M∗ project). Andalusian Government under contract P07-TIC-03044 (DIRICOM project).
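Autocorrelation measures of the kind derived from such decompositions can be checked empirically with a random walk. A minimal sketch on an illustrative instance (not the paper's exact expressions): with a diagonal coefficient vector the 0-1 quadratic objective reduces to OneMax, which is elementary on the hypercube, and theory predicts a lag-1 random-walk autocorrelation of 1 - 2/n:

```python
import random

def walk_autocorrelation(q_diag, steps=20000, seed=0):
    """Estimate the lag-1 fitness autocorrelation along a one-bit-flip
    random walk for the diagonal objective f(x) = sum_i q_i * x_i."""
    rng = random.Random(seed)
    n = len(q_diag)
    x = [rng.randrange(2) for _ in range(n)]
    f = sum(q * b for q, b in zip(q_diag, x))
    vals = []
    for _ in range(steps):
        vals.append(f)
        i = rng.randrange(n)
        f += q_diag[i] * (1 - 2 * x[i])   # incremental fitness update
        x[i] ^= 1                         # one-bit-flip move
    m = sum(vals) / steps
    v = [val - m for val in vals]
    var = sum(t * t for t in v) / steps
    cov = sum(v[t] * v[t + 1] for t in range(steps - 1)) / (steps - 1)
    return cov / var

n = 16
r = walk_autocorrelation([1] * n)
# elementary-landscape prediction for this instance: 1 - 2/n = 0.875
```

A general (non-diagonal) instance mixes the two elementary components the paper identifies, and its autocorrelation is a weighted combination of their individual decay rates.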
Exact Markov Chain-based Runtime Analysis of a Discrete Particle Swarm Optimization Algorithm on Sorting and OneMax
Meta-heuristics are powerful tools for solving optimization problems whose
structural properties are unknown or cannot be exploited algorithmically. We
propose such a meta-heuristic for a large class of optimization problems over
discrete domains based on the particle swarm optimization (PSO) paradigm. We
provide a comprehensive formal analysis of the performance of this algorithm on
certain "easy" reference problems in a black-box setting, namely the sorting
problem and the problem OneMax. In our analysis we use a Markov model of the
proposed algorithm to obtain upper and lower bounds on its expected
optimization time. Our bounds are essentially tight with respect to the
Markov model. We show that for a suitable choice of algorithm parameters the
expected optimization time is comparable to that of known algorithms and,
furthermore, for other parameter regimes, the algorithm behaves less greedily
and more exploratively, which can be desirable in practice in order to escape
local optima. Our analysis provides precise insight into the tradeoff between
optimization time and exploration. To obtain our results we introduce the
notion of indistinguishability of states of a Markov chain and provide bounds
on the solution of a recurrence equation with non-constant coefficients by
integration.
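For intuition about PSO on a discrete domain, here is a minimal binary PSO in the classic sigmoid-velocity style maximizing OneMax. This is a generic sketch under standard parameter choices (w, c1, c2, velocity clamp), not the specific algorithm or Markov model analyzed in the paper:

```python
import math
import random

def binary_pso_onemax(n=20, swarm=10, iters=200,
                      w=0.7, c1=1.5, c2=1.5, seed=0):
    """Binary PSO: real-valued velocities, bits resampled through a
    sigmoid of the velocity; personal and global bests as attractors."""
    rng = random.Random(seed)
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    X = [[rng.randrange(2) for _ in range(n)] for _ in range(swarm)]
    V = [[0.0] * n for _ in range(swarm)]
    pbest = [x[:] for x in X]
    pval = [sum(x) for x in X]
    g = max(range(swarm), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(swarm):
            for j in range(n):
                V[i][j] = (w * V[i][j]
                           + c1 * rng.random() * (pbest[i][j] - X[i][j])
                           + c2 * rng.random() * (gbest[j] - X[i][j]))
                V[i][j] = max(-4.0, min(4.0, V[i][j]))  # velocity clamp
                X[i][j] = 1 if rng.random() < sigmoid(V[i][j]) else 0
            f = sum(X[i])
            if f > pval[i]:                 # update personal best
                pval[i], pbest[i] = f, X[i][:]
                if f > gval:                # update global best
                    gval, gbest = f, X[i][:]
    return gbest, gval

gbest, gval = binary_pso_onemax()
```

The clamp on the velocity keeps the bit-flip probabilities away from 0 and 1, which is the mechanism behind the greediness/exploration tradeoff the abstract mentions.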
Fast Mutation in Crossover-based Algorithms
The heavy-tailed mutation operator proposed in Doerr, Le, Makhmara, and
Nguyen (GECCO 2017), called \emph{fast mutation} in line with the previously
used terminology, has so far been proven advantageous only in mutation-based
algorithms. There, it can relieve the algorithm designer from finding the
optimal mutation rate and nevertheless obtain a performance close to the one
that the optimal mutation rate gives.
In this first runtime analysis of a crossover-based algorithm using a
heavy-tailed choice of the mutation rate, we show an even stronger impact. For
the genetic algorithm optimizing the OneMax benchmark
function, we show that with a heavy-tailed mutation rate a linear runtime can
be achieved. This is asymptotically faster than what can be obtained with any
static mutation rate, and is asymptotically equivalent to the runtime of the
self-adjusting version of the parameter choice of the
genetic algorithm. This result is complemented by an empirical study which
shows the effectiveness of the fast mutation also on random satisfiable
Max-3SAT instances. Comment: This is a version of the same paper presented at
GECCO 2020, completed with the proofs which were missing because of the page
limit.
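The heavy-tailed operator can be sketched as follows: draw a mutation strength alpha from a power-law distribution on {1, ..., n/2} and flip each bit independently with probability alpha/n. The exponent beta = 1.5 and the function names are illustrative choices for this sketch:

```python
import random

def heavy_tailed_alpha(n, beta=1.5, rng=random):
    """Sample mutation strength alpha from a power law with exponent
    beta on {1, ..., n // 2} (mass concentrated on small alpha)."""
    ks = list(range(1, n // 2 + 1))
    weights = [k ** (-beta) for k in ks]
    return rng.choices(ks, weights=weights)[0]

def fast_mutate(x, beta=1.5, rng=random):
    """Flip each bit independently with rate alpha / n, alpha heavy-tailed."""
    n = len(x)
    p = heavy_tailed_alpha(n, beta, rng) / n
    return [b ^ (rng.random() < p) for b in x]
```

Most steps behave like standard-bit mutation with rate about 1/n, while the heavy tail occasionally produces much larger jumps, which is what removes the need to tune a single static mutation rate.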
- …