Improved Runtime Bounds for the Univariate Marginal Distribution Algorithm via Anti-Concentration
Unlike traditional evolutionary algorithms which produce offspring via
genetic operators, Estimation of Distribution Algorithms (EDAs) sample
solutions from probabilistic models which are learned from selected
individuals. It is hoped that EDAs may improve optimisation performance on
epistatic fitness landscapes by learning variable interactions. However, hardly
any rigorous results are available to support claims about the performance of
EDAs, even for fitness functions without epistasis. The expected runtime of the
Univariate Marginal Distribution Algorithm (UMDA) on OneMax was recently shown
to be in O(nλ log λ) by Dang and Lehre
(GECCO 2015). Later, Krejca and Witt (FOGA 2017) proved the lower bound
Ω(λ√n + n log n) via an involved drift analysis.
We prove the upper bound O(nλ), given some restrictions
on the population size. This implies the tight bound Θ(n log n) when
λ = Θ(log n), matching the runtime
of classical EAs. Our analysis uses the level-based theorem and
anti-concentration properties of the Poisson-Binomial distribution. We expect
that these generic methods will facilitate further analysis of EDAs.
Comment: 19 pages, 1 figure
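To make the algorithm under analysis concrete, here is a minimal Python sketch of a UMDA run on OneMax: sample a population from a product distribution, select the best individuals, and re-estimate the univariate marginals. The function name, population sizes, and the margin 1/n are illustrative choices, not taken from the paper.

```python
import random

def umda_onemax(n, lam=50, mu=25, max_gens=2000, seed=0):
    """Minimal UMDA sketch on OneMax: sample lam bitstrings from a
    product distribution, select the mu fittest, update marginals."""
    rng = random.Random(seed)
    p = [0.5] * n                      # univariate marginal probabilities
    lo, hi = 1.0 / n, 1.0 - 1.0 / n    # margins keep probabilities off 0 and 1
    for gen in range(max_gens):
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(lam)]
        pop.sort(key=sum, reverse=True)          # OneMax fitness = number of ones
        if sum(pop[0]) == n:
            return gen                            # optimum found
        selected = pop[:mu]
        for i in range(n):
            freq = sum(x[i] for x in selected) / mu
            p[i] = min(max(freq, lo), hi)         # restrict to [1/n, 1 - 1/n]
    return max_gens
```

The margins [1/n, 1 - 1/n] prevent marginals from fixating, a standard ingredient in runtime analyses of the UMDA.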
No Free Lunch Theorem and Black-Box Complexity Analysis for Adversarial Optimisation
Black-box optimisation is an important area of optimisation. The original No Free Lunch (NFL) theorems highlight the limitations of traditional black-box optimisation and learning algorithms, serving as a theoretical foundation for traditional optimisation. No Free Lunch analysis in adversarial (also called maximin) optimisation is a long-standing problem [45, 46]. This paper first rigorously proves an NFL theorem for general black-box adversarial optimisation when considering a Pure Strategy Nash Equilibrium (NE) as the solution concept. We emphasise the solution concept (i.e., how optimality is defined in adversarial optimisation) as the key to our NFL theorem. In particular, if Nash Equilibrium is considered as the solution concept and the cost of the algorithm is measured in terms of the number of columns and rows queried in the payoff matrix, then the average performance of all black-box adversarial optimisation algorithms is the same. Moreover, we are the first to introduce black-box complexity to analyse black-box adversarial optimisation algorithms. We employ Yao's Principle and our new NFL theorem to provide general lower bounds for the query complexity of finding a Nash Equilibrium in adversarial optimisation. Finally, we illustrate the practical ramifications of our results on simple two-player zero-sum games. More specifically, no black-box optimisation algorithm can find the unique Nash equilibrium of a two-player zero-sum game with fewer than logarithmically many queries in the size of the search space. Meanwhile, no black-box algorithm can solve any bimatrix game with a unique NE with fewer than a linear number of queries in the size of the input payoff matrices.
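To make the query model concrete, here is an illustrative check (not taken from the paper) that a given entry of a zero-sum payoff matrix is a pure-strategy Nash equilibrium, i.e. a saddle point. Verifying a candidate only requires querying one row and one column of the matrix.

```python
def is_pure_nash(payoff, i, j):
    """Check whether (i, j) is a pure-strategy Nash equilibrium of a
    zero-sum game given by the row player's payoff matrix.
    Queries only row i and column j (m + n - 1 entries)."""
    v = payoff[i][j]
    # Row player (maximiser) must not gain by deviating within column j.
    if any(payoff[k][j] > v for k in range(len(payoff))):
        return False
    # Column player (minimiser) must not gain by deviating within row i.
    if any(payoff[i][l] < v for l in range(len(payoff[0]))):
        return False
    return True
```

Verifying a candidate is cheap; the lower bounds stated above concern the harder task of locating such an equilibrium in the first place.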
On the Impact of Mutation-Selection Balance on the Runtime of Evolutionary Algorithms
The interplay between mutation and selection plays a fundamental role in the
behaviour of evolutionary algorithms (EAs). However, this interplay is still
not completely understood. This paper presents a rigorous runtime analysis of a
non-elitist population-based EA that uses the linear ranking selection
mechanism. The analysis focuses on how the balance between the parameter η,
controlling the selection pressure in linear ranking, and the parameter χ,
controlling the bit-wise mutation rate, impacts the runtime of the algorithm.
The results point out situations where a correct balance between selection
pressure and mutation rate is essential for finding the optimal solution in
polynomial time. In particular, it is shown that there exist fitness functions
which can only be solved in polynomial time if the ratio between the
parameters η and χ is within a narrow critical interval, and where a small
change in this ratio can increase the runtime exponentially. Furthermore, it is
shown quantitatively how the appropriate parameter choice depends on the
characteristics of the fitness function. In addition to the original results on
the runtime of EAs, this paper also introduces a very useful analytical tool,
i.e., multi-type branching processes, to the runtime analysis of non-elitist
population-based EAs.
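For concreteness, here is a minimal sketch of linear ranking selection with pressure parameter eta in [1, 2]. The weighting formula follows the standard textbook formulation and may differ in details from the paper's exact setup.

```python
import random

def linear_ranking_select(pop, fitness, eta, rng):
    """Linear ranking selection: sort by fitness; the individual with rank r
    (r = 0 is best) is chosen with probability proportional to
    eta - 2*(eta - 1)*r/(lam - 1), so the best individual has selection
    pressure eta and the worst has 2 - eta."""
    lam = len(pop)
    ranked = sorted(pop, key=fitness, reverse=True)
    weights = [eta - 2.0 * (eta - 1.0) * r / (lam - 1) for r in range(lam)]
    return rng.choices(ranked, weights=weights, k=1)[0]
```

At eta = 2 the worst-ranked individual is never selected; at eta = 1 selection is uniform, i.e. no selection pressure at all.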
Concentration Tail-Bound Analysis of Coevolutionary and Bandit Learning Algorithms
Runtime analysis, as a branch of the theory of AI, studies how the number of iterations algorithms take before finding a solution (their runtime) depends on the design of the algorithm and the problem structure. Drift analysis is a state-of-the-art tool for estimating the runtime of randomised algorithms, such as bandit and evolutionary algorithms. Drift refers roughly to the expected progress towards the optimum per iteration. This paper considers the problem of deriving concentration tail bounds on the runtime of algorithms. It provides a novel drift theorem that gives precise exponential tail bounds given positive, weak, zero and even negative drift. Previously, such exponential tail bounds were missing in the case of weak, zero, or negative drift.
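The notion of drift can be illustrated with a toy process: a random walk with constant drift delta towards 0, whose mean hitting time the classical additive drift theorem predicts to be exactly x0/delta. (This only illustrates expected runtime; the paper's contribution, exponential tail bounds, is not reproduced here.)

```python
import random

def hitting_time(x0, delta, rng, max_steps=10**6):
    """Biased random walk on the integers: from x, step to x - 1 with
    probability (1 + delta)/2, else to x + 1, so the expected progress
    (drift) towards 0 is delta per step. Returns the first hitting time
    of 0."""
    x = x0
    for t in range(1, max_steps + 1):
        x += -1 if rng.random() < (1 + delta) / 2 else 1
        if x <= 0:
            return t
    return max_steps

# Additive drift predicts E[T] = x0 / delta = 100 / 0.5 = 200 for this walk.
rng = random.Random(42)
runs = [hitting_time(100, 0.5, rng) for _ in range(200)]
avg = sum(runs) / len(runs)
```

Averaging many runs gives a value close to the drift-theorem prediction of 200; tail bounds of the kind proved in the paper quantify how rarely individual runs stray far from it.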
Self-adaptation in non-elitist evolutionary algorithms on discrete problems with unknown structure
A key challenge to make effective use of evolutionary algorithms is to choose
appropriate settings for their parameters. However, the appropriate parameter
setting generally depends on the structure of the optimisation problem, which
is often unknown to the user. Non-deterministic parameter control mechanisms
adjust parameters using information obtained from the evolutionary process.
Self-adaptation -- where parameter settings are encoded in the chromosomes of
individuals and evolve through mutation and crossover -- is a popular parameter
control mechanism in evolutionary strategies. However, there is little
theoretical evidence that self-adaptation is effective, and self-adaptation has
largely been ignored by the discrete evolutionary computation community.
Here we show through a theoretical runtime analysis that a non-elitist,
discrete evolutionary algorithm which self-adapts its mutation rate not only
outperforms EAs which use static mutation rates on LeadingOnes, but also
improves asymptotically on an EA using a state-of-the-art control mechanism.
The structure of this problem depends on a parameter, which is a priori
unknown to the algorithm, and which is needed to appropriately set a
fixed mutation rate. The self-adaptive EA achieves the same asymptotic runtime
as if this parameter was known to the algorithm beforehand, which is an
asymptotic speedup for this problem compared to all other EAs previously
studied. An experimental study of how the mutation rates evolve shows that they
respond adequately to a diverse range of problem structures.
These results suggest that self-adaptation should be adopted more broadly as
a parameter control mechanism in discrete, non-elitist evolutionary algorithms.
Comment: To appear in IEEE Transactions on Evolutionary Computation
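A minimal sketch of the self-adaptation idea described above: the mutation rate is stored in the individual and is itself mutated before being applied to the bits. The multiplicative update with factor A and the clamping interval [1/n, 1/2] are illustrative assumptions, not the paper's exact mechanism.

```python
import random

def self_adaptive_step(parent_bits, parent_rate, rng, A=1.2):
    """One self-adaptive reproduction step: perturb the encoded mutation
    rate, then flip each bit independently with the new rate. The child
    inherits both the bits and the rate, so rates evolve with the search."""
    n = len(parent_bits)
    # Multiply or divide the encoded rate by A, clamped to [1/n, 1/2].
    rate = parent_rate * A if rng.random() < 0.5 else parent_rate / A
    rate = min(max(rate, 1.0 / n), 0.5)
    child = [b ^ (1 if rng.random() < rate else 0) for b in parent_bits]
    return child, rate
```

Under non-elitist selection, individuals carrying a badly tuned rate tend to produce unfit offspring and die out, which is how the rate distribution can track the (unknown) problem structure.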