Analysis of Different Types of Regret in Continuous Noisy Optimization
The performance measure of an algorithm is a crucial part of its analysis. Performance can be assessed by studying the algorithm's convergence rate, i.e. by studying some (hopefully convergent) sequence that measures how good the approximated optimum is compared to the real optimum. The concept of regret is widely used in the bandit literature for assessing the performance of an algorithm. The same concept also appears in the optimization literature, sometimes under other names or without a specific name, and the numerical evaluation of the convergence rate of noisy optimization algorithms often involves approximations of regret. We discuss here two types of approximations of the Simple Regret used in practice for evaluating noisy optimization algorithms. Using algorithms of different natures and the noisy sphere function, we show the following results. The approximation of the Simple Regret used in some optimization testbeds, termed here Approximate Simple Regret, fails to estimate the convergence rate of the Simple Regret. We also discuss a recent approximation of the Simple Regret, which we term Robust Simple Regret, and show its advantages and disadvantages.
Comment: Genetic and Evolutionary Computation Conference 2016, Jul 2016, Denver, United States. 201
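To illustrate the distinction the abstract draws, the following Python sketch (our own, not taken from the paper) computes a true simple regret and a naive noisy approximation of it on a noisy sphere function. The random-search optimizer, the noise level, and the re-evaluation count are assumptions chosen only to make the example self-contained.

import numpy as np

rng = np.random.default_rng(0)

def noisy_sphere(x, sigma=1.0):
    # One noisy evaluation of the sphere function, whose optimum value is 0 at the origin.
    return float(np.dot(x, x)) + sigma * rng.normal()

def true_sphere(x):
    # Noise-free objective, available here only because the test function is known.
    return float(np.dot(x, x))

dim, budget = 5, 2000
best_x, best_noisy = None, np.inf
for _ in range(budget):
    x = rng.normal(size=dim)          # naive random search as a stand-in optimizer
    y = noisy_sphere(x)
    if y < best_noisy:
        best_x, best_noisy = x, y

# True simple regret: noise-free value of the recommended point minus the optimum value (0).
simple_regret = true_sphere(best_x)
# One possible noisy estimate: average a few fresh noisy evaluations at the same point.
approx_regret = np.mean([noisy_sphere(best_x) for _ in range(10)])
print(simple_regret, approx_regret)

Because the recommended point is selected for its best noisy value, estimates built from noisy evaluations can deviate systematically from the true regret; the sketch only illustrates why such proxies must be handled with care, not the specific definitions analysed in the paper.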
Sorting by Swaps with Noisy Comparisons
We study the sorting of permutations by random swaps when each comparison gives the wrong result with some fixed probability p. We use this process as a prototype for the behaviour of randomized, comparison-based optimization heuristics in the presence of noisy comparisons. As a quality measure, we compute the expected fitness of the stationary distribution. To measure the runtime, we compute the minimal number of steps after which the average fitness approximates the expected fitness of the stationary distribution.
We study the process in which, in each round, a random pair of elements at distance at most r is compared. We give theoretical results for the extreme cases r = 1 and r = n, and experimental results for the intermediate cases. We find a trade-off between faster convergence (for large r) and better quality of the solution after convergence (for small r).
Comment: An extended abstract of this paper has been presented at the Genetic and Evolutionary Computation Conference (GECCO 2017)
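A minimal Python sketch of this kind of process is given below. It is not the paper's implementation: the fitness measure (number of correctly ordered pairs), the concrete values of n, r, p, and the step rule (swap a pair whenever the noisy comparison reports it out of order) are all assumptions made for illustration.

import random

def noisy_less(a, b, p):
    # Compare a < b, but return the wrong answer with probability p.
    return (a < b) if random.random() >= p else not (a < b)

def fitness(perm):
    # Number of correctly ordered pairs (higher means closer to sorted).
    n = len(perm)
    return sum(perm[i] < perm[j] for i in range(n) for j in range(i + 1, n))

def noisy_swap_sort(n=50, r=5, p=0.1, steps=20000):
    perm = list(range(n))
    random.shuffle(perm)
    for _ in range(steps):
        i = random.randrange(n)
        j = random.randrange(max(0, i - r), min(n, i + r + 1))  # pair at distance <= r
        if i == j:
            continue
        lo, hi = min(i, j), max(i, j)
        # Swap whenever the noisy comparison claims the pair is out of order.
        if noisy_less(perm[hi], perm[lo], p):
            perm[lo], perm[hi] = perm[hi], perm[lo]
    return fitness(perm)

print(noisy_swap_sort())

Running the sketch with different values of r gives an informal feel for the trade-off described above: larger r mixes the permutation faster, while smaller r leaves fewer ways for a single comparison error to undo progress.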
Analysis of Evolutionary Algorithms in Dynamic and Stochastic Environments
Many real-world optimization problems occur in environments that change
dynamically or involve stochastic components. Evolutionary algorithms and other
bio-inspired algorithms have been widely applied to dynamic and stochastic
problems. This survey gives an overview of major theoretical developments in
the area of runtime analysis for these problems. We review recent theoretical
studies of evolutionary algorithms and ant colony optimization for problems
where the objective functions or the constraints change over time. Furthermore,
we consider stochastic problems under various noise models and point out some
directions for future research.
Comment: This book chapter is to appear in the book "Theory of Randomized Search Heuristics in Discrete Search Spaces", which is edited by Benjamin Doerr and Frank Neumann and is scheduled to be published by Springer in 201