Online Selection of CMA-ES Variants
In the field of evolutionary computation, one of the most challenging topics
is algorithm selection. Knowing which heuristics to use for which optimization
problem is key to obtaining high-quality solutions. We aim to extend this
research topic by taking a first step towards a selection method for adaptive
CMA-ES algorithms. We build upon the theoretical work done by van Rijn
\textit{et al.} [PPSN'18], in which the potential of switching between
different CMA-ES variants was quantified in the context of a modular CMA-ES
framework.
We demonstrate in this work that their proposed approach is not very
reliable, in that implementing the suggested adaptive configurations does not
yield the predicted performance gains. We propose a revised approach, which
results in a more robust fit between predicted and actual performance. The
adaptive CMA-ES approach obtains performance gains on 18 out of 24 tested
functions of the BBOB benchmark, with stable advantages of up to 23\%. An
analysis of module activation indicates which modules are most crucial for the
different phases of optimizing each of the 24 benchmark problems. The module
activation also suggests that additional gains are possible when including the
(B)IPOP modules, which we have excluded from the present work.
Comment: To appear at the Genetic and Evolutionary Computation Conference (GECCO'19). Appendix will be added in due time.
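The paper's modular CMA-ES framework is not reproduced here, but the core idea of switching algorithm configurations online can be sketched with a much simpler algorithm. The snippet below is a toy stand-in, assuming a (1+1)-ES on the sphere function and two hypothetical step-size adaptation "variants"; the variant names and update constants are illustrative assumptions, not the paper's configurations.

```python
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def one_plus_one_es(x, sigma, variant, budget, rng):
    """(1+1)-ES with one of two step-size rules; `variant` is a toy
    stand-in for switching modules in a modular CMA-ES."""
    fx = sphere(x)
    for _ in range(budget):
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = sphere(y)
        if fy <= fx:  # elitist acceptance: progress survives the switch
            x, fx = y, fy
            sigma *= 1.5 if variant == "aggressive" else 1.1
        else:
            sigma *= 0.82 if variant == "aggressive" else 0.97
    return x, sigma, fx

rng = random.Random(42)
start = [5.0] * 10
# Phase 1: broad search with one variant, then switch online.
x, sigma, _ = one_plus_one_es(start, 1.0, "aggressive", 500, rng)
x, sigma, best = one_plus_one_es(x, sigma, "conservative", 500, rng)
print(best)  # final sphere value after the switched run
```

Because acceptance is elitist, the switched run can only improve on the configuration it inherits, which is the property that makes online variant selection meaningful.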
Runtime Analysis for Self-adaptive Mutation Rates
We propose and analyze a self-adaptive version of the $(1,\lambda)$
evolutionary algorithm in which the current mutation rate is part of the
individual and thus also subject to mutation. A rigorous runtime analysis on
the OneMax benchmark function reveals that a simple local mutation scheme for
the rate leads to an expected optimization time (number of fitness evaluations)
of $O(n\lambda/\log\lambda + n\log n)$ when $\lambda$ is at least $C\ln n$ for
some constant $C > 0$. For all values of $\lambda \geq C\ln n$, this
performance is asymptotically best possible among all $\lambda$-parallel
mutation-based unbiased black-box algorithms.
Our result shows that self-adaptation in evolutionary computation can find
complex optimal parameter settings on the fly. At the same time, it proves that
a relatively complicated self-adjusting scheme for the mutation rate proposed
by Doerr, Gie{\ss}en, Witt, and Yang~(GECCO~2017) can be replaced by our simple
endogenous scheme.
On the technical side, the paper contributes new tools for the analysis of
two-dimensional drift processes arising in the analysis of dynamic parameter
choices in EAs, including bounds on occupation probabilities in processes with
non-constant drift.
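The endogenous scheme described above has a very short implementation. The sketch below is an illustrative reading of it, assuming a (1,λ) EA on OneMax in which each offspring first halves or doubles the inherited rate r (clamped to [2, n/4]) and then flips each bit with probability r/n; the population size, clamp interval, and budget are demo assumptions, not the paper's exact constants.

```python
import random

def onemax(x):
    return sum(x)

def self_adaptive_ea(n=50, lam=16, seed=1, budget=200_000):
    """(1,lambda) EA on OneMax with a self-adaptive mutation rate:
    each offspring halves or doubles the inherited rate r (clamped to
    [2, n/4]), then flips every bit independently with probability r/n.
    The best offspring and its rate replace the parent (comma selection)."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    rate, evals = 2.0, 0
    while onemax(parent) < n and evals < budget:
        best, best_f, best_r = parent, -1, rate
        for _ in range(lam):
            r = min(n / 4.0, max(2.0, rate * rng.choice((0.5, 2.0))))
            child = [b ^ (rng.random() < r / n) for b in parent]
            evals += 1
            f = onemax(child)
            if f > best_f:
                best, best_f, best_r = child, f, r
        parent, rate = best, best_r  # the old parent is always discarded
    return evals

evals = self_adaptive_ea()
print(evals)  # fitness evaluations until the all-ones optimum
```

Note that the rate travels with the individual: a lucky rate is kept only because the offspring carrying it won the selection, which is exactly the self-adaptation mechanism the analysis studies.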
Grey-box model identification via evolutionary computing
This paper presents an evolutionary grey-box model identification methodology that makes the best use of a priori knowledge on
a clear-box model with a global structural representation of the physical system under study, whilst incorporating accurate blackbox
models for immeasurable and local nonlinearities of a practical system. The evolutionary technique is applied to building
dominant structural identification with local parametric tuning without the need of a differentiable performance index in the
presence of noisy data. It is shown that the evolutionary technique provides an excellent fitting performance and is capable of
accommodating multiple objectives such as to examine the relationships between model complexity and fitting accuracy during the
model building process. Validation results show that the proposed method offers robust, uncluttered and accurate models for two
practical systems. It is expected that this type of grey-box models will accommodate many practical engineering systems for a better
modelling accuracy
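The key ingredient, derivative-free parametric tuning of a known structure against noisy data, can be illustrated in a few lines. This is a minimal sketch, not the paper's methodology: it assumes a hypothetical clear-box structure y = a·exp(−b·t) + c and deliberately uses a non-differentiable worst-case error index to show that the evolutionary search needs no gradients.

```python
import math
import random

# Synthetic "measured" data from an assumed clear-box structure
# y = a * exp(-b * t) + c, plus noise standing in for unmodelled effects.
rng = random.Random(7)
TRUE = (2.0, 0.7, 0.5)
data = [(t / 10.0,
         TRUE[0] * math.exp(-TRUE[1] * t / 10.0) + TRUE[2]
         + rng.gauss(0.0, 0.02))
        for t in range(101)]

def cost(p):
    """Worst-case absolute error: a non-differentiable fitting index."""
    a, b, c = p
    return max(abs(a * math.exp(-b * t) + c - y) for t, y in data)

# (1+1)-style evolutionary tuning of the grey-box parameters.
p = [1.0, 1.0, 0.0]
step = 0.5
best = cost(p)
for i in range(4000):
    q = [pi + step * rng.gauss(0.0, 1.0) for pi in p]
    cq = cost(q)
    if cq <= best:
        p, best = q, cq
    if i % 500 == 499:
        step *= 0.5  # anneal the search radius
print(best)  # worst-case fit error, near the noise level
```

A smooth least-squares index would also work, but the max-error index makes the point: the search only compares candidate models, so the performance index need not be differentiable.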
On the Runtime of Randomized Local Search and Simple Evolutionary Algorithms for Dynamic Makespan Scheduling
Evolutionary algorithms have been frequently used for dynamic optimization
problems. With this paper, we contribute to the theoretical understanding of
this research area. We present the first computational complexity analysis of
evolutionary algorithms for a dynamic variant of a classical combinatorial
optimization problem, namely makespan scheduling. We study the model of a
strong adversary which is allowed to change one job at regular intervals.
Furthermore, we investigate the setting of random changes. Our results show
that randomized local search and a simple evolutionary algorithm are very
effective in dynamically tracking changes made to the problem instance.
Comment: Conference version appears at IJCAI 2015.
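The tracking behaviour is easy to reproduce empirically. The sketch below is an illustrative setup, not the paper's exact model: two machines, thirty jobs, randomized local search that moves one uniformly chosen job per step, and a change that resizes one job every 1000 steps (the interval and job sizes are demo assumptions).

```python
import random

def makespan(jobs, assign):
    load0 = sum(s for s, a in zip(jobs, assign) if a == 0)
    load1 = sum(s for s, a in zip(jobs, assign) if a == 1)
    return max(load0, load1)

rng = random.Random(3)
jobs = [rng.randint(1, 100) for _ in range(30)]
assign = [rng.randint(0, 1) for _ in jobs]

for step in range(20_000):
    # Dynamic change: at regular intervals one job is resized.
    if step > 0 and step % 1000 == 0:
        jobs[rng.randrange(len(jobs))] = rng.randint(1, 100)
    # RLS move: reassign one uniformly chosen job to the other machine,
    # accepted whenever the makespan does not get worse.
    i = rng.randrange(len(jobs))
    cand = assign[:]
    cand[i] ^= 1
    if makespan(jobs, cand) <= makespan(jobs, assign):
        assign = cand

gap = makespan(jobs, assign) - sum(jobs) / 2  # distance to perfect balance
print(gap)
```

After each change the solution is still near-optimal for the old instance, so a few hundred local-search steps suffice to rebalance; the final gap to the perfect-balance lower bound stays small.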
Hitting time for the continuous quantum walk
We define the hitting (or absorbing) time for the case of continuous quantum
walks by measuring the walk at random times, according to a Poisson process
with a fixed measurement rate. From this definition we derive an explicit
formula for the hitting time, and explore its dependence on the measurement
rate. As the measurement rate goes to either 0 or infinity the hitting time
diverges; the first divergence reflects the weakness of the measurement, while
the second limit results from the Quantum Zeno effect. Continuous-time quantum
walks, like discrete-time quantum walks but unlike classical random walks, can
have infinite hitting times. We present several conditions for existence of
infinite hitting times, and discuss the connection between infinite hitting
times and graph symmetry.
Comment: 12 pages, 1 figure.
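The paper's general formula is not reproduced here, but the two divergences can be seen on the smallest possible example. The sketch assumes the two-vertex complete graph with the adjacency matrix as Hamiltonian: a walk started at vertex 0 is found at vertex 1 with probability sin²(t) after evolving for time t, and a failed measurement collapses the walker back onto vertex 0. Under these assumptions, Wald's identity gives the closed form (λ² + 4)/(2λ) for the expected hitting time, which diverges both as λ → 0 and as λ → ∞.

```python
import math
import random

def sample_hitting_time(lam, rng):
    """One hitting time of the continuous quantum walk on two vertices:
    measure at Poisson(lam) times; after evolving for an interval dt the
    walker is detected at the target with probability sin(dt)**2, and a
    failed measurement collapses it back onto the start vertex."""
    t = 0.0
    while True:
        dt = rng.expovariate(lam)
        t += dt
        if rng.random() < math.sin(dt) ** 2:
            return t

lam = 2.0
rng = random.Random(0)
n = 50_000
est = sum(sample_hitting_time(lam, rng) for _ in range(n)) / n
exact = (lam * lam + 4.0) / (2.0 * lam)  # closed form for this tiny case
print(est, exact)
```

Trying small and large values of `lam` in the same script reproduces both divergences numerically: the weak-measurement limit and the Zeno limit.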
Average Convergence Rate of Evolutionary Algorithms
In evolutionary optimization, it is important to understand how fast
evolutionary algorithms converge to the optimum per generation, or their
convergence rate. This paper proposes a new measure of the convergence rate,
called average convergence rate. It is a normalised geometric mean of the
reduction ratio of the fitness difference per generation. The calculation of
the average convergence rate is very simple, and it is applicable to most
evolutionary algorithms on both continuous and discrete optimization. A
theoretical study of the average convergence rate is conducted for discrete
optimization. Lower bounds on the average convergence rate are derived. The
limit of the average convergence rate is analysed and then the asymptotic
average convergence rate is proposed.
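The definition turns into a few lines of code. The sketch below assumes a (1+1) EA on OneMax and reads the "normalised geometric mean of the reduction ratio" as ACR_t = 1 − (|f* − f_t| / |f* − f_0|)^(1/t), computed over the recorded fitness gaps of one run; the specific algorithm and problem are demo assumptions.

```python
import random

def onemax(x):
    return sum(x)

def acr(fitness_gaps):
    """Average convergence rate after t generations: one minus the
    normalised geometric mean of the per-generation reduction ratios,
    i.e. 1 - (gap_t / gap_0) ** (1 / t)."""
    t = len(fitness_gaps) - 1
    return 1.0 - (fitness_gaps[-1] / fitness_gaps[0]) ** (1.0 / t)

rng = random.Random(5)
n, optimum = 40, 40
x = [0] * n
gaps = [optimum - onemax(x)]
while gaps[-1] > 0:
    # Standard bit mutation with rate 1/n, elitist acceptance.
    y = [b ^ (rng.random() < 1.0 / n) for b in x]
    if onemax(y) >= onemax(x):
        x = y
    gaps.append(optimum - onemax(x))

# Drop the final zero gap to keep the geometric-mean ratio well defined.
rate = acr(gaps[:-1])
print(rate)  # a value in (0, 1)
```

Because it averages geometrically over all generations, the measure is insensitive to individual stagnating generations, which is what makes it usable across very different evolutionary algorithms.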