COCO: The Experimental Procedure
We present a budget-free experimental setup and procedure for benchmarking
numerical optimization algorithms in a black-box scenario. This procedure can be
applied with the COCO benchmarking platform. We describe initialization of and
input to the algorithm and touch upon the relevance of termination and restarts.
Comment: ArXiv e-prints, arXiv:1603.0877
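The initialization-and-restart scheme the abstract alludes to can be illustrated with a minimal independent-restart loop. Everything here is a hypothetical sketch, not COCO's actual interface: `toy_solver` is a stand-in for a benchmarked algorithm, and the search domain [-5, 5]^dim is assumed.

```python
import random

def toy_solver(x0, max_evals):
    """Stand-in solver (hypothetical): evaluates the sphere function once
    at its starting point. A real benchmarked algorithm would iterate."""
    return sum(v * v for v in x0), 1

def run_with_restarts(solver, budget, target, dim=2, seed=0):
    """Independent-restart loop: re-initialize uniformly at random in
    [-5, 5]^dim and re-run the solver until the target precision is
    reached or the evaluation budget is spent."""
    rng = random.Random(seed)
    evals, best = 0, float("inf")
    while evals < budget and best > target:
        x0 = [rng.uniform(-5, 5) for _ in range(dim)]
        f_best, used = solver(x0, budget - evals)
        evals += used
        best = min(best, f_best)
    return best, evals

best, evals = run_with_restarts(toy_solver, budget=1000, target=1.0)
```

The wrapper terminates on whichever comes first, target precision or budget exhaustion, which is why restart policy matters in a budget-free setup.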
Benchmarking Evolutionary Algorithms For Single Objective Real-valued Constrained Optimization - A Critical Review
Benchmarking plays an important role in the development of novel search
algorithms as well as for the assessment and comparison of contemporary
algorithmic ideas. This paper presents common principles that need to be taken
into account when considering benchmarking problems for constrained
optimization. Current benchmark environments for testing Evolutionary
Algorithms are reviewed in the light of these principles. Along this line,
the reader is provided with an overview of the available problem domains in the
field of constrained benchmarking. Hence, the review supports algorithm
developers with information about the merits and demerits of the available
frameworks.
Comment: This manuscript is a preprint version of an article published in
Swarm and Evolutionary Computation, Elsevier, 2018. Number of pages: 4
Challenges of ELA-guided Function Evolution using Genetic Programming
Within the optimization community, the question of how to generate new
optimization problems has been gaining traction in recent years. Within topics
such as instance space analysis (ISA), the generation of new problems can
provide new benchmarks which are not yet explored in existing research. Beyond
that, this function generation can also be exploited for solving complex
real-world optimization problems. By generating functions with similar
properties to the target problem, we can create a robust test set for algorithm
selection and configuration.
However, the generation of functions with specific target properties remains
challenging. While features exist to capture low-level landscape properties,
they might not always capture the intended high-level features. We show that a
genetic programming (GP) approach guided by these exploratory landscape
analysis (ELA) properties is not always able to find satisfying functions. Our
results suggest that careful consideration of the weighting of landscape
properties, as well as the distance measure used, might be required to evolve
functions that are sufficiently representative of the target landscape.
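As a toy illustration of the ELA-guided fitness the abstract describes, a candidate function can be scored by a weighted distance between its feature vector and the target's. The three sample-based features and the uniform weights below are simplistic stand-ins for real ELA feature sets, and the sphere function is an assumed example target.

```python
import math
import random

def ela_features(f, dim=2, n_samples=200, seed=0):
    """Crude landscape features from a random sample: a stand-in for
    proper ELA features computed by dedicated tooling."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_samples)]
    ys = [f(x) for x in xs]
    mean = sum(ys) / len(ys)
    var = sum((y - mean) ** 2 for y in ys) / len(ys)
    std = math.sqrt(var) or 1.0
    # skewness as a simple distribution-shape feature
    skew = sum(((y - mean) / std) ** 3 for y in ys) / len(ys)
    return [mean, var, skew]

def fitness(candidate, target_feats, weights):
    """Weighted distance in feature space: smaller is better. The choice
    of weights and distance measure is exactly what the abstract flags
    as critical for evolving representative functions."""
    feats = ela_features(candidate)
    return math.sqrt(sum(w * (a - b) ** 2
                         for w, a, b in zip(weights, feats, target_feats)))

sphere = lambda x: sum(v * v for v in x)
target = ela_features(sphere)
d_same = fitness(sphere, target, [1.0, 1.0, 1.0])                      # 0.0
d_shift = fitness(lambda x: sphere(x) + 10, target, [1.0, 1.0, 1.0])   # > 0
```

In a GP loop this fitness would drive selection over evolved expression trees; the shifted candidate scoring worse shows how even one mismatched feature (here the mean) separates landscapes.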
Towards Dynamic Algorithm Selection for Numerical Black-Box Optimization: Investigating BBOB as a Use Case
One of the most challenging problems in evolutionary computation is to select
from its family of diverse solvers one that performs well on a given problem.
This algorithm selection problem is complicated by the fact that different
phases of the optimization process require different search behavior. While
this can partly be controlled by the algorithm itself, there exist large
differences in performance between algorithms. It can therefore be beneficial to
swap the configuration or even the entire algorithm during the run. Long deemed
impractical, recent advances in Machine Learning and in exploratory landscape
analysis give hope that this dynamic algorithm configuration (dynAC) can
eventually be solved by automatically trained configuration schedules. With
this work we aim at promoting research on dynAC, by introducing a simpler
variant that focuses only on switching between different algorithms, not
configurations. Using the rich data from the Black Box Optimization
Benchmark (BBOB) platform, we show that even single-switch dynamic algorithm
selection (dynAS) can potentially result in significant performance gains. We
also discuss key challenges in dynAS, and argue that the BBOB framework can
become a useful tool in overcoming these challenges.
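The single-switch idea can be sketched as a small search over switch points. The per-target costs below are invented numbers, not BBOB data: solver A is assumed strong on early precision targets, solver B on late ones.

```python
# Hypothetical evaluation costs for two solvers to advance from one
# precision target to the next; real dynAS would use measured BBOB data.
targets = [1e1, 1e0, 1e-1, 1e-2, 1e-3]
cost_A = [100, 200, 380, 900, 2000]   # strong early, weak late
cost_B = [300, 350, 400, 450, 500]    # slow start, strong finish

def best_single_switch(cost_a, cost_b):
    """Try every switch point k: run A on targets [0, k), then B on
    [k, end). k == 0 means 'B only', k == len(targets) means 'A only'.
    Returns (best_k, total_cost_at_best_k)."""
    n = len(cost_a)
    best = min(range(n + 1),
               key=lambda k: sum(cost_a[:k]) + sum(cost_b[k:]))
    return best, sum(cost_a[:best]) + sum(cost_b[best:])

k, total = best_single_switch(cost_A, cost_B)   # k == 3, total == 1630
```

Here the switched schedule beats both pure strategies (A only: 3580, B only: 2000), which is the kind of gain the abstract reports from BBOB data.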