Landscape-Aware Fixed-Budget Performance Regression and Algorithm Selection for Modular CMA-ES Variants
Automated algorithm selection promises to support the user in the decisive
task of selecting the most suitable algorithm for a given problem. A common
component of these machine-trained techniques is a regression model that
predicts the performance of a given algorithm on a previously unseen problem
instance. In the context of numerical black-box optimization, such regression
models typically build on exploratory landscape analysis (ELA), which
quantifies several characteristics of the problem. These measures can be used
to train a supervised performance regression model.
First steps towards ELA-based performance regression have been made in the
context of a fixed-target setting. In many applications, however, the user
needs to select an algorithm that performs best within a given budget of
function evaluations. Adopting this fixed-budget setting, we demonstrate that
it is possible to achieve high-quality performance predictions with
off-the-shelf supervised learning approaches, by suitably combining two
differently trained regression models. We test this approach on a very
challenging problem: algorithm selection on a portfolio of very similar
algorithms, which we choose from the family of modular CMA-ES algorithms.
Comment: To appear in Proc. of the Genetic and Evolutionary Computation Conference (GECCO'20).
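A minimal sketch of what such a landscape-aware, fixed-budget regression pipeline could look like, assuming ELA feature vectors have already been computed (e.g., with a tool such as pflacco) and using scikit-learn's RandomForestRegressor; the combination scheme shown here, averaging a model trained on raw performance values with one trained on log-scaled targets, is a hypothetical illustration of "two differently trained regression models", not necessarily the authors' exact setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Placeholder data: X holds ELA feature vectors of training instances,
# y the observed fixed-budget performance (best-so-far f-value).
rng = np.random.default_rng(0)
X = rng.random((200, 10))
y = rng.random(200) * 1e3 + 1e-3

# Model 1: trained on the raw performance values.
m_raw = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Model 2: trained on log-scaled values, which spread fixed-budget
# targets more evenly across orders of magnitude.
m_log = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, np.log10(y))

def predict_performance(x_new):
    """Combine the two differently trained regressors by simple averaging."""
    return 0.5 * (m_raw.predict(x_new) + 10 ** m_log.predict(x_new))

print(predict_performance(rng.random((3, 10))))
```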
Challenges of ELA-guided Function Evolution using Genetic Programming
Within the optimization community, the question of how to generate new
optimization problems has been gaining traction in recent years. Within topics
such as instance space analysis (ISA), the generation of new problems can
provide new benchmarks which are not yet explored in existing research. Beyond
that, this function generation can also be exploited for solving complex
real-world optimization problems. By generating functions with similar
properties to the target problem, we can create a robust test set for algorithm
selection and configuration.
However, the generation of functions with specific target properties remains
challenging. While features exist to capture low-level landscape properties,
they might not always capture the intended high-level features. We show that a
genetic programming (GP) approach guided by these exploratory landscape
analysis (ELA) properties is not always able to find satisfactory functions. Our
results suggest that careful consideration of the weighting of landscape
properties, as well as of the distance measure used, may be required to evolve
functions that are sufficiently representative of the target landscape.
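As a hedged illustration of the role the weighting and distance measure play, the following sketch scores a candidate function by a weighted Euclidean distance between its feature vector and that of the target problem; the ela_features extractor here is a hypothetical stand-in (a real setup would use an ELA library such as pflacco), and the weights are placeholders, not the paper's actual configuration:

```python
import numpy as np

def ela_features(f, dim, n_samples=250, seed=0):
    """Hypothetical stand-in for a real ELA extractor: only a few
    cheap summary statistics of f over a random sample."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, size=(n_samples, dim))
    y = np.apply_along_axis(f, 1, X)
    return np.array([y.mean(), y.std(), np.median(y), y.min(), y.max()])

def landscape_distance(candidate, target_feats, dim, weights):
    """Weighted feature distance, usable as a (to-be-minimized) GP fitness."""
    feats = ela_features(candidate, dim)
    return np.sqrt(np.sum(weights * (feats - target_feats) ** 2))

# Example: how close is a sphere-like candidate to a target ellipsoid?
dim = 5
target = lambda x: np.sum(10 ** np.linspace(0, 3, dim) * x ** 2)
target_feats = ela_features(target, dim)
weights = np.ones(target_feats.size)  # equal weighting; tuning this is the hard part
candidate = lambda x: np.sum(x ** 2)
print(landscape_distance(candidate, target_feats, dim, weights))
```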
Tools for Landscape Analysis of Optimisation Problems in Procedural Content Generation for Games
The term Procedural Content Generation (PCG) refers to the (semi-)automatic
generation of game content by algorithmic means, and its methods are becoming
increasingly popular in game-oriented research and industry. A special class of
these methods, which is commonly known as search-based PCG, treats the given
task as an optimisation problem. Such problems are predominantly tackled by
evolutionary algorithms.
We will demonstrate in this paper that obtaining more information about the
defined optimisation problem can substantially improve our understanding of how
to approach the generation of content. To do so, we present and discuss three
efficient analysis tools, namely diagonal walks, the estimation of high-level
properties, as well as problem similarity measures. We discuss the purpose of
each of the considered methods in the context of PCG and provide guidelines for
the interpretation of the results obtained. In this way, we aim to provide methods
for the comparison of PCG approaches and, eventually, to increase the quality and
practicality of generated content in industry.
Comment: 30 pages, 8 figures, accepted for publication in Applied Soft Computing.
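To make the first of these analysis tools concrete, here is a minimal sketch of a diagonal walk under simplifying assumptions: the box-constrained search space is traversed from its lower to its upper corner along the main diagonal, recording the fitness trace, whose ruggedness and trends can then be inspected. The function name and step count are illustrative choices, not the paper's implementation:

```python
import numpy as np

def diagonal_walk(fitness, lower, upper, n_steps=100):
    """Evaluate `fitness` at n_steps evenly spaced points on the main
    diagonal of the box [lower, upper] and return the fitness trace."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    ts = np.linspace(0.0, 1.0, n_steps)
    points = lower + np.outer(ts, upper - lower)
    return np.array([fitness(p) for p in points])

# Example on a simple 2-D landscape.
trace = diagonal_walk(lambda x: np.sum(np.sin(3 * x) + x ** 2),
                      lower=[-5, -5], upper=[5, 5])
print(trace[:5])
```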
Towards Dynamic Algorithm Selection for Numerical Black-Box Optimization: Investigating BBOB as a Use Case
One of the most challenging problems in evolutionary computation is to select
from its family of diverse solvers one that performs well on a given problem.
This algorithm selection problem is complicated by the fact that different
phases of the optimization process require different search behavior. While
this can partly be controlled by the algorithm itself, large differences in
performance remain between algorithms. It can therefore be beneficial to
swap the configuration or even the entire algorithm during the run. Long deemed
impractical, recent advances in Machine Learning and in exploratory landscape
analysis give hope that this dynamic algorithm configuration (dynAC) can
eventually be solved by automatically trained configuration schedules. With
this work, we aim to promote research on dynAC by introducing a simpler
variant that focuses only on switching between different algorithms, not
configurations. Using the rich data from the Black Box Optimization
Benchmark (BBOB) platform, we show that even single-switch dynamic algorithm
selection (dynAS) can potentially result in significant performance gains. We
also discuss key challenges in dynAS, and argue that the BBOB framework can
become a useful tool in overcoming them.
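A hedged sketch of what single-switch dynamic algorithm selection could look like in code, assuming two solvers with a shared interface and a switch point tau expressed as a fraction of the evaluation budget; the random_search solver is a hypothetical placeholder (in practice the two solvers would be different algorithms, e.g. distinct CMA-ES variants):

```python
import numpy as np

def random_search(f, x0, budget, rng):
    """Hypothetical solver: random perturbations around the incumbent."""
    best_x, best_f = x0, f(x0)
    for _ in range(budget - 1):
        x = best_x + rng.normal(scale=0.5, size=best_x.shape)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

def single_switch(f, x0, budget, tau, solver_a, solver_b, seed=0):
    """Spend tau*budget evaluations with solver_a, then warm-start
    solver_b from the incumbent for the remaining evaluations."""
    rng = np.random.default_rng(seed)
    b1 = int(tau * budget)
    x, _ = solver_a(f, x0, b1, rng)
    return solver_b(f, x, budget - b1, rng)

sphere = lambda x: float(np.sum(x ** 2))
x, fx = single_switch(sphere, np.full(5, 3.0), budget=2000, tau=0.5,
                      solver_a=random_search, solver_b=random_search)
print(fx)
```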
Algorithm Instance Footprint: Separating Easily Solvable and Challenging Problem Instances
In black-box optimization, it is essential to understand why an algorithm
instance works on a set of problem instances while failing on others and
provide explanations of its behavior. We propose a methodology for formulating
an algorithm instance footprint: the set of problem instances that a given
algorithm instance solves easily, together with the set of problem instances it
finds difficult. This behavior of the algorithm instance
is further linked to the landscape properties of the problem instances to
provide explanations of which properties make some problem instances easy or
challenging. The proposed methodology uses meta-representations that embed the
landscape properties of the problem instances and the performance of the
algorithm into the same vector space. These meta-representations are obtained
by training a supervised machine learning regression model for algorithm
performance prediction and applying model explainability techniques to assess
the importance of the landscape features to the performance predictions. Next,
deterministic clustering of the meta-representations demonstrates that they
capture algorithm performance across the problem space and detect regions of
poor and good algorithm performance, together with an explanation of which
landscape properties lead to this behavior.
Comment: To appear at GECCO 202
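A minimal sketch of the meta-representation idea under simplifying assumptions: a random-forest regressor is trained to predict algorithm performance from ELA features, per-instance meta-representations are formed by weighting each instance's features with the model's global feature importances (one simple explainability proxy; the paper's exact technique may differ), and a deterministic agglomerative clustering then groups instances into regions of similar behavior. All data here is synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
X = rng.random((300, 12))                             # placeholder ELA features
y = X[:, 0] * 2 + X[:, 3] + rng.normal(0, 0.1, 300)   # placeholder performance

# Supervised performance-regression model on landscape features.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Meta-representation: landscape features weighted by their importance
# to the performance predictions (a simple explainability proxy).
meta = X * model.feature_importances_

# Deterministic clustering of meta-representations into behavior regions.
labels = AgglomerativeClustering(n_clusters=3).fit_predict(meta)

# Mean predicted performance per region, separating easy from
# challenging instances for this (synthetic) algorithm instance.
for c in range(3):
    print(c, model.predict(X[labels == c]).mean())
```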