Towards Dynamic Algorithm Selection for Numerical Black-Box Optimization: Investigating BBOB as a Use Case
One of the most challenging problems in evolutionary computation is to select
from its family of diverse solvers one that performs well on a given problem.
This algorithm selection problem is complicated by the fact that different
phases of the optimization process require different search behavior. While
this can partly be controlled by the algorithm itself, large performance
differences between algorithms remain. It can therefore be beneficial to
swap the configuration or even the entire algorithm during the run. Long deemed
impractical, recent advances in Machine Learning and in exploratory landscape
analysis give hope that this dynamic algorithm configuration~(dynAC) can
eventually be solved by automatically trained configuration schedules. With
this work we aim at promoting research on dynAC, by introducing a simpler
variant that focuses only on switching between different algorithms, not
configurations. Using the rich data from the Black Box Optimization
Benchmark~(BBOB) platform, we show that even single-switch dynamic algorithm
selection (dynAS) can potentially result in significant performance gains. We
also discuss key challenges in dynAS, and argue that the BBOB framework can
become a useful tool in overcoming these challenges.
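The single-switch idea described above can be illustrated with a minimal toy sketch: run one algorithm for a fixed fraction of the evaluation budget, then warm-start a second algorithm from the incumbent. The components here (`single_switch`, a random-search "global" stage, a (1+1)-style "local" stage) are our own illustrative stand-ins, not the BBOB solver portfolio studied in the paper.

```python
import random

def sphere(x):
    # Toy objective: minimum 0 at the origin.
    return sum(xi * xi for xi in x)

def global_random_search(f, dim, budget, bounds=(-5.0, 5.0)):
    # Stage 1: uniform random sampling over the box constraint.
    best_x, best_f = None, float("inf")
    for _ in range(budget):
        x = [random.uniform(*bounds) for _ in range(dim)]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

def local_search(f, x0, budget, step=0.5):
    # Stage 2: (1+1)-style hill climber, shrinking the step on failure.
    x, fx = list(x0), f(x0)
    for _ in range(budget):
        cand = [xi + random.gauss(0.0, step) for xi in x]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
        else:
            step *= 0.98
    return x, fx

def single_switch(f, dim, total_budget, switch_frac=0.3):
    # Spend a fraction of the budget on the global algorithm, then
    # warm-start the local algorithm from the best point found so far.
    b1 = int(total_budget * switch_frac)
    x, _ = global_random_search(f, dim, b1)
    return local_search(f, x, total_budget - b1)

random.seed(0)
x_best, f_best = single_switch(sphere, dim=5, total_budget=2000)
```

The switch point (`switch_frac`) is the single configuration choice in this sketch; the paper's question is when, and to which algorithm, such a switch pays off.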
CMA-ES with Learning Rate Adaptation: Can CMA-ES with Default Population Size Solve Multimodal and Noisy Problems?
The covariance matrix adaptation evolution strategy (CMA-ES) is one of the
most successful methods for solving black-box continuous optimization problems.
One practically useful aspect of the CMA-ES is that it can be used without
hyperparameter tuning. However, the hyperparameter settings still have a
considerable impact, especially for difficult tasks such as solving multimodal
or noisy problems. In this study, we investigate whether the CMA-ES with
default population size can solve multimodal and noisy problems. To perform
this investigation, we develop a novel learning rate adaptation mechanism for
the CMA-ES, such that the learning rate is adapted so as to maintain a constant
signal-to-noise ratio. We investigate the behavior of the CMA-ES with the
proposed learning rate adaptation mechanism through numerical experiments, and
compare the results with those obtained for the CMA-ES with a fixed learning
rate. The results demonstrate that, when the proposed learning rate adaptation
is used, the CMA-ES with default population size works well on multimodal
and/or noisy problems, without the need for extremely expensive learning rate
tuning.
Comment: Nominated for the best paper of GECCO'23 ENUM Track. We have
corrected the error of Eq. (7).
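The core idea, adapting the learning rate so that the update's signal-to-noise ratio stays near a target, can be sketched in a much-simplified form. This is not the paper's exact rule for the CMA-ES: it applies the same principle to plain stochastic gradient descent on a one-dimensional noisy quadratic, and the names (`snr_adapted_sgd`, `target_snr`) and the multiplicative adaptation factors are our own illustrative choices.

```python
import random

def noisy_grad(x, noise=1.0):
    # Gradient of f(x) = x^2 / 2, plus Gaussian observation noise.
    return x + random.gauss(0.0, noise)

def snr_adapted_sgd(x0, steps=3000, eta=0.1, target_snr=1.0, beta=0.05):
    # Track running mean m and second moment v of the stochastic update.
    # The empirical signal-to-noise ratio is |m| / sqrt(v - m^2); the
    # learning rate is nudged up when the SNR exceeds the target
    # (updates are informative) and down otherwise (updates are noise).
    x, m, v = x0, 0.0, 1.0
    for _ in range(steps):
        g = noisy_grad(x)
        m = (1 - beta) * m + beta * g
        v = (1 - beta) * v + beta * g * g
        var = max(v - m * m, 1e-12)
        snr = abs(m) / var ** 0.5
        eta *= 1.01 if snr > target_snr else 0.99
        eta = min(max(eta, 1e-5), 1.0)
        x -= eta * g
    return x, eta

random.seed(1)
x_final, eta_final = snr_adapted_sgd(5.0)
```

The qualitative behavior matches the motivation above: far from the optimum the SNR is high and the learning rate grows, while near the optimum noise dominates and the learning rate shrinks, avoiding manual tuning.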
Two-stage methods for multimodal optimization
For many practical optimization problems it seems advisable to seek not only a single optimal solution, but a diverse set of good solutions.
The rationale behind this opinion is that a decision maker may want to consider additional criteria, which are not included in the optimization problem itself.
Reasons for not including them are for example that the expert knowledge constituting the additional criteria has not been formalized or that the evaluation of the additional criteria is more or less subjective.
The research area concerned with single-objective problems that require identifying a set of multiple solutions is currently called multimodal optimization.
In this work, we apply two-stage optimization algorithms, which consist of alternating global and local searches, to these problems.
These algorithms are attractive because of their simplicity and their demonstrated performance on multimodal problems.
The main focus is on improving the global stages, as local search is already a thoroughly investigated topic.
This is done by considering previously sampled points and found optima in the global sampling, thus obtaining a super-uniform distribution.
The approach is based on maximizing the minimal distance in a point set, while boundary effects of the box-constrained search space are avoided by correction methods.
Experiments confirm the superiority of this algorithm over random uniform sampling and other methods in various settings of multimodal optimization.
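The maximin criterion at the heart of the global stage can be sketched minimally: among uniformly drawn candidates, keep the one whose minimal distance to all previously evaluated points is largest. This sketch keeps only the distance criterion and omits the boundary-effect corrections described above; the function names and candidate count are illustrative assumptions, not the thesis' algorithm.

```python
import random

def min_dist(p, pts):
    # Smallest Euclidean distance from p to any point in pts.
    return min(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 for q in pts)

def maximin_sample(existing, dim, n_candidates=200, bounds=(0.0, 1.0)):
    # Greedy maximin: draw candidates uniformly in the box and return
    # the one farthest from all previously evaluated points.
    best, best_d = None, -1.0
    for _ in range(n_candidates):
        c = [random.uniform(*bounds) for _ in range(dim)]
        d = min_dist(c, existing)
        if d > best_d:
            best, best_d = c, d
    return best

random.seed(2)
points = [[random.random(), random.random()] for _ in range(5)]
for _ in range(20):
    points.append(maximin_sample(points, dim=2))
```

Each new sample is pushed away from the archive of visited points, which is what spreads the global stage's restarts across distinct basins of attraction.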