
    Limitations of benchmark sets and landscape features for algorithm selection and performance prediction

    Benchmark sets and landscape features are used to test algorithms and to train models that perform algorithm selection or configuration. These approaches rest on the assumption that algorithms perform similarly on problems with similar feature sets. In this paper, we test different configurations of differential evolution (DE) against the BBOB set. We then use the landscape features of those problems and a case-based reasoning approach for DE configuration selection. We show that, although this method obtains good results on BBOB problems, it fails to select the best configurations when facing a new set of optimisation problems with a distinct array of landscape features. This demonstrates the limitations of the BBOB set for algorithm selection. Moreover, by examining the relationship between features and algorithm performance, we show that there is no correlation between the feature space and the performance space. We conclude by identifying some important open questions raised by this work.
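
    The retrieval step described above can be sketched as a nearest-neighbour lookup in landscape-feature space: each stored case pairs an ELA feature vector with the DE configuration that performed best on that problem. A minimal Python sketch follows; the feature values, configuration fields, and similarity measure are illustrative assumptions, not the paper's actual case base.

        import numpy as np

        # Illustrative case base: ELA feature vectors of known problems,
        # each paired with the DE configuration that performed best on it.
        case_features = np.array([
            [0.12, 0.80, 3.4],
            [0.55, 0.10, 1.1],
            [0.90, 0.45, 2.7],
        ])
        case_configs = [
            {"F": 0.5, "CR": 0.9, "mutation": "rand/1/bin"},
            {"F": 0.8, "CR": 0.2, "mutation": "best/1/bin"},
            {"F": 0.6, "CR": 0.7, "mutation": "current-to-best/1"},
        ]

        def select_configuration(new_features):
            """Retrieve the DE configuration of the most similar stored case."""
            # Standardise each feature so no single one dominates the distance.
            mu, sigma = case_features.mean(axis=0), case_features.std(axis=0)
            cases = (case_features - mu) / sigma
            query = (np.asarray(new_features) - mu) / sigma
            nearest = np.argmin(np.linalg.norm(cases - query, axis=1))
            return case_configs[nearest]

        print(select_configuration([0.50, 0.15, 1.3]))

    The paper's negative result concerns exactly this retrieval step: when a query problem's features lie outside the region covered by the BBOB-derived cases, the nearest case is no longer a reliable guide to the best configuration.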

    Landscape-Aware Fixed-Budget Performance Regression and Algorithm Selection for Modular CMA-ES Variants

    Automated algorithm selection promises to support the user in the decisive task of selecting a most suitable algorithm for a given problem. A common component of these machine-trained techniques is a regression model that predicts the performance of a given algorithm on a previously unseen problem instance. In the context of numerical black-box optimization, such regression models typically build on exploratory landscape analysis (ELA), which quantifies several characteristics of the problem. These measures can be used to train a supervised performance regression model. First steps towards ELA-based performance regression have been made in the context of a fixed-target setting. In many applications, however, the user needs to select an algorithm that performs best within a given budget of function evaluations. Adopting this fixed-budget setting, we demonstrate that it is possible to achieve high-quality performance predictions with off-the-shelf supervised learning approaches by suitably combining two differently trained regression models. We test this approach on a very challenging problem: algorithm selection on a portfolio of very similar algorithms, which we choose from the family of modular CMA-ES algorithms. Comment: To appear in Proc. of the Genetic and Evolutionary Computation Conference (GECCO'20).
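
    The abstract does not spell out how the two regression models are combined. The sketch below assumes one model trained on raw fixed-budget precision and one on log-scaled precision, merged by a geometric mean; the data, model choice, and combination rule are assumptions for illustration only.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        # Illustrative training data: ELA feature vectors per problem instance
        # (X) and the precision one algorithm reaches after a fixed budget (y).
        rng = np.random.default_rng(0)
        X = rng.random((200, 10))
        y = 10.0 ** rng.uniform(-8, 2, size=200)  # spans many orders of magnitude

        model_raw = RandomForestRegressor(random_state=0).fit(X, y)
        model_log = RandomForestRegressor(random_state=0).fit(X, np.log10(y))

        def predict_precision(features):
            """Combine raw-scale and log-scale predictions (illustrative rule)."""
            p_raw = np.clip(model_raw.predict(features), 1e-12, None)
            p_log = 10.0 ** model_log.predict(features)
            return np.sqrt(p_raw * p_log)  # geometric mean of both estimates

        print(predict_precision(X[:3]))

    Fitting one such per-algorithm model for every portfolio member turns this into an algorithm selector: query all models on the new instance's features and pick the algorithm with the best predicted fixed-budget precision.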

    Towards Dynamic Algorithm Selection for Numerical Black-Box Optimization: Investigating BBOB as a Use Case

    One of the most challenging problems in evolutionary computation is to select, from its family of diverse solvers, one that performs well on a given problem. This algorithm selection problem is complicated by the fact that different phases of the optimization process require different search behavior. While this can partly be controlled by the algorithm itself, there exist large differences in performance between algorithms. It can therefore be beneficial to swap the configuration or even the entire algorithm during the run. Long deemed impractical, recent advances in machine learning and in exploratory landscape analysis give hope that this dynamic algorithm configuration (dynAC) can eventually be solved by automatically trained configuration schedules. With this work we aim to promote research on dynAC by introducing a simpler variant that focuses only on switching between different algorithms, not configurations. Using the rich data from the Black Box Optimization Benchmark (BBOB) platform, we show that even single-switch dynamic algorithm selection (dynAS) can potentially result in significant performance gains. We also discuss key challenges in dynAS, and argue that the BBOB framework can become a useful tool in overcoming these challenges.
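
    The single-switch idea can be illustrated with a toy pipeline: an explorative solver consumes the first part of the evaluation budget, and a second solver is warm-started from the incumbent solution. The two stand-in algorithms, the test function, and the switch point tau below are hypothetical placeholders, not the BBOB portfolio studied in the paper.

        import numpy as np

        def random_search(f, x0, budget, rng):
            """Exploration phase: uniform sampling (stand-in for algorithm A)."""
            best_x, best_y = x0, f(x0)
            for _ in range(budget):
                x = rng.uniform(-5, 5, size=x0.shape)
                y = f(x)
                if y < best_y:
                    best_x, best_y = x, y
            return best_x, best_y

        def hill_climber(f, x0, budget, rng, step=0.1):
            """Exploitation phase: (1+1)-style search (stand-in for algorithm B)."""
            best_x, best_y = x0, f(x0)
            for _ in range(budget):
                x = best_x + step * rng.standard_normal(x0.shape)
                y = f(x)
                if y < best_y:
                    best_x, best_y = x, y
            return best_x, best_y

        def single_switch(f, dim, budget, tau, rng):
            """Run A for tau * budget evaluations, then warm-start B from A's best."""
            x0 = rng.uniform(-5, 5, size=dim)
            b1 = int(tau * budget)
            x, _ = random_search(f, x0, b1, rng)
            return hill_climber(f, x, budget - b1, rng)

        sphere = lambda x: float(np.sum(x ** 2))
        rng = np.random.default_rng(42)
        print(single_switch(sphere, dim=5, budget=2000, tau=0.3, rng=rng))

    The challenges mentioned above begin exactly here: choosing the switch point and warm-starting the second algorithm's full internal state (not just the incumbent point, as in this sketch) are the hard parts of dynAS.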