7 research outputs found

    Tuning as a means of assessing the benefits of new ideas in interplay with existing algorithmic modules

    Introducing new algorithmic ideas is a key part of the continuous improvement of existing optimization algorithms. However, when a new component is introduced into an existing algorithm, assessing its potential benefits is a challenging task. Often, the component is added to a default implementation of the underlying algorithm and compared against a limited set of other variants. Such an assessment ignores any potential interplay with other algorithmic ideas that share the same base algorithm, which is critical for understanding the exact contributions being made. We explore a more extensive procedure, which uses hyperparameter tuning as a means of assessing the benefits of new algorithmic components. This allows for a more robust analysis, not only by focusing on the impact on performance but also by investigating how this performance is achieved. We implement our suggestion in the context of the Modular CMA-ES framework, which was redesigned and extended to include some new modules and several new options for existing modules, mostly focused on the step-size adaptation method. Our analysis highlights the differences between these new modules and identifies the situations in which they have the largest contribution.
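
    As a minimal illustration of the assessment protocol described above (not the paper's Modular CMA-ES code), the sketch below tunes a toy (1+1)-ES both with and without a stand-in "new module" (an alternative step-size decrease), and compares the tuned variants instead of a single default configuration. The ES, the module, and the tuning grid are all hypothetical assumptions.

```python
# Illustrative sketch only, not the paper's Modular CMA-ES code: assess a new
# component by tuning each variant and comparing the *tuned* variants, rather
# than bolting the component onto one default configuration. The toy (1+1)-ES,
# the stand-in "new module", and the tuning grid are all hypothetical.
import itertools
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def one_plus_one_es(f, dim, budget, sigma0, incr, new_module, rng):
    """Minimal (1+1)-ES with a 1/5th-success-style step-size rule."""
    x = rng.standard_normal(dim)
    fx, sigma = f(x), sigma0
    for _ in range(budget - 1):
        y = x + sigma * rng.standard_normal(dim)
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy
            sigma *= incr                 # success: enlarge the step size
        elif new_module:
            sigma *= 0.9                  # hypothetical alternative shrink rule
        else:
            sigma *= incr ** -0.25        # classic 1/5th-rule shrink
    return fx

def tune(new_module, grid, dim=10, budget=2000, n_runs=5):
    """Best mean objective value over a small hyperparameter grid."""
    best_score, best_cfg = np.inf, None
    for sigma0, incr in grid:
        score = float(np.mean([
            one_plus_one_es(sphere, dim, budget, sigma0, incr,
                            new_module, np.random.default_rng(seed))
            for seed in range(n_runs)]))
        if score < best_score:
            best_score, best_cfg = score, (sigma0, incr)
    return best_score, best_cfg

grid = list(itertools.product([0.5, 1.0, 2.0], [1.1, 1.3, 1.5]))
for new_module in (False, True):
    score, cfg = tune(new_module, grid)
    print(f"new module={new_module}: best tuned config {cfg}, mean f {score:.3g}")
```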

    Explorative data analysis of time series based algorithm features of CMA-ES variants


    Towards Dynamic Algorithm Selection for Numerical Black-Box Optimization: Investigating BBOB as a Use Case

    One of the most challenging problems in evolutionary computation is to select, from its family of diverse solvers, one that performs well on a given problem. This algorithm selection problem is complicated by the fact that different phases of the optimization process require different search behavior. While this can partly be controlled by the algorithm itself, there remain large differences in performance between algorithms. It can therefore be beneficial to swap the configuration, or even the entire algorithm, during the run. Recent advances in machine learning and in exploratory landscape analysis give hope that this dynamic algorithm configuration (dynAC), long deemed impractical, can eventually be solved by automatically trained configuration schedules. With this work we aim to promote research on dynAC by introducing a simpler variant that focuses only on switching between different algorithms, not configurations. Using the rich data from the Black Box Optimization Benchmark (BBOB) platform, we show that even single-switch dynamic algorithm selection (dynAS) can potentially result in significant performance gains. We also discuss key challenges in dynAS, and argue that the BBOB framework can become a useful tool in overcoming these challenges.
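
    The following is a hedged sketch of the single-switch idea, with illustrative stand-ins rather than the paper's BBOB pipeline: spend part of the budget on a global explorer, then warm-start a local refiner from the incumbent. The choice of solvers (random search, then SciPy's Nelder-Mead), the switch point, and the test function are all assumptions.

```python
# Sketch of single-switch dynamic algorithm selection (dynAS): one global
# phase, one switch, one warm-started local phase. All concrete choices here
# (solvers, switch fraction, test function) are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    x = np.asarray(x)
    return 10 * x.size + float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def single_switch(f, dim, budget, switch_frac=0.5, seed=0):
    rng = np.random.default_rng(seed)
    # Phase 1: random search in [-5, 5]^dim until the switch point.
    n1 = int(switch_frac * budget)
    X = rng.uniform(-5, 5, size=(n1, dim))
    fs = np.apply_along_axis(f, 1, X)
    i = int(np.argmin(fs))
    x_best, f_best = X[i], fs[i]
    # Phase 2: hand the incumbent to Nelder-Mead for the remaining budget.
    res = minimize(f, x_best, method="Nelder-Mead",
                   options={"maxfev": budget - n1})
    return (res.x, res.fun) if res.fun < f_best else (x_best, f_best)

x, fx = single_switch(rastrigin, dim=5, budget=5000)
print(f"best f after one switch: {fx:.4g}")
```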

    Projection-Based Restricted Covariance Matrix Adaptation for High Dimension

    We propose a novel variant of the covariance matrix adaptation evolution strategy (CMA-ES) that uses a covariance matrix parameterized with a smaller number of parameters. The motivation for a restricted covariance matrix is twofold. First, it has lower internal time and space complexity, which is desirable when optimizing a function over a high-dimensional search space. Second, it requires fewer function evaluations to adapt the covariance matrix, provided the restricted covariance matrix is rich enough to express the variable dependencies of the problem. In this paper we derive a computationally efficient way to update the restricted covariance matrix, where the model richness of the covariance matrix is controlled by an integer and the internal complexity per function evaluation is linear in this integer times the dimension, compared to quadratic in the dimension for the CMA-ES. We prove that the proposed algorithm is equivalent to sep-CMA-ES if the covariance matrix is restricted to a diagonal matrix, and to the original CMA-ES if the matrix is not restricted. Experimental results reveal the class of functions solved efficiently depending on the model richness of the covariance matrix, as well as the speedup over the CMA-ES.
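
    The abstract does not spell out the parameterization, but one form consistent with it is C = D(I + VV^T)D with diagonal D and a d-by-k matrix V, where the integer k controls model richness (k = 0 recovers the diagonal model of sep-CMA-ES; larger k approaches the full model). The sketch below, under that assumption, shows why sampling costs O(dk) per draw without ever forming the d-by-d matrix.

```python
# Assumed restricted model: C = D (I + V V^T) D, D diagonal, V of shape (d, k).
# Key fact: if z ~ N(0, I_d) and w ~ N(0, I_k) are independent, then z + V w
# has covariance I + V V^T, so x = m + D (z + V w) ~ N(m, C) in O(d*k) time.
import numpy as np

def sample_restricted(m, d_diag, V, n, rng):
    """Draw n samples from N(m, D (I + V V^T) D) in O(n * d * k)."""
    dim, k = V.shape
    Z = rng.standard_normal((n, dim))     # z ~ N(0, I_d)
    W = rng.standard_normal((n, k))       # w ~ N(0, I_k)
    return m + d_diag * (Z + W @ V.T)     # x = m + D (z + V w)

rng = np.random.default_rng(0)
dim, k = 200, 2
m, d_diag = np.zeros(dim), np.full(dim, 0.5)
V = rng.standard_normal((dim, k)) / np.sqrt(dim)
X = sample_restricted(m, d_diag, V, 10_000, rng)
# Empirical check on the first three coordinates against the model covariance.
C_emp = np.cov(X[:, :3], rowvar=False)
C_true = np.diag(d_diag[:3]) @ (np.eye(3) + V[:3] @ V[:3].T) @ np.diag(d_diag[:3])
print(np.round(C_emp, 3), np.round(C_true, 3), sep="\n")
```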

    Benchmarking IPOP-CMA-ES-TPA and IPOP-CMA-ES-MSR on the BBOB Noiseless Testbed

    We benchmark IPOP-CMA-ES, a restart Covariance Matrix Adaptation Evolution Strategy with increasing population size, with two step-size adaptation mechanisms, Two-Point Step-Size Adaptation (TPA) and Median Success Rule (MSR), on the BBOB noiseless testbed. We then compare IPOP-CMA-ES-TPA and IPOP-CMA-ES-MSR to IPOP-CMA-ES with the standard step-size adaptation mechanism, Cumulative Step-size Adaptation (CSA). We conduct experiments with a budget of 10^5 times the dimension of the search space. As expected, the algorithms perform alike on most functions. However, we observe some relevant differences, the most significant being on the attractive sector function, where IPOP-CMA-ES-TPA and IPOP-CMA-ES-CSA outperform IPOP-CMA-ES-MSR, and on the Rastrigin function, where IPOP-CMA-ES-MSR is the only algorithm to solve the function in all tested dimensions. We also observe that at least one of the three algorithms is comparable to the best BBOB-09 artificial algorithm on 13 functions.
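
    For intuition about the mechanisms being benchmarked, here is a hedged, simplified sketch of a median-success-rule (MSR) style step-size update inside a toy (mu/mu, lambda)-ES. The 30% reference quantile, damping, and smoothing constants are our assumptions, not the exact IPOP-CMA-ES-MSR settings.

```python
# Simplified MSR-style update: compare the current offspring against a fitness
# quantile of the previous generation and adapt sigma from the success count.
# Constants here are illustrative, not the benchmarked implementation's.
import numpy as np

def es_with_msr(f, dim, budget, lam=12, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    mu = lam // 2
    m = rng.standard_normal(dim)
    prev_fit = np.sort([f(m + sigma * rng.standard_normal(dim))
                        for _ in range(lam)])
    s, c, damp = 0.0, 0.3, 2.0
    evals = lam
    while evals + lam <= budget:
        X = m + sigma * rng.standard_normal((lam, dim))
        fit = np.array([f(x) for x in X])
        evals += lam
        m = X[np.argsort(fit)[:mu]].mean(axis=0)   # recombine the mu best
        # Success statistic: offspring beating the 30%-quantile fitness of the
        # previous generation (z = 0 is the neutral point of this toy rule).
        k_succ = int(np.sum(fit < prev_fit[int(0.3 * lam)]))
        z = 2.0 * k_succ / lam - 1.0
        s = (1 - c) * s + c * z                    # smooth the statistic
        sigma *= np.exp(s / damp)                  # grow on success, else shrink
        prev_fit = np.sort(fit)
    return f(m)

print(f"final sphere value: {es_with_msr(lambda x: float(np.sum(x**2)), 10, 6000):.3g}")
```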