    COCO: The Experimental Procedure

    We present a budget-free experimental setup and procedure for benchmarking numerical optimization algorithms in a black-box scenario. This procedure can be applied with the COCO benchmarking platform. We describe initialization of and input to the algorithm and touch upon the relevance of termination and restarts.
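
    The restart handling sketched in this abstract can be illustrated in a few lines of Python. This is a minimal sketch, not COCO's actual code; `solve` is a hypothetical stand-in for any black-box solver that returns its best point, that point's value, and the evaluations it consumed.

    ```python
    import numpy as np

    def run_with_restarts(f, dim, lower, upper, budget, solve):
        """Budget-free procedure approximated with a large budget: run the
        solver, then restart it independently from a fresh uniform random
        initial point until the overall evaluation budget is exhausted."""
        evals_used, best = 0, (None, float("inf"))
        while evals_used < budget:
            x0 = lower + (upper - lower) * np.random.rand(dim)  # random init
            x, f_x, evals = solve(f, x0, budget - evals_used)
            if evals == 0:        # guard against a solver that refuses to run
                break
            evals_used += evals
            if f_x < best[1]:
                best = (x, f_x)
        return best
    ```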

    COCO: Performance Assessment

    We present an any-time performance assessment for benchmarking numerical optimization algorithms in a black-box scenario, applied within the COCO benchmarking platform. The performance assessment is based on runtimes measured in number of objective function evaluations to reach one or several quality indicator target values. We argue that runtime is the only available measure with a generic, meaningful, and quantitative interpretation. We discuss the choice of the target values, runlength-based targets, and the aggregation of results by using simulated restarts, averages, and empirical distribution functions.
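
    The simulated-restart idea mentioned above can be sketched compactly: recorded runs are drawn uniformly with replacement, and the evaluation counts of unsuccessful draws are summed until a successful draw occurs. A minimal sketch, assuming `runtimes` holds the measured evaluation counts and `successes` flags whether each run reached the target:

    ```python
    import random

    def simulated_restart_runtime(runtimes, successes):
        """One simulated runtime under independent restarts: draw recorded
        runs with replacement and accumulate their evaluation counts until
        a successful run is drawn."""
        assert any(successes), "needs at least one successful run"
        total = 0
        while True:
            i = random.randrange(len(runtimes))
            total += runtimes[i]
            if successes[i]:
                return total
    ```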

    Comparing Mirrored Mutations and Active Covariance Matrix Adaptation in the IPOP-CMA-ES on the Noiseless BBOB Testbed

    This paper investigates two variants of the well-known Covariance Matrix Adaptation Evolution Strategy (CMA-ES). Active covariance matrix adaptation allows for negative weights in the covariance matrix update rule such that "bad" steps are (actively) taken into account when updating the covariance matrix of the sample distribution. On the other hand, mirrored mutations via selective mirroring also take the "bad" steps into account. In this case, they are first evaluated when taken in the opposite direction (mirrored) and then considered for regular selection. In this study, we investigate the difference between the performance of the two variants empirically on the noiseless BBOB testbed. The CMA-ES with selectively mirrored mutations outperforms the active CMA-ES only on the sphere function, while the active variant statistically significantly outperforms mirrored mutations on 10 of 24 functions in several dimensions.
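
    A rough sketch of selective mirroring as described above, with hypothetical names (`lam_mirr`, `C_sqrt`) and the usual CMA-ES bookkeeping omitted: the worst offspring are re-evaluated in the opposite (mirrored) direction before regular selection.

    ```python
    import numpy as np

    def offspring_with_selective_mirroring(mean, sigma, C_sqrt, lam, lam_mirr, f):
        """Sample lam offspring, then mirror the lam_mirr worst of them and
        re-evaluate the mirrored points; regular selection follows."""
        steps = [C_sqrt @ np.random.randn(len(mean)) for _ in range(lam)]
        xs = [mean + sigma * y for y in steps]
        values = [f(x) for x in xs]
        for i in np.argsort(values)[-lam_mirr:]:  # indices of the worst offspring
            steps[i] = -steps[i]                  # take the "bad" step mirrored
            xs[i] = mean + sigma * steps[i]
            values[i] = f(xs[i])
        return xs, steps, values                  # input to (mu, lambda)-selection
    ```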

    On the Effect of Mirroring in the IPOP Active CMA-ES on the Noiseless BBOB Testbed

    Mirrored mutations and active covariance matrix adaptation are two recent ideas to improve the well-known covariance matrix adaptation evolution strategy (CMA-ES)---a state-of-the-art algorithm for numerical optimization. It turns out that both mechanisms can be implemented simultaneously. In this paper, we investigate the impact of mirrored mutations on the so-called IPOP active CMA-ES. We find that additional mirrored mutations improve the IPOP active CMA-ES statistically significantly, but by only a small margin, on several functions, while no statistically significant performance decline is ever observed. Furthermore, experiments on different function instances, with some algorithm parameters and stopping criteria changed, reveal essentially the same results.
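
    For reference, the IPOP restart scheme underlying this and the following studies can be sketched as a loop that multiplies the population size at each restart; `cma_es_run` is a hypothetical stand-in for one complete CMA-ES run returning its best f-value and the evaluations it used.

    ```python
    import math

    def ipop_loop(f, dim, budget, cma_es_run, inc_factor=2):
        """IPOP sketch: restart CMA-ES with the population size multiplied
        by inc_factor after each run that terminates without success."""
        lam = 4 + int(3 * math.log(dim))     # default population size
        evals_used, best_f = 0, float("inf")
        while evals_used < budget:
            f_run, evals = cma_es_run(f, dim, lam, budget - evals_used)
            evals_used += max(evals, 1)      # avoid stalling on a zero-eval run
            best_f = min(best_f, f_run)
            lam *= inc_factor                # increase lambda for the restart
        return best_f
    ```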

    On the Impact of a Small Initial Population Size in the IPOP Active CMA-ES with Mirrored Mutations on the Noiseless BBOB Testbed

    Active covariance matrix adaptation and mirrored mutations have been independently proposed as improved variants of the well-known optimization algorithm Covariance Matrix Adaptation Evolution Strategy (CMA-ES) for numerical optimization. This paper investigates the impact of the algorithm's population size when both active covariance matrix adaptation and mirrored mutations are used in the CMA-ES. To this end, we compare the CMA-ES with standard population size λ = 4 + ⌊3 log(D)⌋, where D is the problem dimension, against a version with half this population size.
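
    The two compared settings follow directly from the formula above; a small sketch tabulating the default and the halved population size (the exact rounding of the halved value is an assumption here):

    ```python
    import math

    def default_lambda(dim):
        # standard CMA-ES population size: lambda = 4 + floor(3 * ln(dim))
        return 4 + int(3 * math.log(dim))

    for dim in (2, 3, 5, 10, 20, 40):
        lam = default_lambda(dim)
        print(f"D={dim:2d}  lambda={lam:2d}  half={lam // 2:2d}")
    ```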

    Benchmarking the Local Metamodel CMA-ES on the Noiseless BBOB'2013 Test Bed

    This paper evaluates the performance of a variant of the local-metamodel CMA-ES (lmm-CMA) in the BBOB 2013 expensive setting. The lmm-CMA is a surrogate variant of the CMA-ES algorithm. Function evaluations are saved by building, with weighted regression, full quadratic metamodels to estimate the candidate solutions' function values. The quality of the approximation is appraised by checking how much the predicted rank changes when evaluating a fraction of the candidate solutions on the original objective function. The results are compared with the CMA-ES without metamodeling and with previously benchmarked algorithms, namely BFGS, NEWUOA and saACM. It turns out that the additional metamodeling improves the performance of CMA-ES on almost all BBOB functions, while giving significantly worse results only on the attractive sector function. Over all functions, the performance is comparable with saACM, and the lmm-CMA often outperforms NEWUOA and BFGS starting from about 2D^2 function evaluations, with D being the search space dimension.
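
    The metamodel construction described above amounts to a weighted least-squares fit of a full quadratic model. A minimal sketch, assuming the weights are supplied by the caller (in lmm-CMA they typically decrease with distance from the current search point):

    ```python
    import numpy as np

    def quadratic_features(x):
        """Features of a full quadratic model: 1, x_i, and x_i*x_j for i <= j."""
        d = len(x)
        quad = [x[i] * x[j] for i in range(d) for j in range(i, d)]
        return np.concatenate(([1.0], x, quad))

    def fit_weighted_quadratic(X, y, w):
        """Weighted least-squares fit; returns a callable metamodel."""
        Phi = np.array([quadratic_features(x) for x in X])
        sw = np.sqrt(np.asarray(w))
        coef, *_ = np.linalg.lstsq(sw[:, None] * Phi, sw * np.asarray(y),
                                   rcond=None)
        return lambda x: quadratic_features(x) @ coef
    ```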

    On the Impact of Active Covariance Matrix Adaptation in the CMA-ES With Mirrored Mutations and Small Initial Population Size on the Noiseless BBOB Testbed

    Mirrored mutations and active covariance matrix adaptation are two techniques that have been introduced into the well-known CMA-ES algorithm for numerical optimization. Here, we investigate the impact of active covariance matrix adaptation in the IPOP-CMA-ES with mirrored mutations and a small initial population size. Active covariance matrix adaptation improves the performance on 8 of the 24 benchmark functions of the noiseless BBOB testbed. The effect is largest on the ill-conditioned functions, with the largest improvement on the discus function, where the expected runtime is more than halved. On the other hand, no statistically significant adverse effects can be observed.
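
    As a rough illustration of active covariance matrix adaptation, the rank-mu update with negative weights for the worst-ranked steps can be sketched as follows; all learning-rate and normalization details of the real algorithm are omitted, so this is only a schematic.

    ```python
    import numpy as np

    def active_rank_mu_update(C, steps, weights, c_mu):
        """Schematic active rank-mu update: `steps` are the ranked mutation
        steps y_k = (x_k - mean) / sigma, best first; `weights` are positive
        for the best steps and negative for the worst ("bad") ones."""
        rank_mu = sum(w * np.outer(y, y) for w, y in zip(weights, steps))
        return (1 - c_mu * sum(weights)) * C + c_mu * rank_mu
    ```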

    Biobjective Performance Assessment with the COCO Platform

    This document details the rationales behind assessing the performance of numerical black-box optimizers on multi-objective problems within the COCO platform and in particular on the biobjective test suite bbob-biobj. The evaluation is based on the hypervolume of all non-dominated solutions in the archive of candidate solutions and measures the runtime until the hypervolume value exceeds prescribed target values.
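
    In the biobjective (minimization) case the indicator reduces to a two-dimensional hypervolume, which a simple sweep computes exactly; a self-contained sketch with an explicit reference point:

    ```python
    def hypervolume_2d(points, ref):
        """Hypervolume of 2-D objective vectors (both objectives minimized)
        relative to a reference point that every counted point dominates."""
        pts = sorted(tuple(p) for p in points
                     if p[0] < ref[0] and p[1] < ref[1])
        hv, y_prev = 0.0, ref[1]
        for x, y in pts:                 # x ascending; keep only y-improvers
            if y < y_prev:
                hv += (ref[0] - x) * (y_prev - y)
                y_prev = y
        return hv

    # Example: two mutually non-dominated points against reference (1, 1).
    print(hypervolume_2d([(0.2, 0.8), (0.6, 0.3)], (1.0, 1.0)))  # 0.36
    ```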

    COCO: A Platform for Comparing Continuous Optimizers in a Black-Box Setting

    We introduce COCO, an open-source platform for Comparing Continuous Optimizers in a black-box setting. COCO aims at automating the tedious and repetitive task of benchmarking numerical optimization algorithms to the greatest possible extent. The platform and the underlying methodology allow deterministic and stochastic solvers for both single- and multiobjective optimization to be benchmarked in the same framework. We present the rationales behind the (decade-long) development of the platform as a general proposition for guidelines towards better benchmarking. We detail underlying fundamental concepts of COCO such as the definition of a problem as a function instance, the underlying idea of instances, the use of target values, and runtime, defined by the number of function calls, as the central performance measure. Finally, we give a quick overview of the basic code structure and the currently available test suites.
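
    A minimal usage sketch with the `cocoex` Python module that ships with the platform, using pure random search as a placeholder solver; the suite and observer option strings below are illustrative, and runtime is simply the number of `problem(x)` calls the observer records.

    ```python
    import cocoex   # COCO experimentation module
    import numpy as np

    suite = cocoex.Suite("bbob", "", "")   # each problem is a function instance
    observer = cocoex.Observer("bbob", "result_folder: random_search_example")

    for problem in suite:
        problem.observe_with(observer)     # log every evaluation of this problem
        for _ in range(100 * problem.dimension):   # tiny illustrative budget
            x = problem.lower_bounds + np.random.rand(problem.dimension) * (
                problem.upper_bounds - problem.lower_bounds)
            problem(x)                     # one function call = one unit of runtime
    ```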