
    Benchmarking the NEWUOA on the BBOB-2009 Function Testbed

    The NEWUOA algorithm, which belongs to the class of derivative-free optimization algorithms, is benchmarked on the BBOB-2009 noise-free testbed. A multistart strategy is applied with a budget of up to 10^5 function evaluations times the search-space dimension, resulting in the algorithm solving 11 functions in 20-D. Results are shown and discussed for the algorithm using both the recommended number of interpolation points for the underlying model and the full model.
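
    A minimal sketch of such a multistart protocol, using SciPy's Nelder-Mead as a stand-in local derivative-free solver (NEWUOA itself is not in SciPy); the function and parameter names here are illustrative, not the benchmarked implementation:

        import numpy as np
        from scipy.optimize import minimize

        def multistart_minimize(f, dim, lower, upper, max_evals_factor=1e5, seed=0):
            """Restart a derivative-free local solver from uniformly random
            points until the budget of max_evals_factor * dim evaluations
            is spent; keep the best solution found across all restarts."""
            budget = int(max_evals_factor * dim)
            rng = np.random.default_rng(seed)
            evals_used, best_x, best_f = 0, None, np.inf
            while evals_used < budget:
                x0 = rng.uniform(lower, upper, size=dim)     # fresh random start
                res = minimize(f, x0, method="Nelder-Mead",
                               options={"maxfev": budget - evals_used})
                evals_used += res.nfev
                if res.fun < best_f:
                    best_x, best_f = res.x, res.fun
            return best_x, best_f

        # example: sphere function in 5-D with a small illustrative budget
        x, fx = multistart_minimize(lambda x: float(np.dot(x, x)),
                                    dim=5, lower=-5.0, upper=5.0,
                                    max_evals_factor=100)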

    Comparing Results of 31 Algorithms from the Black-Box Optimization Benchmarking BBOB-2009

    This paper presents results of the BBOB-2009 benchmarking of 31 search algorithms on 24 noiseless functions in a black-box optimization scenario in continuous domain. The runtime of the algorithms, measured in number of function evaluations, is investigated, and a connection between a single convergence graph and the runtime distribution is uncovered. Performance is investigated for different dimensions up to 40-D, for different target precision values, and in different subgroups of functions. Searching in larger dimensions and on multi-modal functions appears to be more difficult. The choice of the best algorithm also depends remarkably on the available budget of function evaluations.
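
    The runtime measure behind this comparison can be made concrete: a run's runtime to a target is the number of evaluations until the best-so-far value first reaches that target precision, and aggregating over runs yields an empirical runtime distribution. A minimal sketch with hypothetical inputs (per-evaluation function-value histories), not the COCO post-processing code itself:

        import numpy as np

        def runtime_to_target(f_history, f_target):
            """Evaluations until the best-so-far value first reaches
            f_target; np.inf if the run never succeeds within its budget."""
            best = np.minimum.accumulate(np.asarray(f_history, dtype=float))
            hits = np.nonzero(best <= f_target)[0]
            return hits[0] + 1 if hits.size else np.inf

        def empirical_runtime_distribution(histories, f_target, budgets):
            """Fraction of runs solved within each budget: the ECDF of
            runtimes that underlies this kind of cross-algorithm comparison."""
            runtimes = np.array([runtime_to_target(h, f_target)
                                 for h in histories])
            return [(runtimes <= b).mean() for b in budgets]

        # example: two runs, target precision 1e-2, budgets of 3 and 10
        histories = [[1.0, 0.5, 0.004], [1.0, 0.9, 0.8, 0.7]]
        print(empirical_runtime_distribution(histories, 1e-2, [3, 10]))  # [0.5, 0.5]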

    Benchmarking the Local Metamodel CMA-ES on the Noiseless BBOB'2013 Test Bed

    This paper evaluates the performance of a variant of the local meta-model CMA-ES (lmm-CMA) in the BBOB-2013 expensive setting. The lmm-CMA is a surrogate variant of the CMA-ES algorithm: function evaluations are saved by building, with weighted regression, full quadratic meta-models to estimate the candidate solutions' function values. The quality of the approximation is appraised by checking how much the predicted ranking changes when a fraction of the candidate solutions is evaluated on the original objective function. The results are compared with the CMA-ES without meta-modeling and with previously benchmarked algorithms, namely BFGS, NEWUOA, and saACM. It turns out that the additional meta-modeling improves the performance of CMA-ES on almost all BBOB functions, giving significantly worse results only on the attractive sector function. Over all functions, the performance is comparable with saACM, and the lmm-CMA often outperforms NEWUOA and BFGS starting from about 2D^2 function evaluations, with D being the search-space dimension.
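
    The surrogate step described here can be sketched as a weighted least-squares fit of a full quadratic model over archived evaluations. A minimal sketch of that idea; lmm-CMA's actual weighting (distance-based, with a bandwidth) and its rank-change acceptance test are not reproduced:

        import numpy as np

        def quadratic_features(X):
            """Full quadratic features of each row: [x_i*x_j for i <= j, x, 1].
            A full model in dimension d has (d+1)(d+2)/2 coefficients, so at
            least that many archive points are needed for the fit."""
            n, d = X.shape
            quad = np.stack([X[:, i] * X[:, j]
                             for i in range(d) for j in range(i, d)], axis=1)
            return np.hstack([quad, X, np.ones((n, 1))])

        def fit_local_quadratic(X, y, weights):
            """Weighted least-squares fit of a full quadratic meta-model; in
            lmm-CMA the weights would decay with distance from the query point."""
            sw = np.sqrt(np.asarray(weights, dtype=float))
            coef, *_ = np.linalg.lstsq(sw[:, None] * quadratic_features(X),
                                       sw * y, rcond=None)
            return coef

        def predict(coef, X):
            """Estimated function values used to rank candidate solutions."""
            return quadratic_features(X) @ coef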

    The Hessian Estimation Evolution Strategy

    We present a novel black-box optimization algorithm called the Hessian Estimation Evolution Strategy. The algorithm updates the covariance matrix of its sampling distribution by directly estimating the curvature of the objective function. This algorithm design is targeted at twice continuously differentiable problems. For this, we extend the cumulative step-size adaptation algorithm of the CMA-ES to mirrored sampling. We demonstrate that our approach to covariance matrix adaptation is efficient by evaluating it on the BBOB/COCO testbed. We also show that the algorithm is surprisingly robust when its core assumption of a twice continuously differentiable objective function is violated. The approach yields a new evolution strategy with competitive performance, and at the same time it offers an interesting alternative to the usual covariance matrix update mechanism.
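
    The curvature information accessible through mirrored sampling can be illustrated with a finite difference: for a mirrored pair m ± d, the quantity f(m+d) + f(m-d) - 2 f(m) approximates the second directional derivative d^T H d. A minimal sketch of that estimator (the idea only, not the paper's exact covariance update):

        import numpy as np

        def directional_curvature(f, m, d):
            """Finite-difference estimate of the curvature of f along d from
            a mirrored pair around the mean m, normalized by ||d||^2:
                (f(m+d) + f(m-d) - 2*f(m)) / (d.d)  ~  d^T H d / d^T d."""
            return (f(m + d) + f(m - d) - 2.0 * f(m)) / np.dot(d, d)

        # on a quadratic f(x) = x^T H x / 2 the estimate is exact
        H = np.diag([1.0, 10.0])
        f = lambda x: 0.5 * x @ H @ x
        m = np.zeros(2)
        print(directional_curvature(f, m, np.array([1.0, 0.0])))  # 1.0
        print(directional_curvature(f, m, np.array([0.0, 1.0])))  # 10.0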

    Analysis of Different Types of Regret in Continuous Noisy Optimization

    The performance measure of an algorithm is a crucial part of its analysis. Performance can be determined by studying the convergence rate of the algorithm in question: one studies some (hopefully convergent) sequence that measures how good the approximated optimum is compared to the real optimum. The concept of regret is widely used in the bandit literature for assessing the performance of an algorithm. The same concept is also used in the framework of optimization algorithms, sometimes under other names or without a specific name, and the numerical evaluation of the convergence rate of noisy optimization algorithms often involves approximations of regrets. We discuss here two types of approximations of the Simple Regret used in practice for the evaluation of algorithms for noisy optimization. We use specific algorithms of different natures and the noisy sphere function to show the following results. The approximation of the Simple Regret used in some optimization testbeds, termed here Approximate Simple Regret, fails to estimate the Simple Regret convergence rate. We also discuss a recent new approximation of the Simple Regret, which we term Robust Simple Regret, and show its advantages and disadvantages. (Genetic and Evolutionary Computation Conference 2016, Jul 2016, Denver, United States.)
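
    The distinction can be made concrete on the noisy sphere: the Simple Regret evaluates the recommended point on the noise-free objective, while the approximation reuses a noisy evaluation and is therefore dominated by the noise term rather than by the optimizer's true progress. A minimal sketch, with a hypothetical recommended point x_n:

        import numpy as np

        rng = np.random.default_rng(0)
        f_star = 0.0                              # optimum of the sphere

        def sphere(x):                            # noise-free objective
            return float(np.dot(x, x))

        def noisy_sphere(x, sigma=1.0):           # what the optimizer observes
            return sphere(x) + sigma * rng.normal()

        # hypothetical recommendation after n evaluations, near the optimum
        x_n = np.full(5, 1e-4)

        simple_regret = sphere(x_n) - f_star            # ~5e-8: true progress
        approx_simple_regret = noisy_sphere(x_n) - f_star  # O(sigma): noise-dominated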

    LABCAT: Locally adaptive Bayesian optimization using principal component-aligned trust regions

    Bayesian optimization (BO) is a popular method for optimizing expensive black-box functions. BO has several well-documented shortcomings, including computational slowdown with longer optimization runs, poor suitability for non-stationary or ill-conditioned objective functions, and poor convergence characteristics. Several algorithms have been proposed that incorporate local strategies, such as trust regions, into BO to mitigate these limitations; however, none address all of them satisfactorily. To address these shortcomings, we propose the LABCAT algorithm, which extends trust-region-based BO by adding a principal-component-aligned rotation and an adaptive rescaling strategy based on the length-scales of a local Gaussian process surrogate model with automatic relevance determination. Through extensive numerical experiments using a set of synthetic test functions and the well-known COCO benchmarking software, we show that the LABCAT algorithm outperforms several state-of-the-art BO and other black-box optimization algorithms.
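
    The principal-component-aligned rotation and length-scale rescaling can be sketched as a local change of coordinates: center the observations in the trust region, rotate onto their principal axes, and rescale each axis, e.g. by per-axis GP length-scales. The following is an illustration of that general idea under stated assumptions, not LABCAT's exact algorithm:

        import numpy as np

        def pca_aligned_rescale(X, lengthscales):
            """Center local observations, rotate them onto their principal
            axes (SVD), and rescale each axis by a per-axis length-scale.
            Assumes len(X) >= dimension so all d principal axes are
            recovered; `lengthscales` is a hypothetical input, e.g. ARD
            length-scales of a local GP surrogate."""
            mu = X.mean(axis=0)
            Xc = X - mu
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows: principal axes
            Z = (Xc @ Vt.T) / lengthscales                     # rotated, rescaled coords
            return Z, mu, Vt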

    A Study of Adaptive Differential Evolution for Function Optimization Problems (関数最適化問題に対する適応型差分進化法の研究)

    Degree type: course-based doctorate. Dissertation committee: (Chair) Associate Professor Alex Fukunaga, University of Tokyo; Professor Takashi Ikegami, University of Tokyo; Professor Kazuhiro Ueda, University of Tokyo; Professor Yasushi Yamaguchi, University of Tokyo; Professor Hitoshi Iba, University of Tokyo. University of Tokyo (東京大学).