
    Self-Adaptive Surrogate-Assisted Covariance Matrix Adaptation Evolution Strategy

    This paper presents a novel mechanism to adapt surrogate-assisted population-based algorithms. This mechanism is applied to ACM-ES, a recently proposed surrogate-assisted variant of CMA-ES. The resulting algorithm, saACM-ES, adjusts online the lifelength of the current surrogate model (the number of CMA-ES generations before a new surrogate is learned) and the surrogate hyper-parameters. Both heuristics significantly improve the quality of the surrogate model, yielding a significant speed-up of saACM-ES compared to the ACM-ES and CMA-ES baselines. The empirical validation of saACM-ES on the BBOB-2012 noiseless testbed demonstrates the efficiency of the proposed approach and its scalability with respect to the problem dimension and the population size; the algorithm reaches new best results on some of the benchmark problems.
    Comment: Genetic and Evolutionary Computation Conference (GECCO 2012), 2012
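    The lifelength control described above can be sketched as a simple feedback rule: the worse the current surrogate predicts the true objective, the sooner a new one is learned. This is a minimal illustration, not the paper's exact update; the function name and the linear error-to-lifelength mapping are assumptions.

    ```python
    # Illustrative sketch of surrogate-lifelength control in the spirit of
    # saACM-ES. The linear mapping and n_max are assumed, not from the paper.

    def adapt_lifelength(model_error, n_max=20):
        """Map a surrogate error estimate in [0, 1] to a lifelength n_hat:
        an accurate model (error ~ 0) is trusted for n_max generations,
        an inaccurate one (error ~ 1) is relearned immediately (n_hat = 0)."""
        error = min(max(model_error, 0.0), 1.0)  # clamp to [0, 1]
        return round((1.0 - error) * n_max)
    ```

    In an actual run, `model_error` would come from comparing surrogate ranks against true-objective ranks on the evaluated individuals.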

    Maximum Likelihood-based Online Adaptation of Hyper-parameters in CMA-ES

    The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is widely accepted as a robust derivative-free continuous optimization algorithm for non-linear and non-convex optimization problems. CMA-ES is well known to be almost parameterless, meaning that only one hyper-parameter, the population size, is expected to be tuned by the user. In this paper, we propose a principled approach called self-CMA-ES to achieve online adaptation of CMA-ES hyper-parameters in order to improve its overall performance. Experimental results show that for larger-than-default population sizes, the default hyper-parameter settings of CMA-ES are far from optimal, and that self-CMA-ES allows for dynamically approaching optimal settings.
    Comment: 13th International Conference on Parallel Problem Solving from Nature (PPSN 2014), 2014

    On the Effect of Mirroring in the IPOP Active CMA-ES on the Noiseless BBOB Testbed

    Mirrored mutations and active covariance matrix adaptation are two recent ideas to improve the well-known covariance matrix adaptation evolution strategy (CMA-ES), a state-of-the-art algorithm for numerical optimization. It turns out that both mechanisms can be implemented simultaneously. In this paper, we investigate the impact of mirrored mutations on the so-called IPOP active CMA-ES. We find that additional mirrored mutations improve the IPOP active CMA-ES statistically significantly, but only by a small margin, on several functions, while no statistically significant performance decline is ever observed. Furthermore, experiments on different function instances with some algorithm parameters and stopping criteria changed reveal essentially the same results.
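    The core of mirrored mutations is that sampled steps come in antithetic pairs, one step and its reflection through the distribution mean. The sketch below shows only this pairing; the restarts (IPOP) and the active covariance update of the benchmarked algorithm are omitted, and the function name is an assumption.

    ```python
    import numpy as np

    # Minimal sketch of mirrored sampling: each sampled step z is reused with
    # its sign flipped, so offspring come in the antithetic pair
    # m + sigma*z and m - sigma*z.

    def mirrored_offspring(mean, sigma, z):
        """Return the antithetic pair of candidate solutions for one step z."""
        step = sigma * np.asarray(z)
        return mean + step, mean - step
    ```

    By construction, the two candidates of a pair always average to the mean, which is what makes the mirrored step a "free" second trial direction.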

    Black-box optimization benchmarking of IPOP-saACM-ES on the BBOB-2012 noisy testbed

    In this paper, we study the performance of IPOP-saACM-ES, a recently proposed self-adaptive surrogate-assisted Covariance Matrix Adaptation Evolution Strategy. The algorithm was tested using restarts until a total budget of 10^6 · D function evaluations was reached, where D is the dimension of the function search space. The experiments show that the surrogate model control allows IPOP-saACM-ES to be as robust as the original IPOP-aCMA-ES and to outperform the latter by a factor of 2 to 3 on 6 benchmark problems with moderate noise. On 15 out of 30 benchmark problems in dimension 20, IPOP-saACM-ES exceeds the records observed during BBOB-2009 and BBOB-2010.
    Comment: Genetic and Evolutionary Computation Conference (GECCO 2012), 2012

    KL-based Control of the Learning Schedule for Surrogate Black-Box Optimization

    This paper investigates the control of an ML component within the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) devoted to black-box optimization. The known weakness of CMA-ES is its sample complexity: the number of evaluations of the objective function needed to approximate the global optimum. This weakness is commonly addressed through surrogate optimization, learning an estimate of the objective function, a.k.a. a surrogate model, and replacing most evaluations of the true objective function with the (inexpensive) evaluation of the surrogate model. This paper presents a principled control of the learning schedule (when to relearn the surrogate model), based on the Kullback-Leibler divergence between the current search distribution and the training distribution of the former surrogate model. The experimental validation of the proposed approach shows significant performance gains on a comprehensive set of ill-conditioned benchmark problems, compared to the best state of the art, including the quasi-Newton high-precision BFGS method.
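    The trigger described above needs the KL divergence between two multivariate Gaussians: the current CMA-ES search distribution and the one the surrogate was trained under. The closed form below is the standard formula; how the paper maps the divergence to a relearning decision is not reproduced here.

    ```python
    import numpy as np

    def kl_gaussians(mu0, cov0, mu1, cov1):
        """KL( N(mu0, cov0) || N(mu1, cov1) ) in nats, via the closed form
        0.5 * ( tr(S1^-1 S0) + (mu1-mu0)^T S1^-1 (mu1-mu0) - n
                + ln det S1 - ln det S0 )."""
        n = len(mu0)
        inv1 = np.linalg.inv(cov1)
        diff = np.asarray(mu1) - np.asarray(mu0)
        _, logdet0 = np.linalg.slogdet(cov0)
        _, logdet1 = np.linalg.slogdet(cov1)
        return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff
                      - n + logdet1 - logdet0)
    ```

    A schedule controller would relearn the surrogate once this divergence exceeds some threshold, i.e. once the search distribution has drifted far enough from the model's training distribution.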

    Benchmarking a Weighted Negative Covariance Matrix Update on the BBOB-2010 Noisy Testbed

    In a companion paper, we presented a weighted negative update of the covariance matrix in CMA-ES: weighted active CMA-ES or, in short, aCMA-ES. In this paper, we benchmark the IPOP-aCMA-ES on the BBOB-2010 noisy testbed in search space dimensions between 2 and 40 and compare its performance with the IPOP-CMA-ES. The aCMA-ES suffers a moderate performance loss, of less than a factor of two, on the sphere function with two different noise models. On the other hand, it enjoys a (significant) performance gain, up to a factor of four, on 13 unimodal functions in various dimensions, in particular the larger ones. Compared to the best performance observed during BBOB-2009, the IPOP-aCMA-ES sets a new record on ten functions overall. The global picture is in favor of aCMA-ES, which might establish a new standard for noisy problems as well.
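    The weighted negative update mentioned above assigns negative weights to the worst-ranked steps so that the covariance matrix actively shrinks along unsuccessful directions. The toy rank-mu update below illustrates only that idea; the weights, the learning rate, and the omission of the rank-one and path terms are all simplifying assumptions.

    ```python
    import numpy as np

    # Toy sketch of a weighted rank-mu covariance update with negative
    # weights for the worst steps, in the spirit of active CMA-ES.

    def active_rank_mu_update(C, steps, weights, c_mu=0.1):
        """steps: rows are normalized steps z_i, sorted best to worst.
        weights: positive for good steps, negative for bad ones."""
        Z = sum(w * np.outer(z, z) for w, z in zip(weights, steps))
        decay = 1.0 - c_mu * sum(abs(w) for w in weights)
        return decay * C + c_mu * Z
    ```

    With one good step along the first axis and one bad step along the second, the updated matrix grows variance along the first direction and shrinks it along the second.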

    A Computationally Efficient Limited Memory CMA-ES for Large Scale Optimization

    We propose a computationally efficient limited memory Covariance Matrix Adaptation Evolution Strategy for large scale optimization, which we call the LM-CMA-ES. The LM-CMA-ES is a stochastic, derivative-free algorithm for numerical optimization of non-linear, non-convex optimization problems in continuous domain. Inspired by the limited memory BFGS method of Liu and Nocedal (1989), the LM-CMA-ES samples candidate solutions according to a covariance matrix reproduced from m direction vectors selected during the optimization process. The decomposition of the covariance matrix into Cholesky factors makes it possible to reduce the time and memory complexity of the sampling to O(mn), where n is the number of decision variables. When n is large (e.g., n > 1000), even relatively small values of m (e.g., m = 20 or 30) are sufficient to efficiently solve fully non-separable problems and to reduce the overall run-time.
    Comment: Genetic and Evolutionary Computation Conference (GECCO'2014), 2014
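    The O(mn) sampling cost comes from never forming an n x n matrix: a standard normal vector is passed through m rank-one transforms, each costing O(n). The sketch below shows that product structure only; the stored vectors and coefficients are placeholders for the quantities LM-CMA-ES actually accumulates, and their update rule is omitted.

    ```python
    import numpy as np

    # Sketch of O(m*n) sampling: apply m rank-one transforms
    # (I + b_j v_j v_j^T) to a standard normal vector z instead of
    # multiplying by an explicit n x n Cholesky factor.

    def sample_step(z, vs, bs):
        """Compute A z = (prod_j (I + b_j v_j v_j^T)) z without forming A."""
        a = np.asarray(z, dtype=float).copy()
        for v, b in zip(vs, bs):   # m factors, each applied in O(n)
            a = a + b * v * (v @ a)
        return a
    ```

    With an empty direction memory the transform degenerates to the identity, i.e. plain isotropic sampling.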

    Comparing Mirrored Mutations and Active Covariance Matrix Adaptation in the IPOP-CMA-ES on the Noiseless BBOB Testbed

    This paper investigates two variants of the well-known Covariance Matrix Adaptation Evolution Strategy (CMA-ES). Active covariance matrix adaptation allows for negative weights in the covariance matrix update rule, such that "bad" steps are (actively) taken into account when updating the covariance matrix of the sample distribution. On the other hand, mirrored mutations via selective mirroring also take the "bad" steps into account. In this case, they are first evaluated when taken in the opposite direction (mirrored) and then considered for regular selection. In this study, we compare the performance of the two variants empirically on the noiseless BBOB testbed. The CMA-ES with selectively mirrored mutations outperforms the active CMA-ES only on the sphere function, while the active variant statistically significantly outperforms mirrored mutations on 10 of 24 functions in several dimensions.
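    Selective mirroring, as described above, differs from plain mirrored sampling in that only offspring that turn out bad are re-tried in the mirrored direction. The sketch below reflects the worst-ranked offspring through the mean; the mirrored fraction (the worst half here) and the function name are illustrative assumptions.

    ```python
    import numpy as np

    # Sketch of selective mirroring: offspring are evaluated as sampled,
    # then the worst of them are reflected through the mean for a second
    # evaluation. Assumes minimization (lower fitness is better).

    def selective_mirror(mean, offspring, fitnesses):
        """Return mirrored counterparts of the worst half of the offspring."""
        order = np.argsort(fitnesses)      # ascending: best first
        worst = order[len(order) // 2:]    # indices of the worst half
        return [2 * np.asarray(mean) - np.asarray(offspring[i]) for i in worst]
    ```

    Each returned point satisfies x_mirrored = 2m - x, i.e. it is the original bad step taken in the opposite direction from the mean.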