    Alternative Restart Strategies for CMA-ES

    This paper focuses on the restart strategy of CMA-ES on multi-modal functions. A first alternative strategy proceeds by decreasing the initial step-size of the mutation while doubling the population size at each restart. A second strategy adaptively allocates the computational budget among the restart settings in the BIPOP scheme. Both restart strategies are validated on the BBOB benchmark; their generality is also demonstrated on an independent real-world problem suite related to spacecraft trajectory optimization.
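
    The first strategy is easy to picture in code. The sketch below is a hypothetical restart loop built on the widely used `cma` Python package; the doubling/halving factors and the budget are illustrative assumptions, not the paper's exact settings.

    ```python
    import math
    import cma  # pip install cma

    def restart_cmaes(f, x0, sigma0=2.0, budget=100_000, sigma_decay=0.5):
        """Restart CMA-ES, doubling the population size and shrinking the
        initial step-size at each restart (illustrative factors)."""
        popsize = 4 + int(3 * math.log(len(x0)))  # default lambda
        sigma, evals, best = sigma0, 0, float('inf')
        while evals < budget:
            es = cma.CMAEvolutionStrategy(
                x0, sigma,
                {'popsize': popsize, 'maxfevals': budget - evals, 'verbose': -9})
            es.optimize(f)
            evals += es.result.evaluations
            best = min(best, es.result.fbest)
            popsize *= 2          # grow the population, as in IPOP
            sigma *= sigma_decay  # but decrease the initial step-size
        return best
    ```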

    Maximum Likelihood-based Online Adaptation of Hyper-parameters in CMA-ES

    The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is widely accepted as a robust derivative-free continuous optimization algorithm for non-linear and non-convex optimization problems. CMA-ES is well known to be almost parameterless: only one hyper-parameter, the population size, is typically left for the user to tune. In this paper, we propose a principled approach, called self-CMA-ES, to achieve online adaptation of the CMA-ES hyper-parameters in order to improve its overall performance. Experimental results show that for larger-than-default population sizes, the default hyper-parameter settings of CMA-ES are far from optimal, and that self-CMA-ES allows for dynamically approaching optimal settings. Comment: 13th International Conference on Parallel Problem Solving from Nature (PPSN 2014), 2014.
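
    The paper's title points at the core idea: hyper-parameter settings are scored online by how likely the current sampling distribution would have been to generate the recent successful offspring. The snippet below sketches only that maximum-likelihood criterion under standard CMA-ES notation; which hyper-parameters are adapted, and by what secondary optimizer, is specified in the paper.

    ```python
    import numpy as np

    def elite_log_likelihood(elite, mean, sigma, C):
        """Log-likelihood (up to additive constants) of the elite offspring
        under the sampling distribution N(mean, sigma^2 * C). self-CMA-ES
        uses a maximum-likelihood criterion of this flavor to compare
        alternative hyper-parameter settings online (illustrative sketch)."""
        n = len(mean)
        C_inv = np.linalg.inv(C)
        _, logdet_C = np.linalg.slogdet(C)
        ll = 0.0
        for x in elite:
            d = (np.asarray(x) - mean) / sigma
            ll += -0.5 * (d @ C_inv @ d) - 0.5 * logdet_C - n * np.log(sigma)
        return ll
    ```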

    Benchmarking a Weighted Negative Covariance Matrix Update on the BBOB-2010 Noisy Testbed

    In a companion paper, we presented a weighted negative update of the covariance matrix in CMA-ES, called weighted active CMA-ES or, in short, aCMA-ES. In this paper, we benchmark IPOP-aCMA-ES on the BBOB-2010 noisy testbed in search-space dimensions 2 to 40 and compare its performance with IPOP-CMA-ES. The aCMA suffers a moderate performance loss, of less than a factor of two, on the sphere function with two different noise models. On the other hand, aCMA enjoys a significant performance gain, of up to a factor of four, on 13 unimodal functions in various dimensions, in particular the larger ones. Compared to the best performance observed during BBOB-2009, IPOP-aCMA-ES sets a new record on ten functions overall. The global picture favors aCMA, which might establish a new standard also for noisy problems.
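
    The "weighted negative update" admits a compact description: the worst-ranked steps of a generation enter the covariance-matrix update with negative recombination weights, actively shrinking the distribution along unpromising directions. Below is a minimal numpy sketch of an update of this form; the exact weights, learning rates, and safeguards of aCMA-ES are given in the companion paper.

    ```python
    import numpy as np

    def weighted_active_update(C, p_c, Y_ranked, weights, c1, cmu):
        """One covariance update with signed recombination weights.
        Y_ranked: rows are steps y_i = (x_i - old_mean) / sigma, best first.
        weights:  positive for the best offspring, negative for the worst
                  (illustrative form, not the paper's exact constants)."""
        rank_one = np.outer(p_c, p_c)
        rank_mu = sum(w * np.outer(y, y) for w, y in zip(weights, Y_ranked))
        return (1 - c1 - cmu * sum(weights)) * C + c1 * rank_one + cmu * rank_mu
    ```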

    Study of the Fractal decomposition based metaheuristic on low-dimensional Black-Box optimization problems

    This paper analyzes the performance of the Fractal Decomposition Algorithm (FDA) metaheuristic applied to low-dimensional continuous optimization problems. The algorithm was originally developed to deal efficiently with high-dimensional continuous optimization problems by building a fractal-based search tree with a branching factor proportional to the number of dimensions. Here, we aim to answer the question of whether FDA can be equally effective on low-dimensional problems. For this purpose, we evaluate its performance on the Black Box Optimization Benchmark (BBOB) for dimensions 2, 3, 5, 10, 20, and 40. The experimental results show that, overall, FDA in its current form does not perform competitively in low dimensions. Among the function groups, FDA performs best on the Misc. moderate and Weak structure functions.
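
    The branching factor linear in the dimension comes from the geometry of the decomposition: each region is split into one pair of child hyperspheres per coordinate axis. The sketch below shows only that geometric step under assumed radii; FDA's actual inflation coefficients and its leaf-level local search are described in the original paper.

    ```python
    import numpy as np

    def decompose(center, radius, dim):
        """Split a hypersphere into 2*dim child hyperspheres, one pair per
        axis, giving a search tree whose branching factor grows linearly
        with the dimension (illustrative radii, not FDA's exact ones)."""
        child_radius = radius / 2.0
        children = []
        for d in range(dim):
            for sign in (1.0, -1.0):
                c = np.array(center, dtype=float)
                c[d] += sign * child_radius   # shift along axis d
                children.append((c, child_radius))
        return children
    ```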

    Black-Box Data-efficient Policy Search for Robotics

    The most data-efficient algorithms for reinforcement learning (RL) in robotics are based on uncertain dynamical models: after each episode, they first learn a dynamical model of the robot, then they use an optimization algorithm to find a policy that maximizes the expected return given the model and its uncertainties. It is often believed that this optimization can be tractable only if analytical, gradient-based algorithms are used; however, these algorithms require specific families of reward functions and policies, which greatly limits the flexibility of the overall approach. In this paper, we introduce a novel model-based RL algorithm, called Black-DROPS (Black-box Data-efficient RObot Policy Search), that: (1) does not impose any constraint on the reward function or the policy (they are treated as black boxes), (2) is as data-efficient as the state-of-the-art algorithm for data-efficient RL in robotics, and (3) is as fast as (or faster than) analytical approaches when several cores are available. The key idea is to replace the gradient-based optimization algorithm with a parallel, black-box algorithm that takes into account the model uncertainties. We demonstrate the performance of our new algorithm on two standard control benchmark problems (in simulation) and a low-cost robotic manipulator (with a real robot). Comment: Accepted at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2017; Code at http://github.com/resibots/blackdrops; Video at http://youtu.be/kTEyYiIFGP
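
    The key idea lends itself to a short sketch: policy quality is estimated by Monte-Carlo rollouts through the learned probabilistic model, so the model's uncertainty shows up directly in the (noisy) objective handed to a black-box optimizer. Everything below is an assumed stand-in interface (`policy`, `model.predict`, and `reward` are hypothetical), not the Black-DROPS codebase.

    ```python
    import numpy as np

    def expected_return(params, policy, model, reward, s0,
                        horizon=50, rollouts=20, rng=None):
        """Noisy Monte-Carlo estimate of a policy's expected return under
        an uncertain dynamics model: each step samples from the model's
        predictive distribution instead of trusting its mean.
        model.predict(s, a) -> (mean, std) is an assumed interface."""
        rng = rng or np.random.default_rng()
        total = 0.0
        for _ in range(rollouts):              # rollouts run in parallel in practice
            s = np.array(s0, dtype=float)
            for _ in range(horizon):
                a = policy(params, s)
                mu, std = model.predict(s, a)  # e.g., a GP's predictive moments
                s = rng.normal(mu, std)        # propagate the uncertainty
                total += reward(s, a)
        return total / rollouts

    # This estimate is then maximized over `params` with a parallel
    # black-box optimizer (e.g., a CMA-ES variant for noisy functions).
    ```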

    Black-box Optimization Benchmarking of NIPOP-aCMA-ES and NBIPOP-aCMA-ES on the BBOB-2012 Noiseless Testbed

    In this paper, we study the performance of NIPOP-aCMA-ES and NBIPOP-aCMA-ES, recently proposed alternative restart strategies for CMA-ES. Both algorithms were tested using restarts until a total budget of $10^6 D$ function evaluations was reached, where $D$ is the dimension of the search space. We compared the new strategies to CMA-ES with the IPOP and BIPOP restart schemes, two algorithms that showed some of the best overall performance during BBOB-2009 and BBOB-2010. We also present the first benchmarking of BIPOP-CMA-ES with the weighted active covariance matrix update (BIPOP-aCMA-ES). The comparison shows that NIPOP-aCMA-ES usually outperforms IPOP-aCMA-ES and performs on par with BIPOP-aCMA-ES while using only the regime of increasing population size. The second strategy, NBIPOP-aCMA-ES, outperforms BIPOP-aCMA-ES in dimension 40 on weakly structured multi-modal functions, thanks to its adaptive allocation of the computational budget between restart regimes.
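
    A minimal way to picture the adaptive allocation: keep two restart regimes (e.g., increasing population size vs. small population with small initial step-size) and give the regime that currently holds the best solution a larger share of the remaining budget. The sketch below is only that scheduling idea with a hypothetical `run_regime` interface; the paper's exact rule and budget accounting differ.

    ```python
    def adaptive_restart_schedule(run_regime, budget, regimes=('ipop', 'local')):
        """run_regime(name) performs one restart under the named regime and
        returns (best_f, evals_used). The currently most successful regime
        is given twice the budget share of the other (illustrative rule)."""
        best = {r: float('inf') for r in regimes}
        spent = {r: 0 for r in regimes}
        while sum(spent.values()) < budget:
            leader = min(regimes, key=lambda r: best[r])
            weight = {r: (2.0 if r == leader else 1.0) for r in regimes}
            regime = min(regimes, key=lambda r: spent[r] / weight[r])
            f, evals = run_regime(regime)
            best[regime] = min(best[regime], f)
            spent[regime] += evals
        return best
    ```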

    A Computationally Efficient Limited Memory CMA-ES for Large Scale Optimization

    We propose a computationally efficient limited-memory Covariance Matrix Adaptation Evolution Strategy for large-scale optimization, which we call LM-CMA-ES. LM-CMA-ES is a stochastic, derivative-free algorithm for numerical optimization of non-linear, non-convex problems in continuous domains. Inspired by the limited-memory BFGS method of Liu and Nocedal (1989), LM-CMA-ES samples candidate solutions according to a covariance matrix reproduced from $m$ direction vectors selected during the optimization process. The decomposition of the covariance matrix into Cholesky factors reduces the time and memory complexity of the sampling to $O(mn)$, where $n$ is the number of decision variables. When $n$ is large (e.g., $n > 1000$), even relatively small values of $m$ (e.g., $m = 20, 30$) are sufficient to efficiently solve fully non-separable problems and to reduce the overall run-time. Comment: Genetic and Evolutionary Computation Conference (GECCO 2014), 2014.
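
    The $O(mn)$ claim rests on never forming the $n \times n$ matrix: the Cholesky factor is kept as a sequence of $m$ rank-one updates and applied to a vector on the fly. A numpy sketch of that reconstruction is below; the scalars `a` and `b[j]` come from the Cholesky update formulas in the paper and are treated here as given.

    ```python
    import numpy as np

    def cholesky_factor_times(z, U, V, a, b):
        """Compute A @ z in O(m*n), where A is defined implicitly by m
        rank-one updates A_j = a * A_{j-1} + b[j] * u_j v_j^T with A_0 = I.
        U, V: (m, n) arrays of stored direction vectors; a: decay scalar;
        b: per-update scalars (sketch of the LM-CMA reconstruction)."""
        x = z.copy()
        for u_j, v_j, b_j in zip(U, V, b):
            x = a * x + b_j * (v_j @ z) * u_j
        return x

    # Sampling: candidate = mean + sigma * cholesky_factor_times(z, U, V, a, b)
    # with z ~ N(0, I); no n-by-n matrix is ever stored or factorized.
    ```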

    Mirrored Variants of the (1,4)-CMA-ES Compared on the Noiseless BBOB-2010 Testbed

    Derandomization by means of mirrored samples has recently been introduced to enhance the performance of $(1,\lambda)$-Evolution Strategies (ESs), with the aim of designing fast and robust stochastic local search algorithms. This paper compares, on the BBOB-2010 noiseless benchmark testbed, two variants of the (1,4)-CMA-ES that use mirrored samples. Independent restarts are conducted up to a total budget of $10^4 D$ function evaluations, where $D$ is the dimension of the search space. The results show that the improved variants are significantly faster than the baseline (1,4)-CMA-ES on four functions in 20D (seven when sequential selection is used in addition), by a factor of up to 3 (on the attractive sector function). In no case is the baseline (1,4)-CMA-ES significantly faster on any tested target function value in 5D or 20D. Moreover, the algorithm employing both mirroring and sequential selection is significantly better than the variant without sequential selection on five functions in 20D, with expected running times that are about 20% smaller.
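
    Both ingredients are simple to sketch: mirrored sampling evaluates each random step together with its reflection about the mean, and sequential selection stops evaluating offspring as soon as one improves on the parent. Below is an illustrative toy version for a (1,4)-ES with isotropic sampling; the benchmarked algorithms of course use CMA's full covariance adaptation.

    ```python
    import numpy as np

    def mirrored_sequential_step(f, mean, f_mean, sigma, lam=4, rng=None):
        """One (1,lambda) iteration with mirrored sampling and sequential
        selection (toy isotropic sketch, not the full (1,4)-CMA-ES)."""
        rng = rng or np.random.default_rng()
        best_x, best_f = None, np.inf
        for i in range(lam):
            if i % 2 == 0:
                z = rng.standard_normal(mean.shape)
                x = mean + sigma * z           # fresh sample
            else:
                x = mean - sigma * z           # its mirror (derandomized pair)
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
            if fx < f_mean:                    # sequential selection:
                break                          # stop at the first improvement
        return best_x, best_f                  # comma selection: best offspring
    ```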

    CMA-ES with Restarts for Solving CEC 2013 Benchmark Problems

    This paper investigates the performance of six versions of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) with restarts on a set of 28 noiseless optimization problems (including 23 multi-modal ones) designed for the special session on real-parameter optimization of CEC 2013. The experimental validation of the restart strategies shows that: (i) the versions of CMA-ES with the weighted active covariance matrix update outperform the original versions of CMA-ES, especially on ill-conditioned problems; (ii) the original restart strategies with increasing population size (IPOP) are usually outperformed by the bi-population restart strategies, where the initial mutation step-size is also varied; and (iii) the recently proposed alternative restart strategies for CMA-ES demonstrate competitive performance and rank first w.r.t. the proportion of function-target pairs solved after the full run on all 10-, 30-, and 50-dimensional problems.