Alternative Restart Strategies for CMA-ES
This paper focuses on the restart strategy of CMA-ES on multi-modal
functions. A first alternative strategy proceeds by decreasing the initial
step-size of the mutation while doubling the population size at each restart. A
second strategy adaptively allocates the computational budget among the restart
settings in the BIPOP scheme. Both restart strategies are validated on the BBOB
benchmark; their generality is also demonstrated on an independent real-world
problem suite related to spacecraft trajectory optimization.
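The first alternative restart schedule described above can be sketched as follows. This is a minimal illustration, not the paper's code: the abstract only says the initial step-size is decreased while the population size is doubled, so the halving factor used here for the step-size is an assumption.

```python
# Hedged sketch of the restart schedule: double the population size
# at each restart while decreasing the initial mutation step-size
# (the decrease factor of 0.5 is an assumed value for illustration).

def restart_schedule(pop0, sigma0, n_restarts, sigma_factor=0.5):
    """Return the (population_size, initial_step_size) pair used at each restart."""
    schedule = []
    pop, sigma = pop0, sigma0
    for _ in range(n_restarts):
        schedule.append((pop, sigma))
        pop *= 2               # IPOP-style population doubling
        sigma *= sigma_factor  # decrease the initial step-size
    return schedule
```

Each entry of the returned list would parameterize one independent CMA-ES run, restarted until the total evaluation budget is spent.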
Maximum Likelihood-based Online Adaptation of Hyper-parameters in CMA-ES
The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is widely
accepted as a robust derivative-free continuous optimization algorithm for
non-linear and non-convex optimization problems. CMA-ES is well known to be
almost parameterless: the population size is the only hyper-parameter the user
is typically expected to tune. In this paper, we propose a
principled approach called self-CMA-ES to achieve the online adaptation of
CMA-ES hyper-parameters in order to improve its overall performance.
Experimental results show that for larger-than-default population size, the
default settings of hyper-parameters of CMA-ES are far from being optimal, and
that self-CMA-ES allows for dynamically approaching optimal settings.
Comment: 13th International Conference on Parallel Problem Solving from Nature (PPSN 2014), 2014.
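The online-adaptation idea can be illustrated with a deliberately generic sketch. This is not the paper's maximum-likelihood self-CMA-ES mechanism; the function names and the greedy neighborhood search are assumptions made purely to show the shape of an online hyper-parameter adaptation loop.

```python
# Hedged, generic sketch of online hyper-parameter adaptation (an
# assumed greedy scheme, not the paper's likelihood-based self-CMA-ES):
# after each step, the hyper-parameter moves to whichever nearby
# candidate setting scores best under some performance measure.

def adapt_online(score, theta, candidates, steps):
    """Greedily adapt a single hyper-parameter theta over several steps."""
    for _ in range(steps):
        # evaluate neighboring candidate settings and keep the best one
        theta = max(candidates(theta), key=score)
    return theta
```

In the paper's setting, the score would instead be derived from the likelihood of the best-ranked offspring, and the adapted quantities would be CMA-ES learning rates rather than a single scalar.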
Study of the Fractal decomposition based metaheuristic on low-dimensional Black-Box optimization problems
This paper analyzes the performance of the Fractal Decomposition Algorithm
(FDA) metaheuristic applied to low-dimensional continuous optimization
problems. This algorithm was originally developed specifically to deal
efficiently with high-dimensional continuous optimization problems by building
a fractal-based search tree with a branching factor linearly proportional to
the number of dimensions. Here, we aim to answer the question of whether FDA
could be equally effective for low-dimensional problems. For this purpose, we
evaluate the performance of FDA on the Black Box Optimization Benchmark (BBOB)
for dimensions 2, 3, 5, 10, 20, and 40. The experimental results show that
overall the FDA in its current form does not perform well enough. Among
different function groups, FDA shows its best performance on Misc. moderate and
Weak structure functions.
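The fractal-based search tree with a branching factor linear in the dimension can be sketched as follows. This is an assumed formulation for illustration only: the axis-aligned placement and the child-radius ratio are not taken from the paper; the sketch only reflects that each hypersphere is split into two children per dimension, which makes the branching factor linearly proportional to the dimension.

```python
import numpy as np

# Hedged sketch of one fractal decomposition step (assumed geometry,
# not the FDA authors' code): a hypersphere (center, radius) is split
# into 2*d child hyperspheres, one pair per coordinate axis.

def decompose(center, radius, ratio=0.5):
    """Split a hypersphere into 2*d axis-aligned child hyperspheres."""
    d = center.size
    child_r = radius * ratio  # assumed radius-reduction ratio
    children = []
    for axis in range(d):
        offset = np.zeros(d)
        offset[axis] = radius - child_r  # keep the child inside the parent
        children.append((center + offset, child_r))
        children.append((center - offset, child_r))
    return children
```

Recursing on the most promising child yields the search tree whose branching factor, 2 per dimension here, grows linearly with the problem dimension.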
On the Impact of Active Covariance Matrix Adaptation in the CMA-ES With Mirrored Mutations and Small Initial Population Size on the Noiseless BBOB Testbed
Mirrored mutations as well as active covariance matrix adaptation are two techniques that have been introduced into the well-known CMA-ES algorithm for numerical optimization. Here, we investigate the impact of active covariance matrix adaptation in the IPOP-CMA-ES with mirrored mutation and a small initial population size. Active covariance matrix adaptation improves the performance on 8 of the 24 benchmark functions of the noiseless BBOB testbed. The effect is largest on the ill-conditioned functions, with the largest improvement on the discus function, where the expected runtime is more than halved. On the other hand, no statistically significant adverse effects can be observed.
On the Effect of Mirroring in the IPOP Active CMA-ES on the Noiseless BBOB Testbed
Mirrored mutations and active covariance matrix adaptation are two recent ideas to improve the well-known covariance matrix adaptation evolution strategy (CMA-ES), a state-of-the-art algorithm for numerical optimization. It turns out that both mechanisms can be implemented simultaneously. In this paper, we investigate the impact of mirrored mutations on the so-called IPOP active CMA-ES. We find that additional mirrored mutations improve the IPOP active CMA-ES statistically significantly, though only by a small margin, on several functions, while no statistically significant performance decline is observed. Furthermore, experiments on different function instances, with some algorithm parameters and stopping criteria changed, reveal essentially the same results.
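The mirrored-mutation mechanism discussed in the abstracts above can be sketched generically. This is an illustration of the pairing idea only, not the authors' implementation: the function name and the simple isotropic sampling (ignoring the covariance matrix) are assumptions.

```python
import numpy as np

# Hedged sketch of mirrored sampling: for every random mutation step z,
# the mirrored step -z is also evaluated, so candidates come in pairs
# placed symmetrically around the current mean m.

def mirrored_offspring(mean, sigma, n_pairs, rng):
    """Generate 2*n_pairs candidates as mirrored pairs around `mean`."""
    steps = rng.standard_normal((n_pairs, mean.size))
    offspring = []
    for z in steps:
        offspring.append(mean + sigma * z)  # original mutation
        offspring.append(mean - sigma * z)  # mirrored mutation
    return np.array(offspring)
```

In the full CMA-ES, the steps z would additionally be shaped by the adapted covariance matrix; the symmetry of each pair is what reduces sampling noise on many functions.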
Benchmarking a Weighted Negative Covariance Matrix Update on the BBOB-2010 Noisy Testbed
In a companion paper, we presented a weighted negative update of the covariance matrix in the CMA-ES: the weighted active CMA-ES or, in short, aCMA-ES. In this paper, we benchmark the IPOP-aCMA-ES on the BBOB-2010 noisy testbed in search space dimensions between 2 and 40 and compare its performance with the IPOP-CMA-ES. The aCMA suffers a moderate performance loss, of less than a factor of two, on the sphere function with two different noise models. On the other hand, the aCMA enjoys a (significant) performance gain, up to a factor of four, on 13 unimodal functions in various dimensions, in particular the larger ones. Compared to the best performance observed during BBOB-2009, the IPOP-aCMA-ES sets a new record on ten functions overall. The global picture is in favor of aCMA, which might establish a new standard also for noisy problems.
On the Impact of a Small Initial Population Size in the IPOP Active CMA-ES with Mirrored Mutations on the Noiseless BBOB Testbed
Active covariance matrix adaptation and mirrored mutations have been independently proposed as improved variants of the well-known optimization algorithm Covariance Matrix Adaptation Evolution Strategy (CMA-ES) for numerical optimization. This paper investigates the impact of the algorithm's population size when both active covariance matrix adaptation and mirrored mutations are used in the CMA-ES. To this end, we compare the CMA-ES with its standard, dimension-dependent population size against a version with half this population size.
Online Selection of CMA-ES Variants
In the field of evolutionary computation, one of the most challenging topics
is algorithm selection. Knowing which heuristics to use for which optimization
problem is key to obtaining high-quality solutions. We aim to extend this
research topic by taking a first step towards a selection method for adaptive
CMA-ES algorithms. We build upon the theoretical work done by van Rijn et al.
[PPSN'18], in which the potential of switching between different CMA-ES
variants was quantified in the context of a modular CMA-ES framework.
We demonstrate in this work that their proposed approach is not very
reliable, in that implementing the suggested adaptive configurations does not
yield the predicted performance gains. We propose a revised approach, which
results in a more robust fit between predicted and actual performance. The
adaptive CMA-ES approach obtains performance gains on 18 out of 24 tested
functions of the BBOB benchmark, with stable advantages of up to 23%. An
analysis of module activation indicates which modules are most crucial for the
different phases of optimizing each of the 24 benchmark problems. The module
activation also suggests that additional gains are possible when including the
(B)IPOP modules, which we have excluded from the present work.
Comment: To appear at the Genetic and Evolutionary Computation Conference (GECCO'19); an appendix will be added in due time.