A Simple Modification in CMA-ES Achieving Linear Time and Space Complexity
This paper proposes a simple modification of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) for high-dimensional objective functions, reducing the internal time and space complexity from quadratic to linear. The covariance matrix is constrained to be diagonal, and the resulting algorithm, sep-CMA-ES, samples each coordinate independently. Because the model complexity is reduced, the learning rate for the covariance matrix can be increased. Consequently, on essentially separable functions, sep-CMA-ES significantly outperforms CMA-ES. For dimensions larger than a hundred, even on the non-separable Rosenbrock function, sep-CMA-ES needs fewer function evaluations than CMA-ES.
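As a rough illustration (not the authors' code), sampling with a diagonal covariance model touches each coordinate exactly once, which is where the linear time and space cost comes from; the function and parameter names below are hypothetical:

```python
import numpy as np

def sample_diagonal(mean, sigma, diag_c, pop_size, rng):
    """Sample a population with a diagonal covariance model (sep-CMA-ES style).

    With the covariance constrained to diag(diag_c), each coordinate is
    sampled independently, so the cost per individual is O(n) rather
    than the O(n^2) of a full covariance matrix.
    """
    n = mean.shape[0]
    z = rng.standard_normal((pop_size, n))     # coordinate-wise N(0, 1)
    return mean + sigma * z * np.sqrt(diag_c)  # scale each axis separately

rng = np.random.default_rng(0)
pop = sample_diagonal(np.zeros(5), 0.5, np.ones(5), pop_size=8, rng=rng)
print(pop.shape)  # (8, 5)
```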
A Computationally Efficient Limited Memory CMA-ES for Large Scale Optimization
We propose a computationally efficient limited memory Covariance Matrix
Adaptation Evolution Strategy for large scale optimization, which we call the
LM-CMA-ES. The LM-CMA-ES is a stochastic, derivative-free algorithm for
numerical optimization of non-linear, non-convex optimization problems in
continuous domain. Inspired by the limited memory BFGS method of Liu and
Nocedal (1989), the LM-CMA-ES samples candidate solutions according to a
covariance matrix reproduced from direction vectors selected during the
optimization process. The decomposition of the covariance matrix into Cholesky
factors makes it possible to reduce the time and memory complexity of the
sampling to O(mn), where n is the number of decision variables and m is the
number of stored direction vectors. When n is large (e.g., n > 1000), even
relatively small values of m are sufficient to efficiently solve fully
non-separable problems and to reduce the overall run-time.
Comment: Genetic and Evolutionary Computation Conference (GECCO'2014), 2014.
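The O(mn) sampling idea can be sketched as follows: instead of an n-by-n factor, keep m direction vectors and apply m rank-one transformations to a standard normal vector. This is an illustrative simplification; the coefficients `a` and `b` are placeholders, not the algorithm's actual weights:

```python
import numpy as np

def lm_sample(z, dirs, a=0.9, b=0.1):
    """Hypothetical sketch of limited-memory sampling (LM-CMA-ES style).

    Each stored direction vector contributes one rank-one update to the
    implicit Cholesky factor, so producing a sample costs O(mn) time and
    the algorithm stores only O(mn) numbers.
    """
    x = z.copy()
    for v in dirs:                   # m stored direction vectors
        x = a * x + b * v * (v @ x)  # rank-one update applied to x
    return x

rng = np.random.default_rng(1)
n, m = 1000, 10
dirs = [rng.standard_normal(n) for _ in range(m)]
x = lm_sample(rng.standard_normal(n), dirs)
print(x.shape)  # (1000,)
```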
An Asynchronous Implementation of the Limited Memory CMA-ES
We present our asynchronous implementation of the LM-CMA-ES algorithm, which
is a modern evolution strategy for solving complex large-scale continuous
optimization problems. Our implementation brings the best results when the
number of cores is relatively high and the computational complexity of the
fitness function is also high. The experiments with benchmark functions show
that it is able to outperform the original algorithm on the Sphere function,
reaches certain thresholds faster on the Rosenbrock and Ellipsoid functions,
and, surprisingly, performs much better than the original version on the
Rastrigin function.
Comment: 9 pages, 4 figures, 4 tables; this is a full version of a paper which
has been accepted as a poster at the IEEE ICMLA conference 201
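A minimal sketch of the asynchronous pattern (not the authors' implementation, and using a plain mutation operator rather than LM-CMA-ES sampling): each finished evaluation immediately spawns a new candidate, so expensive fitness calls never wait for a full synchronous generation. All names and parameters here are assumptions for illustration:

```python
import concurrent.futures as cf
import numpy as np

def sphere(x):
    # Stand-in for an expensive fitness function; asynchrony pays off
    # when this call dominates the run-time.
    return float(np.sum(x * x))

def async_es(n=10, workers=4, budget=200, sigma=0.3, seed=2):
    """Toy asynchronous (1+lambda)-style loop: keep the worker pool
    saturated by submitting a fresh candidate as soon as any
    evaluation completes."""
    rng = np.random.default_rng(seed)
    best_x = np.ones(n)
    best_f = sphere(best_x)
    with cf.ThreadPoolExecutor(max_workers=workers) as pool:
        pending = {}
        evals = 0
        for _ in range(workers):               # fill the pipeline
            c = best_x + sigma * rng.standard_normal(n)
            pending[pool.submit(sphere, c)] = c
            evals += 1
        while pending:
            done, _ = cf.wait(pending, return_when=cf.FIRST_COMPLETED)
            for fut in done:
                c = pending.pop(fut)
                f = fut.result()
                if f < best_f:                 # greedy acceptance
                    best_f, best_x = f, c
                if evals < budget:             # refill immediately
                    c = best_x + sigma * rng.standard_normal(n)
                    pending[pool.submit(sphere, c)] = c
                    evals += 1
    return best_f

print(async_es())
```

The key design point is that `cf.wait(..., FIRST_COMPLETED)` returns as soon as any worker finishes, so a slow fitness evaluation on one core never stalls the others.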
SamACO: variable sampling ant colony optimization algorithm for continuous optimization
An ant colony optimization (ACO) algorithm offers
algorithmic techniques for optimization by simulating the foraging behavior of a group of ants to perform incremental solution
constructions and to realize a pheromone laying-and-following
mechanism. Although ACO was originally designed for solving discrete
(combinatorial) optimization problems, the ACO procedure is
also applicable to continuous optimization. This paper presents
a new way of extending ACO to solving continuous optimization
problems by focusing on continuous variable sampling as a key
to transforming ACO from discrete optimization to continuous
optimization. The proposed SamACO algorithm consists of three
major steps, i.e., the generation of candidate variable values for
selection, the ants’ solution construction, and the pheromone
update process. The distinct characteristics of SamACO are the
cooperation of a novel sampling method for discretizing the
continuous search space and an efficient incremental solution
construction method based on the sampled values. The performance
of SamACO is tested using continuous numerical functions
with unimodal and multimodal features. Compared with some
state-of-the-art algorithms, including traditional ant-based algorithms
and representative computational intelligence algorithms
for continuous optimization, the performance of SamACO is competitive and
promising.
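The three steps above can be sketched as follows. This is an illustrative simplification: the function names, the pheromone rule, and all parameter values are assumptions, not the paper's specification:

```python
import numpy as np

def samaco_step(f, lo, hi, n_vars=2, n_values=10, n_ants=5, rho=0.1,
                rng=None):
    """One iteration of a simplified SamACO-style loop:
    (1) sample candidate values per variable (discretizing the
        continuous space),
    (2) ants build solutions by pheromone-biased selection of one
        candidate value per variable,
    (3) evaporate pheromone and deposit on the values the best ant used.
    """
    if rng is None:
        rng = np.random.default_rng(3)
    values = rng.uniform(lo, hi, size=(n_vars, n_values))  # step 1
    tau = np.ones((n_vars, n_values))                      # pheromone levels
    solutions, fitness = [], []
    for _ in range(n_ants):                                # step 2
        idx = [rng.choice(n_values, p=tau[i] / tau[i].sum())
               for i in range(n_vars)]
        x = values[np.arange(n_vars), idx]
        solutions.append(idx)
        fitness.append(f(x))
    best = int(np.argmin(fitness))                         # step 3
    tau *= (1 - rho)                                       # evaporation
    for i, j in enumerate(solutions[best]):
        tau[i, j] += 1.0                                   # deposit
    return values[np.arange(n_vars), solutions[best]], fitness[best]

x, fx = samaco_step(lambda x: float(np.sum(x ** 2)), -5.0, 5.0)
print(len(x), fx >= 0.0)  # 2 True
```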
Maximum Likelihood-based Online Adaptation of Hyper-parameters in CMA-ES
The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is widely
accepted as a robust derivative-free continuous optimization algorithm for
non-linear and non-convex optimization problems. CMA-ES is well known to be
almost parameterless, meaning that the user is expected to tune only one
hyper-parameter, the population size. In this paper, we propose a
principled approach called self-CMA-ES to achieve the online adaptation of
CMA-ES hyper-parameters in order to improve its overall performance.
Experimental results show that for larger-than-default population size, the
default settings of hyper-parameters of CMA-ES are far from being optimal, and
that self-CMA-ES allows for dynamically approaching optimal settings.
Comment: 13th International Conference on Parallel Problem Solving from Nature
(PPSN 2014), 2014.
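The maximum-likelihood idea can be illustrated with a toy sketch. This is not the paper's method (which runs an auxiliary CMA-ES over the hyper-parameters); here a grid search over a single learning rate and a diagonal covariance model stand in, and every name and value is an assumption:

```python
import numpy as np

def loglik_of_steps(steps, c1):
    """Log-likelihood of recent successful steps under a diagonal
    covariance updated with learning rate c1 (toy model)."""
    # Toy update: blend the old unit variances with the steps' empirical
    # second moments, weighted by the candidate learning rate c1.
    C = (1 - c1) * np.ones(steps.shape[1]) + c1 * np.mean(steps ** 2, axis=0)
    return float(np.sum(-0.5 * (np.log(2 * np.pi * C) + steps ** 2 / C)))

def adapt_c1(steps, grid=(0.05, 0.1, 0.2, 0.4)):
    """Pick the learning rate whose updated model best explains the
    recent successful steps, in the spirit of maximum-likelihood
    hyper-parameter adaptation (grid search stands in for the paper's
    auxiliary CMA-ES)."""
    return max(grid, key=lambda c1: loglik_of_steps(steps, c1))

rng = np.random.default_rng(4)
steps = rng.standard_normal((20, 5)) * 2.0  # steps with variance ~4
print(adapt_c1(steps))
```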
KL-based Control of the Learning Schedule for Surrogate Black-Box Optimization
This paper investigates the control of an ML component within the Covariance
Matrix Adaptation Evolution Strategy (CMA-ES) devoted to black-box
optimization. A known weakness of CMA-ES is its sample complexity: the number of
evaluations of the objective function needed to approximate the global optimum.
This weakness is commonly addressed through surrogate optimization, learning an
estimate of the objective function a.k.a. surrogate model, and replacing most
evaluations of the true objective function with the (inexpensive) evaluation of
the surrogate model. This paper presents a principled control of the learning
schedule (when to relearn the surrogate model), based on the Kullback-Leibler
divergence between the current search distribution and the training
distribution of the previous surrogate model. The experimental validation of the proposed
approach shows significant performance gains on a comprehensive set of
ill-conditioned benchmark problems, compared to the best state of the art
including the quasi-Newton high-precision BFGS method.
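The control rule lends itself to a compact sketch: since both the current search distribution and the surrogate's training distribution are Gaussian, their KL divergence has a closed form and can gate relearning. The `should_relearn` helper and its threshold are hypothetical, not the paper's calibration:

```python
import numpy as np

def kl_gauss(m0, S0, m1, S1):
    """KL divergence KL(N(m0, S0) || N(m1, S1)) between two Gaussians,
    using the standard closed form."""
    k = m0.shape[0]
    S1_inv = np.linalg.inv(S1)
    d = m1 - m0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def should_relearn(m_now, S_now, m_train, S_train, threshold=0.5):
    """Relearn the surrogate when the current search distribution has
    drifted too far (in KL) from the distribution the surrogate was
    trained under; the threshold value here is a placeholder."""
    return kl_gauss(m_now, S_now, m_train, S_train) > threshold

m = np.zeros(3)
S = np.eye(3)
print(should_relearn(m, S, m, S))        # False: identical distributions
print(should_relearn(m + 2.0, S, m, S))  # True: mean has drifted
```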