A Computationally Efficient Limited Memory CMA-ES for Large Scale Optimization
We propose a computationally efficient limited memory Covariance Matrix
Adaptation Evolution Strategy for large scale optimization, which we call the
LM-CMA-ES. The LM-CMA-ES is a stochastic, derivative-free algorithm for the
numerical optimization of non-linear, non-convex problems in continuous
domains. Inspired by the limited memory BFGS method of Liu and
Nocedal (1989), the LM-CMA-ES samples candidate solutions according to a
covariance matrix reproduced from direction vectors selected during the
optimization process. The decomposition of the covariance matrix into Cholesky
factors allows the time and memory complexity of the sampling to be reduced to
O(mn), where n is the number of decision variables and m is the number of
stored direction vectors. When n is large (e.g., n > 1000), even relatively
small values of m (e.g., m = 5) are sufficient to efficiently solve fully
non-separable problems and to reduce the overall run-time.
Comment: Genetic and Evolutionary Computation Conference (GECCO'2014), 2014
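To illustrate why the limited-memory factorization makes sampling cheap, the following sketch (hypothetical names; heavily simplified relative to the actual LM-CMA-ES, which also selects and manages which direction vectors to store) draws a candidate by applying m rank-one transformations to a standard normal vector, costing O(mn) instead of the O(n^2) of multiplying by a dense Cholesky factor:

```python
import numpy as np

def sample_candidate(mean, sigma, dirs, coeffs, a, rng):
    """Draw x = mean + sigma * A z, where the Cholesky-like factor A is
    represented implicitly by m stored direction vectors.

    dirs:   list of m direction vectors v_i (each of length n)
    coeffs: list of m scalars b_i paired with the vectors
    a:      scalar shrink factor applied at each rank-one step
    Each step costs O(n), so one sample costs O(m n) overall.
    """
    z = rng.standard_normal(mean.size)
    for v, b in zip(dirs, coeffs):
        z = a * z + b * v * (v @ z)   # rank-one transform, O(n)
    return mean + sigma * z

rng = np.random.default_rng(0)
n, m = 1000, 5
dirs = [rng.standard_normal(n) / np.sqrt(n) for _ in range(m)]
x = sample_candidate(np.zeros(n), 0.5, dirs, [0.1] * m, 0.9, rng)
```

With m fixed and small, both the stored state (m vectors of length n) and the per-sample cost grow only linearly in n.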
Efficient Covariance Matrix Update for Variable Metric Evolution Strategies
Randomized direct search algorithms for continuous domains, such as Evolution Strategies, are basic tools in machine learning. They are especially needed when the gradient of an objective function (e.g., loss, energy, or reward function) cannot be computed or estimated efficiently. Application areas include supervised and reinforcement learning as well as model selection. These randomized search strategies often rely on normally distributed additive variations of candidate solutions. In order to search efficiently in non-separable and ill-conditioned landscapes, the covariance matrix of the normal distribution must be adapted, amounting to a variable metric method. Consequently, Covariance Matrix Adaptation (CMA) is considered state-of-the-art in Evolution Strategies. In order to sample from the normal distribution, the adapted covariance matrix needs to be decomposed, requiring in general Θ(n³) operations, where n is the search space dimension. We propose a new update mechanism which can replace a rank-one covariance matrix update and the computationally expensive decomposition of the covariance matrix. The newly developed update rule reduces the computational complexity of the rank-one covariance matrix adaptation to Θ(n²) without resorting to outdated distributions. We derive new versions of the elitist Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and the multi-objective CMA-ES. These algorithms are equivalent to the original procedures except that the update step for the variable metric distribution scales better in the problem dimension. We also introduce a simplified variant of the non-elitist CMA-ES with the incremental covariance matrix update and investigate its performance. Apart from the reduced time complexity of the distribution update, the algebraic computations involved in all new algorithms are simpler compared to the original versions.
The new update rule improves the performance of the CMA-ES for large-scale machine learning problems in which the objective function can be evaluated quickly.
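The kind of incremental factor update the abstract describes can be sketched as follows (a sketch based on the rank-one Cholesky-factor update of Igel, Suttorp, and Hansen; variable names are illustrative): if the covariance update is C' = αC + βvvᵀ with v = Az, the factor A satisfying C = AAᵀ can be updated directly in Θ(n²) operations, avoiding any matrix decomposition:

```python
import numpy as np

def cholesky_rank_one_update(A, z, alpha, beta):
    """Update the factor A (with C = A @ A.T) so that the new factor
    satisfies A' A'^T = alpha * C + beta * v v^T, where v = A z.
    Costs Theta(n^2): one matrix-vector product plus one outer product."""
    v = A @ z
    z2 = z @ z
    c = (np.sqrt(alpha) / z2) * (np.sqrt(1.0 + (beta / alpha) * z2) - 1.0)
    return np.sqrt(alpha) * A + c * np.outer(v, z)

# sanity check: the updated factor reproduces the updated covariance
rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
z = rng.standard_normal(n)
alpha, beta = 0.9, 0.2
A_new = cholesky_rank_one_update(A, z, alpha, beta)
C_new = alpha * (A @ A.T) + beta * np.outer(A @ z, A @ z)
```

Expanding A'A'ᵀ term by term shows the scalar c is chosen exactly so that the cross and squared terms sum to βvvᵀ, which is why no re-decomposition of C is ever needed.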
06061 Abstracts Collection -- Theory of Evolutionary Algorithms
From 05.02.06 to 10.02.06, the Dagstuhl Seminar 06061 ``Theory of Evolutionary Algorithms'' was held in the International Conference and Research Center (IBFI),
Schloss Dagstuhl.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available
Non-elitist Evolutionary Multi-objective Optimizers Revisited
Since around 2000, it has been considered that elitist evolutionary
multi-objective optimization algorithms (EMOAs) always outperform non-elitist
EMOAs. This paper revisits the performance of non-elitist EMOAs for
bi-objective continuous optimization when using an unbounded external archive.
This paper examines the performance of EMOAs with two elitist and one
non-elitist environmental selections. The performance of EMOAs is evaluated on
the bi-objective BBOB problem suite provided by the COCO platform. In contrast
to conventional wisdom, results show that non-elitist EMOAs with particular
crossover methods perform remarkably well on the bi-objective BBOB problems
with many decision variables when using the unbounded external archive. This
paper also analyzes the properties of the non-elitist selection.
Comment: This is an accepted version of a paper published in the proceedings of GECCO 201
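To make the role of the unbounded external archive concrete, here is a minimal sketch (illustrative names; minimization assumed) that retains every non-dominated objective vector found during a run, independently of the population's environmental selection:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, point):
    """Insert `point` unless an archived vector dominates it, and drop
    any archived vectors that `point` dominates. The archive is
    unbounded: it is never truncated to a fixed size."""
    if any(dominates(p, point) for p in archive):
        return archive
    return [p for p in archive if not dominates(point, p)] + [point]

archive = []
for pt in [(2.0, 2.0), (1.0, 3.0), (3.0, 1.0), (1.5, 1.5), (2.5, 2.5)]:
    archive = update_archive(archive, pt)
# archive now holds the mutually non-dominated points
```

Because the archive keeps every non-dominated solution ever generated, a non-elitist parent population can explore freely without the final approximation set losing good points.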
Convergence Analysis of the Hessian Estimation Evolution Strategy
Algorithms in the class of Hessian Estimation Evolution Strategies
(HE-ESs) update the covariance matrix of their sampling distribution by
directly estimating the curvature of the objective function. The approach is
practically efficient, as attested by respectable performance on the BBOB
testbed, even on rather irregular functions.
In this paper we formally prove two strong guarantees for the (1+4)-HE-ES, a
minimal elitist member of the family: stability of the covariance matrix
update, and as a consequence, linear convergence on all convex quadratic
problems at a rate that is independent of the problem instance.
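The curvature-estimation idea can be illustrated with a symmetric finite difference (a simplified sketch, not the exact HE-ES update, which folds such estimates into the covariance matrix): evaluating the objective at a mirrored pair m ± d yields the curvature along d, and on a convex quadratic the estimate is exact:

```python
import numpy as np

def curvature_along(f, m, d):
    """Second-order central difference: estimates d^T H d / ||d||^2,
    the curvature of f at m along the direction d."""
    return (f(m + d) + f(m - d) - 2.0 * f(m)) / (d @ d)

# on a convex quadratic f(x) = 0.5 x^T H x the estimate recovers H exactly,
# since f(m+d) + f(m-d) - 2 f(m) = d^T H d with no higher-order error
H = np.diag([1.0, 100.0])            # ill-conditioned Hessian
f = lambda x: 0.5 * x @ H @ x
m = np.array([0.3, -0.2])
c1 = curvature_along(f, m, np.array([1.0, 0.0]))   # curvature 1.0
c2 = curvature_along(f, m, np.array([0.0, 1.0]))   # curvature 100.0
```

This exactness on quadratics is what makes convergence-rate guarantees on convex quadratic problems, independent of the conditioning of the instance, plausible for curvature-adapted strategies.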