Robust Covariance Adaptation in Adaptive Importance Sampling
Importance sampling (IS) is a Monte Carlo methodology that allows for
approximation of a target distribution using weighted samples generated from
another proposal distribution. Adaptive importance sampling (AIS) implements an
iterative version of IS which adapts the parameters of the proposal
distribution in order to improve estimation of the target. While the adaptation
of the location (mean) of the proposals has been widely studied, an important
challenge of AIS relates to the difficulty of adapting the scale parameter
(covariance matrix). In the case of weight degeneracy, adapting the covariance
matrix using the empirical covariance results in a singular matrix, which leads
to poor performance in subsequent iterations of the algorithm. In this paper,
we propose a novel scheme which exploits recent advances in the IS literature
to prevent the so-called weight degeneracy. The method efficiently adapts the
covariance matrix of a population of proposal distributions and achieves a
significant performance improvement in high-dimensional scenarios. We validate
the new method through computer simulations.
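To make the degeneracy issue concrete, below is a minimal sketch of one generic Gaussian AIS loop in which the proposal covariance is taken as the weighted empirical covariance plus a small ridge term. The ridge is a simple, standard guard against the singular covariance that weight degeneracy produces; it is not the scheme proposed in the paper, and all names and constants (target_logpdf, ridge, the Gaussian target) are illustrative assumptions.

```python
# A minimal Gaussian AIS sketch: the ridge term is a simple guard against
# the singular empirical covariance caused by weight degeneracy (it is an
# illustrative choice, not the paper's proposed scheme).
import numpy as np

def target_logpdf(x):
    # Example target: standard Gaussian in d dimensions (up to a constant).
    return -0.5 * np.sum(x**2, axis=1)

def ais(d=10, n=500, iters=20, ridge=1e-2, rng=np.random.default_rng(0)):
    mu, cov = np.zeros(d), 4.0 * np.eye(d)        # initial proposal N(mu, cov)
    for _ in range(iters):
        L = np.linalg.cholesky(cov)
        x = mu + rng.standard_normal((n, d)) @ L.T
        # Proposal log-density (constants cancel after weight normalisation).
        sol = np.linalg.solve(L, (x - mu).T)
        logq = -0.5 * np.sum(sol**2, axis=0) - np.sum(np.log(np.diag(L)))
        logw = target_logpdf(x) - logq            # log importance weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        mu = w @ x                                # weighted mean
        c = (x - mu).T @ ((x - mu) * w[:, None])  # weighted empirical covariance
        cov = c + ridge * np.eye(d)               # regularise against degeneracy
    return mu, cov
```

When one or two weights dominate, the weighted empirical covariance has numerical rank one or two; the ridge keeps the Cholesky factorisation well defined in the next iteration, which is exactly the failure mode the paper's scheme is designed to avoid.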
The CMA Evolution Strategy: A Tutorial
This tutorial introduces the CMA Evolution Strategy (ES), where CMA stands
for Covariance Matrix Adaptation. The CMA-ES is a stochastic, or randomized,
method for real-parameter (continuous domain) optimization of non-linear,
non-convex functions. We try to motivate and derive the algorithm from
intuitive concepts and from requirements of non-linear, non-convex search in
continuous domain.
Comment: ArXiv e-prints, arXiv:1604.xxxx
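As a minimal companion to the tutorial, the following sketch shows the core sample-select-recombine loop with a rank-mu covariance update only. The full CMA-ES derived in the tutorial additionally uses evolution paths and cumulative step-size adaptation; the learning rate c_mu below is a crude illustrative choice, not the tutorial's default.

```python
# Stripped-down CMA idea: sample from N(m, sigma^2 C), rank by f, recombine
# the mean, and pull C toward the weighted empirical covariance of the
# selected steps (rank-mu update only; no evolution paths, no step-size rule).
import numpy as np

def cma_core(f, m, sigma=0.5, iters=250, rng=np.random.default_rng(1)):
    d = len(m)
    lam = 4 + int(3 * np.log(d))              # default-style population size
    mu = lam // 2
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                              # positive recombination weights
    c_mu = min(1.0, 2.0 * mu / d**2)          # illustrative learning rate
    C = np.eye(d)
    for _ in range(iters):
        A = np.linalg.cholesky(C)
        x = m + sigma * rng.standard_normal((lam, d)) @ A.T
        idx = np.argsort([f(xi) for xi in x])[:mu]   # comparison-based selection
        y = (x[idx] - m) / sigma              # selected steps
        m = m + sigma * (w @ y)               # weighted mean recombination
        C = (1 - c_mu) * C + c_mu * (y.T @ (y * w[:, None]))  # rank-mu update
    return m

print(cma_core(lambda v: np.sum(v**2), np.full(5, 3.0)))  # moves m toward 0
```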
A Computationally Efficient Limited Memory CMA-ES for Large Scale Optimization
We propose a computationally efficient limited memory Covariance Matrix
Adaptation Evolution Strategy for large scale optimization, which we call the
LM-CMA-ES. The LM-CMA-ES is a stochastic, derivative-free algorithm for
numerical optimization of non-linear, non-convex problems in continuous
domain. Inspired by the limited memory BFGS method of Liu and Nocedal (1989),
the LM-CMA-ES samples candidate solutions according to a covariance matrix
reproduced from m direction vectors selected during the optimization process.
The decomposition of the covariance matrix into Cholesky factors makes it
possible to reduce the time and memory complexity of the sampling to O(mn),
where n is the number of decision variables. When n is large (e.g., n > 1000),
even relatively small values of m are sufficient to efficiently solve fully
non-separable problems and to reduce the overall run-time.
Comment: Genetic and Evolutionary Computation Conference (GECCO'2014) (2014)
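The O(mn) sampling can be made concrete as follows: rather than storing an explicit n x n factor, a sample is produced by chaining rank-one transforms built from the m stored direction vectors, each costing O(n). This is a sketch of the reconstruction idea under the rank-one update C <- (1 - c1) C + c1 v v^T; the exact bookkeeping of which direction vectors LM-CMA-ES stores and replaces is omitted.

```python
# Limited-memory sampling sketch: the factor A of C is never formed; instead
# each stored direction v contributes one rank-one transform z -> a z + b v (v.z),
# chosen so that (a I + b v v^T)(a I + b v v^T)^T = (1 - c1) I + c1 v v^T.
import numpy as np

def sample(mean, sigma, vs, c1, rng):
    """Draw x ~ mean + sigma * A z, with A represented implicitly by the
    stored direction vectors vs (oldest to newest)."""
    z = rng.standard_normal(len(mean))
    a = np.sqrt(1.0 - c1)
    for v in vs:
        v2 = v @ v
        b = (a / v2) * (np.sqrt(1.0 + c1 * v2 / (1.0 - c1)) - 1.0)
        z = a * z + b * v * (v @ z)   # O(n) per stored vector
    return mean + sigma * z
```

Chaining the m transforms costs O(mn) time and O(mn) memory per sample, versus the O(n^2) of sampling with an explicitly stored factor.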
Adaptive Ranking Based Constraint Handling for Explicitly Constrained Black-Box Optimization
A novel explicit constraint handling technique for the covariance matrix
adaptation evolution strategy (CMA-ES) is proposed. The proposed constraint
handling exhibits two invariance properties. One is the invariance to arbitrary
element-wise increasing transformation of the objective and constraint
functions. The other is the invariance to arbitrary affine transformation of
the search space. The proposed technique virtually transforms a constrained
optimization problem into an unconstrained optimization problem by considering
an adaptive weighted sum of the ranking of the objective function values and
the ranking of the constraint violations, where each violation is measured by
the Mahalanobis distance from the candidate solution to its projection onto
the boundary of the constraints. Simulation results show that the CMA-ES with
the proposed constraint handling exhibits the affine invariance and performs
similarly to the CMA-ES on unconstrained counterparts.
Comment: 9 pages
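For intuition, a simplified version of the ranking-based fitness for linear constraints a_j^T x <= b_j might look as follows. The Mahalanobis distance (with respect to the sampling covariance C) from a candidate to the boundary of a violated linear constraint has the closed form max(0, a_j^T x - b_j) / sqrt(a_j^T C a_j); the weight alpha and its adaptation are illustrative placeholders here, not the paper's rule.

```python
# Simplified ranking-based constrained fitness: order candidates by a weighted
# sum of the objective ranking and the constraint-violation ranking, with
# violations of linear constraints measured in the Mahalanobis metric of C.
import numpy as np
from scipy.stats import rankdata

def ranked_fitness(X, f, A, b, C, alpha):
    """X: (n_pop, d) candidates; A, b: linear constraints A x <= b;
    C: sampling covariance; alpha: adaptive weight (placeholder)."""
    fvals = np.array([f(x) for x in X])
    denom = np.sqrt(np.einsum('jd,dk,jk->j', A, C, A))   # sqrt(a_j^T C a_j)
    viol = np.maximum(X @ A.T - b, 0.0) / denom          # distances to boundaries
    total_viol = viol.sum(axis=1)
    # Only rankings enter the fitness, which gives the invariance to
    # element-wise increasing transformations of f and of the violations.
    return rankdata(fvals) + alpha * rankdata(total_viol)
```

Because only rankings of the objective values and of the violations enter the fitness, any element-wise increasing transformation of either leaves the ordering, and hence the search behaviour, unchanged.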
Linear Convergence of Comparison-based Step-size Adaptive Randomized Search via Stability of Markov Chains
In this paper, we consider comparison-based adaptive stochastic algorithms
for solving numerical optimisation problems. We consider a specific subclass of
algorithms that we call comparison-based step-size adaptive randomized search
(CB-SARS), where the state variables at a given iteration are a vector of the
search space and a positive parameter, the step-size, typically controlling the
overall standard deviation of the underlying search distribution. We
investigate the linear convergence of CB-SARS on scaling-invariant objective
functions. Scaling-invariant functions preserve the ordering of points with
respect to their function value when the points are scaled with the same
positive parameter (the scaling is done w.r.t. a fixed reference point). This
class of functions includes norms composed with strictly increasing functions
as well as many non quasi-convex and non-continuous functions. On
scaling-invariant functions, we show the existence of a homogeneous Markov
chain, as a consequence of natural invariance properties of CB-SARS
(essentially scale-invariance and invariance to strictly increasing
transformation of the objective function). We then derive sufficient
conditions for global linear convergence of CB-SARS, expressed in terms of
different stability conditions of the normalised homogeneous Markov chain
(irreducibility, positivity, Harris recurrence, geometric ergodicity), and
thus define a general methodology for proving global linear convergence of
CB-SARS algorithms on scaling-invariant functions. As a by-product we provide
a connection between comparison-based adaptive stochastic algorithms and
Markov chain Monte Carlo algorithms.
Comment: SIAM Journal on Optimization, Society for Industrial and Applied
Mathematics, 201
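A concrete member of the CB-SARS class is the (1+1)-ES with the 1/5-th success rule: the state is the pair (x, sigma), updates use only comparisons of objective values, and on a scaling-invariant function such as a norm the log distance to the optimum decreases at an asymptotically constant rate, which is the linear convergence the Markov-chain machinery formalises. The step-size constants below are standard illustrative choices.

```python
# A (1+1)-ES with the 1/5-th success rule: a simple CB-SARS instance whose
# step-size drift is zero exactly at a 1/5 success probability
# (0.2 * 1/5 - 0.05 * 4/5 = 0). The printed norms decrease geometrically.
import numpy as np

def one_plus_one_es(f, x, sigma=1.0, iters=2000, rng=np.random.default_rng(2)):
    for t in range(iters):
        y = x + sigma * rng.standard_normal(len(x))
        if f(y) <= f(x):                   # comparison only: success
            x, sigma = y, sigma * np.exp(0.2)
        else:                              # failure: shrink the step-size
            sigma *= np.exp(-0.05)
        if t % 500 == 0:
            print(t, np.linalg.norm(x))    # roughly linear on a log scale
    return x

one_plus_one_es(lambda v: np.linalg.norm(v), np.full(10, 5.0))
```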
Maximum Likelihood-based Online Adaptation of Hyper-parameters in CMA-ES
The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is widely
accepted as a robust derivative-free continuous optimization algorithm for
non-linear and non-convex optimization problems. CMA-ES is well known to be
almost parameterless, meaning that only one hyper-parameter, the population
size, is typically left for the user to tune. In this paper, we propose a
principled approach called self-CMA-ES to achieve the online adaptation of
CMA-ES hyper-parameters in order to improve its overall performance.
Experimental results show that for larger-than-default population size, the
default settings of hyper-parameters of CMA-ES are far from being optimal, and
that self-CMA-ES allows for dynamically approaching optimal settings.
Comment: 13th International Conference on Parallel Problem Solving from Nature
(PPSN 2014) (2014)
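The maximum-likelihood flavour of the adaptation can be sketched as follows: each candidate value of a hyper-parameter (here the covariance learning rate c_mu, chosen purely for illustration) is scored by the log-likelihood that the resulting search distribution assigns to the best offspring of the current generation, and the top scorer is kept. self-CMA-ES itself runs an auxiliary CMA-ES at the meta level rather than the small fixed grid used here, so this is only a simplified approximation of the idea.

```python
# Hedged sketch of maximum-likelihood hyper-parameter scoring: try a few
# values of c_mu, form the covariance each would produce, and keep the value
# under which the current best offspring are most likely. The candidate grid
# and the choice of c_mu as the tuned hyper-parameter are illustrative.
import numpy as np
from scipy.stats import multivariate_normal

def pick_c_mu(m, sigma, C, y_sel, w, elite, candidates=(0.05, 0.1, 0.2, 0.4)):
    """y_sel: selected steps; w: recombination weights; elite: best offspring."""
    best, best_ll = None, -np.inf
    for c_mu in candidates:
        C_try = (1 - c_mu) * C + c_mu * (y_sel.T @ (y_sel * w[:, None]))
        ll = multivariate_normal(m, sigma**2 * C_try).logpdf(elite).sum()
        if ll > best_ll:
            best, best_ll = c_mu, ll
    return best
```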