The CMA Evolution Strategy: A Tutorial
This tutorial introduces the CMA Evolution Strategy (ES), where CMA stands
for Covariance Matrix Adaptation. The CMA-ES is a stochastic, or randomized,
method for real-parameter (continuous domain) optimization of non-linear,
non-convex functions. We try to motivate and derive the algorithm from
intuitive concepts and from requirements of non-linear, non-convex search in
continuous domain. Comment: ArXiv e-prints, arXiv:1604.xxxx
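As a rough illustration of the sampling, selection, and recombination loop at the heart of such strategies, the following sketch draws candidates from a multivariate normal distribution, ranks them, and recombines the best into a new mean. It is a deliberately simplified toy, not the tutorial's CMA-ES: covariance adaptation and cumulative step-size control are omitted, and the sigma decay schedule is an arbitrary placeholder of our own.

```python
import numpy as np

def simple_es(f, x0, sigma=0.5, lam=12, iters=200, seed=0):
    """Toy sketch of the CMA-ES sampling loop: sample candidates from a
    multivariate normal, rank them by fitness, and recombine the best mu
    into a new mean. The covariance matrix adaptation and step-size
    control of the real CMA-ES are omitted here for brevity."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    mu = lam // 2
    # log-decreasing recombination weights, a common choice in CMA-ES
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()
    m = np.array(x0, dtype=float)
    C = np.eye(n)                      # covariance kept fixed in this sketch
    for _ in range(iters):
        A = np.linalg.cholesky(C)
        X = m + sigma * (rng.standard_normal((lam, n)) @ A.T)
        order = np.argsort([f(x) for x in X])   # minimisation
        m = w @ X[order[:mu]]                   # weighted recombination
        sigma *= 0.98                           # crude decay, not real CSA
    return m

# Usage: minimise the sphere function f(x) = ||x||^2
best = simple_es(lambda x: float(x @ x), [3.0, -2.0])
```

Even this stripped-down loop locates the optimum of a smooth unimodal function; what the tutorial adds on top is precisely the covariance and step-size adaptation that makes the search work on ill-conditioned, non-separable problems.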
Intelligent joint channel parameter estimation techniques for mobile wireless positioning applications
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Mobile wireless positioning has recently received great attention. For mobile wireless
communication networks, an inherently suitable approach is to obtain the parameters
that are used for positioning estimates from the radio signal measurements between a
mobile device and one or more fixed base stations. However, obtaining accurate estimates of these location-dependent channel parameters is a challenging task. The focus of this thesis is on the estimation of these channel parameters for mobile wireless positioning
applications. In particular, we investigate novel estimators that jointly estimate
more than one type of channel parameter. We first perform a comprehensive critical
review of the most recent and popular joint channel parameter estimation techniques.
Secondly, we improve a state-of-the-art technique, namely the Space Alternating Generalised Expectation maximisation (SAGE) algorithm, by employing adaptive interference cancellation to improve the estimation accuracy of weaker paths. Thirdly, a novel intelligent channel parameter estimation technique using Evolution Strategy (ES) is proposed to overcome the drawbacks of the existing iterative maximum likelihood methods. Furthermore, given that in reality it is difficult to obtain the number of multipath components in advance, we propose a two-tier Hierarchically Organised ES to jointly estimate the number of multipath components as well as the channel parameters. Finally, we extend the proposed ES method to further estimate the Doppler shift in mobile environments. Our proposed intelligent joint channel estimation techniques are shown to exhibit excellent performance even under low Signal to Noise Ratio (SNR) channel conditions, as well as robustness against uncertainties in initialisation. EPSRC and Cambridge Silicon Radi
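The thesis itself is not excerpted here, so as a hedged illustration of the general idea only, the sketch below runs a generic elitist (mu + lambda) evolution strategy on a hypothetical two-parameter signal model (jointly estimating the amplitude and delay of a tone). All function names, parameters, and the toy signal are our own and are not taken from the thesis.

```python
import numpy as np

def plus_es(loss, x0, sigma=0.2, mu=5, lam=20, iters=150, seed=1):
    """Generic (mu + lam) evolution strategy: parents compete with their
    offspring for survival, so the best solution found so far is never
    lost. A toy sketch, not the thesis's Hierarchically Organised ES."""
    rng = np.random.default_rng(seed)
    pop = x0 + sigma * rng.standard_normal((mu, len(x0)))
    for _ in range(iters):
        parents = pop[rng.integers(0, mu, size=lam)]
        kids = parents + sigma * rng.standard_normal(parents.shape)
        both = np.vstack([pop, kids])
        both = both[np.argsort([loss(x) for x in both])]
        pop = both[:mu]                 # elitist truncation selection
        sigma *= 0.97                   # simple decay schedule (our choice)
    return pop[0]

# Toy joint estimation: recover amplitude a and delay d of a noiseless tone
t = np.linspace(0.0, 1.0, 200)
obs = 1.7 * np.sin(2 * np.pi * 5 * (t - 0.03))

def loss(x):
    a, d = x
    return float(np.sum((obs - a * np.sin(2 * np.pi * 5 * (t - d))) ** 2))

a_hat, d_hat = plus_es(loss, np.array([1.0, 0.0]))
```

Because the ES only needs loss evaluations, the same loop applies unchanged to a genuine multipath likelihood, which is the property that motivates its use over gradient-based iterative maximum likelihood methods.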
Efficient Covariance Matrix Update for Variable Metric Evolution Strategies
Randomized direct search algorithms for continuous domains, such as Evolution Strategies, are basic tools in machine learning. They are especially needed when the gradient of an objective function (e.g., loss, energy, or reward function) cannot be computed or estimated efficiently. Application areas include supervised and reinforcement learning as well as model selection. These randomized search strategies often rely on normally distributed additive variations of candidate solutions. In order to search efficiently in non-separable and ill-conditioned landscapes, the covariance matrix of the normal distribution must be adapted, amounting to a variable metric method. Consequently, Covariance Matrix Adaptation (CMA) is considered state-of-the-art in Evolution Strategies. In order to sample the normal distribution, the adapted covariance matrix needs to be decomposed, requiring in general O(n^3) operations, where n is the search space dimension. We propose a new update mechanism which can replace a rank-one covariance matrix update and the computationally expensive decomposition of the covariance matrix. The newly developed update rule reduces the computational complexity of the rank-one covariance matrix adaptation to O(n^2) without resorting to outdated distributions. We derive new versions of the elitist Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and the multi-objective CMA-ES. These algorithms are equivalent to the original procedures except that the update step for the variable metric distribution scales better in the problem dimension. We also introduce a simplified variant of the non-elitist CMA-ES with the incremental covariance matrix update and investigate its performance. Apart from the reduced time complexity of the distribution update, the algebraic computations involved in all new algorithms are simpler compared to the original versions.
The new update rule improves the performance of the CMA-ES for large-scale machine learning problems in which the objective function can be evaluated quickly.
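The key idea reported in the abstract, updating a factor of the covariance matrix instead of re-decomposing it, can be sketched as follows. If C = A A^T and the rank-one update is C' = alpha * C + beta * v v^T, then A can be updated to a valid factor of C' in closed form. The function below follows that update rule; the variable names and demo values are ours, and a production implementation would also maintain A^{-1} incrementally so that the solve below stays O(n^2).

```python
import numpy as np

def cholesky_rank_one_update(A, v, alpha, beta):
    """Given A with C = A @ A.T, return A' satisfying
    A' @ A'.T = alpha * C + beta * outer(v, v) in O(n^2) arithmetic,
    avoiding a fresh O(n^3) decomposition of the covariance matrix."""
    w = np.linalg.solve(A, v)          # w = A^{-1} v (the paper tracks
                                       # A^{-1} incrementally; we solve
                                       # directly for brevity)
    nw2 = float(w @ w)
    factor = (np.sqrt(alpha) / nw2) * (np.sqrt(1.0 + (beta / alpha) * nw2) - 1.0)
    return np.sqrt(alpha) * A + factor * np.outer(v, w)

# Demo with arbitrary values: update a factor of a random SPD matrix
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
C = M @ M.T + 4.0 * np.eye(4)          # symmetric positive definite
A = np.linalg.cholesky(C)
v = rng.standard_normal(4)
Ap = cholesky_rank_one_update(A, v, 0.9, 0.3)
```

Note that the updated factor is no longer triangular; what matters for sampling is only that Ap @ z with standard-normal z has the covariance of the updated distribution.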
The SOS Platform: Designing, Tuning and Statistically Benchmarking Optimisation Algorithms
We present Stochastic Optimisation Software (SOS), a Java platform facilitating the algorithmic design process and the evaluation of metaheuristic optimisation algorithms. SOS reduces the burden of coding miscellaneous methods for dealing with several bothersome and time-demanding tasks such as parameter tuning, implementation of comparison algorithms and testbed problems, collecting and processing data to display results, measuring algorithmic overhead, etc. SOS provides numerous off-the-shelf methods including: (1) customised implementations of statistical tests, such as the Wilcoxon rank-sum test and the Holm–Bonferroni procedure, for comparing the performances of optimisation algorithms and automatically generating result tables in PDF and LaTeX formats; (2) the implementation of an original advanced statistical routine for accurately comparing pairs of stochastic optimisation algorithms; (3) the implementation of a novel testbed suite for continuous optimisation, derived from the IEEE CEC 2014 benchmark, allowing for controlled activation of the rotation on each testbed function. Moreover, we briefly comment on the current state of the literature in stochastic optimisation and highlight similarities shared by modern metaheuristics inspired by nature. We argue that the vast majority of these algorithms are simply a reformulation of the same methods and that metaheuristics for optimisation should simply be treated as stochastic processes, with less emphasis on the inspiring metaphor behind them.
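To illustrate one of the statistical routines named above, here is a minimal, dependency-free sketch of the Holm–Bonferroni step-down procedure. It is a generic textbook implementation, not SOS's customised Java code, and the p-values in the usage example are made up (e.g., as might come from several Wilcoxon rank-sum tests).

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Holm step-down procedure: visit p-values in ascending order,
    comparing the i-th smallest (0-indexed rank) against alpha / (m - i);
    stop at the first failure. Returns reject decisions in input order."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break        # step-down: all larger p-values also fail
    return reject

# Usage: four pairwise comparisons at family-wise alpha = 0.05
decisions = holm_bonferroni([0.01, 0.04, 0.03, 0.005])
print(decisions)  # → [True, False, False, True]
```

Unlike the plain Bonferroni correction, the step-down thresholds loosen as hypotheses are rejected, which is why Holm's procedure is uniformly more powerful while still controlling the family-wise error rate.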
Optimum Tracking with Evolution Strategies
Evolutionary algorithms are frequently applied to dynamic optimization problems in which the objective varies with time. It is desirable to gain an improved understanding of the influence of different genetic operators and of the parameters of a strategy on its tracking performance. An approach that has proven useful in the past is to mathematically analyze the strategy's behavior in simple, idealized environments. The present paper investigates the performance of a multiparent evolution strategy that employs cumulative step length adaptation for an optimization task in which the target moves linearly with uniform speed. Scaling laws that quite accurately describe the behavior of the strategy and that greatly contribute to its understanding are derived. It is shown that, in contrast to previously obtained results for a randomly moving target, cumulative step length adaptation fails to achieve optimal step lengths if the target moves in a linear fashion. Implications for the choice of population size parameters are discussed.
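The scenario analysed in the paper, a multiparent ES with cumulative step length adaptation chasing a target that moves linearly with uniform speed, can be sketched as follows. The strategy constants are standard textbook choices rather than the paper's exact parametrisation, and the dimension, speed, and iteration budget are arbitrary values of our own.

```python
import numpy as np

def csa_es_track(iters=400, n=5, lam=10, speed=0.02, seed=3):
    """(mu/mu, lambda)-ES with cumulative step length adaptation (CSA)
    tracking an optimum that drifts linearly with uniform speed.
    Returns the final distance to the target and the final step size."""
    rng = np.random.default_rng(seed)
    mu = lam // 2
    w = np.full(mu, 1.0 / mu)                  # intermediate recombination
    mueff = 1.0 / np.sum(w ** 2)
    c_sigma = (mueff + 2) / (n + mueff + 5)    # cumulation constant
    d_sigma = 1 + c_sigma                      # damping
    chi_n = np.sqrt(n) * (1 - 1 / (4 * n))     # approx. E||N(0, I)||
    m, sigma, p = np.zeros(n), 0.5, np.zeros(n)
    target = np.zeros(n)
    direction = np.ones(n) / np.sqrt(n)
    for _ in range(iters):
        target = target + speed * direction    # target moves linearly
        Z = rng.standard_normal((lam, n))
        X = m + sigma * Z
        order = np.argsort([np.sum((x - target) ** 2) for x in X])
        z_mean = w @ Z[order[:mu]]             # recombined mutation vector
        m = m + sigma * z_mean
        # cumulate selected steps, then compare path length to its
        # expectation under random selection to adapt sigma
        p = (1 - c_sigma) * p + np.sqrt(c_sigma * (2 - c_sigma) * mueff) * z_mean
        sigma *= np.exp((c_sigma / d_sigma) * (np.linalg.norm(p) / chi_n - 1))
    return np.linalg.norm(m - target), sigma

dist, sigma = csa_es_track()
```

In such a simulation the strategy settles into a steady state that lags the target at a roughly constant distance; the paper's point is that the step length CSA settles on in this linear-drift regime is not the optimal one, unlike in the randomly moving case.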