
    Efficient Covariance Matrix Update for Variable Metric Evolution Strategies

Randomized direct search algorithms for continuous domains, such as Evolution Strategies, are basic tools in machine learning. They are especially needed when the gradient of an objective function (e.g., loss, energy, or reward function) cannot be computed or estimated efficiently. Application areas include supervised and reinforcement learning as well as model selection. These randomized search strategies often rely on normally distributed additive variations of candidate solutions. To search efficiently in non-separable and ill-conditioned landscapes, the covariance matrix of the normal distribution must be adapted, amounting to a variable metric method. Consequently, Covariance Matrix Adaptation (CMA) is considered state-of-the-art in Evolution Strategies. In order to sample from the normal distribution, the adapted covariance matrix needs to be decomposed, requiring in general Θ(n^3) operations, where n is the search space dimension. We propose a new update mechanism which can replace a rank-one covariance matrix update and the computationally expensive decomposition of the covariance matrix. The newly developed update rule reduces the computational complexity of the rank-one covariance matrix adaptation to Θ(n^2) without resorting to outdated distributions. We derive new versions of the elitist Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and the multi-objective CMA-ES. These algorithms are equivalent to the original procedures except that the update step for the variable metric distribution scales better with the problem dimension. We also introduce a simplified variant of the non-elitist CMA-ES with the incremental covariance matrix update and investigate its performance. Apart from the reduced time complexity of the distribution update, the algebraic computations involved in all new algorithms are simpler than in the original versions. The new update rule improves the performance of the CMA-ES for large-scale machine learning problems in which the objective function can be evaluated quickly.
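The key idea above, updating a Cholesky-type factor directly so that sampling never requires a fresh Θ(n^3) decomposition, fits in a few lines. Below is a minimal sketch of a rank-one factor update of the kind the abstract describes, following the closed form from Igel, Suttorp and Hansen's factor-update result; the function name and NumPy packaging are illustrative, not the paper's code.

```python
import numpy as np

def cholesky_rank_one_update(A, z, alpha, beta):
    """Return a factor A' with A' A'^T = alpha * A A^T + beta * v v^T,
    where v = A z, in O(n^2) time (no re-decomposition needed)."""
    v = A @ z
    z_norm_sq = float(z @ z)
    scale = (np.sqrt(alpha) / z_norm_sq) * (
        np.sqrt(1.0 + (beta / alpha) * z_norm_sq) - 1.0
    )
    return np.sqrt(alpha) * A + scale * np.outer(v, z)

# Sampling then stays O(n^2): x = mean + sigma * (A @ rng.standard_normal(n))
```

Expanding A' A'^T for A' = sqrt(alpha) A + scale * v z^T confirms the cross terms and the squared term sum exactly to beta * v v^T, which is why no full decomposition is needed.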

    Empirical Evaluation of Contextual Policy Search with a Comparison-based Surrogate Model and Active Covariance Matrix Adaptation

Contextual policy search (CPS) is a class of multi-task reinforcement learning algorithms that is particularly useful for robotic applications. A recent state-of-the-art method is Contextual Covariance Matrix Adaptation Evolution Strategies (C-CMA-ES), which is based on the standard black-box optimization algorithm CMA-ES. There are two useful extensions of CMA-ES that we transfer to C-CMA-ES and evaluate empirically: ACM-ES, which uses a comparison-based surrogate model, and aCMA-ES, which uses an active update of the covariance matrix. We show that the improvements from these methods can be impressive in terms of sample efficiency, although this is no longer relevant for the robotic domain.
Comment: Supplementary material for poster paper accepted at GECCO 2019; https://doi.org/10.1145/3319619.332193
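As a rough illustration of the "active" covariance update that aCMA-ES adds, the sketch below reinforces directions of the best-ranked steps and actively penalizes directions of the worst-ranked ones. The function name, equal within-group weighting, and the constants are illustrative assumptions, not the published defaults.

```python
import numpy as np

def active_covariance_update(C, best_steps, worst_steps, c_mu=0.1, c_neg=0.05):
    """Sketch of an active rank-mu update: positive weight on the best
    steps, negative weight on the worst, so the sampling distribution
    also shrinks along explicitly unsuccessful directions. Steps are the
    normalized vectors y_i = (x_i - mean) / sigma."""
    pos = sum(np.outer(y, y) for y in best_steps) / len(best_steps)
    neg = sum(np.outer(y, y) for y in worst_steps) / len(worst_steps)
    # Coefficients chosen so C is a fixed point when both averages equal C.
    return (1.0 - c_mu + c_neg) * C + c_mu * pos - c_neg * neg
```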

    Contents

We discuss the current minimisation strategies adopted by research projects involving the determination of parton distribution functions (PDFs) and fragmentation functions (FFs) through the training of neural networks. We present a short overview of a proton PDF determination obtained using the covariance matrix adaptation evolution strategy (CMA-ES) optimisation algorithm, and perform comparisons between the CMA-ES and the standard nodal genetic algorithm (NGA) adopted by the NNPDF collaboration.
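For readers unfamiliar with CMA-ES as a drop-in, derivative-free optimiser for neural-network parameters, a minimal ask-and-tell loop with Hansen's pycma package looks like the sketch below; the quadratic loss is a stand-in for the actual chi-squared objective of a PDF fit.

```python
# pip install cma
import cma
import numpy as np

def loss(theta):
    # Stand-in objective; a real PDF fit would evaluate a chi^2 against data.
    return float(np.sum((theta - 0.5) ** 2))

es = cma.CMAEvolutionStrategy(np.zeros(10), 0.3)  # x0, initial step-size
while not es.stop():
    candidates = es.ask()                         # sample lambda candidates
    es.tell(candidates, [loss(x) for x in candidates])
print(es.result.xbest, es.result.fbest)
```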

    Covariance Matrix Adaptation for the Rapid Illumination of Behavior Space

We focus on the challenge of finding a diverse collection of quality solutions on complex continuous domains. While quality diversity (QD) algorithms like Novelty Search with Local Competition (NSLC) and MAP-Elites are designed to generate a diverse range of solutions, these algorithms require a large number of evaluations for exploration of continuous spaces. Meanwhile, variants of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) are among the best-performing derivative-free optimizers in single-objective continuous domains. This paper proposes a new QD algorithm called Covariance Matrix Adaptation MAP-Elites (CMA-ME). Our new algorithm combines the self-adaptation techniques of CMA-ES with archiving and mapping techniques for maintaining diversity in QD. Results from experiments based on standard continuous optimization benchmarks show that CMA-ME finds better-quality solutions than MAP-Elites; similarly, results on the strategic game Hearthstone show that CMA-ME finds both a higher overall quality and broader diversity of strategies than both CMA-ES and MAP-Elites. Overall, CMA-ME more than doubles the performance of MAP-Elites on standard QD performance metrics. These results suggest that QD algorithms augmented with operators from state-of-the-art optimization algorithms can yield high-performing methods for simultaneously exploring and optimizing continuous search spaces, with significant applications to design, testing, and reinforcement learning, among other domains.
Comment: Accepted to GECCO 202
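The archive logic that CMA-ME inherits from MAP-Elites is compact: discretize the behaviour descriptor into a cell and keep one elite per cell. The sketch below is a schematic reading of that mechanism; the helper names and the improvement signal are illustrative, not the paper's reference implementation.

```python
import numpy as np

def behavior_cell(descriptor, lows, highs, bins):
    """Map a continuous behaviour descriptor to a discrete archive cell."""
    t = (np.asarray(descriptor) - lows) / (highs - lows)
    return tuple(np.clip((t * bins).astype(int), 0, bins - 1))

def try_insert(archive, solution, fitness, descriptor, lows, highs, bins):
    """Keep one elite per cell, replacing it only when the newcomer is
    fitter; return the improvement, which CMA-ME-style emitters can use
    to rank candidates (filling an empty cell counts as an improvement)."""
    cell = behavior_cell(descriptor, lows, highs, bins)
    old = archive.get(cell)
    if old is None or fitness > old[1]:
        archive[cell] = (solution, fitness)
        return fitness - old[1] if old else fitness
    return 0.0
```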

    CMA-ES with Restarts for Solving CEC 2013 Benchmark Problems

This paper investigates the performance of six versions of Covariance Matrix Adaptation Evolution Strategy (CMA-ES) with restarts on a set of 28 noiseless optimization problems (including 23 multi-modal ones) designed for the special session on real-parameter optimization of CEC 2013. The experimental validation of the restart strategies shows that: (i) the versions of CMA-ES with weighted active covariance matrix update outperform the original versions of CMA-ES, especially on ill-conditioned problems; (ii) the original restart strategies with increasing population size (IPOP) are usually outperformed by the bi-population restart strategies, in which the initial mutation step-size is also varied; (iii) the recently proposed alternative restart strategies for CMA-ES demonstrate competitive performance and are ranked first w.r.t. the proportion of function-target pairs solved after the full run on all 10-, 30- and 50-dimensional problems.
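The restart schemes compared here share a simple skeleton. The sketch below shows the IPOP variant, doubling the population size at every restart; the bi-population (BIPOP) variant additionally interleaves runs with a varied initial step-size. `run_cmaes` is a placeholder for any CMA-ES routine, not a specific library call.

```python
import numpy as np

def ipop(objective, x0, sigma0, run_cmaes, max_restarts=9):
    """IPOP restart sketch: rerun CMA-ES after each stop, doubling the
    population size. run_cmaes(objective, x0, sigma0, popsize) is assumed
    to return (best_x, best_f) for one run."""
    n = len(x0)
    popsize = 4 + int(3 * np.log(n))      # default lambda for dimension n
    best_x, best_f = None, np.inf
    for _ in range(max_restarts + 1):
        x, f = run_cmaes(objective, x0, sigma0, popsize)
        if f < best_f:
            best_x, best_f = x, f
        popsize *= 2                      # IPOP: larger population each restart
    return best_x, best_f
```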

    Local search and restart strategies for satisfiability solving in fuzzy logics

Satisfiability solving in fuzzy logics is a subject that has received little attention, certainly compared to satisfiability in propositional logics. Yet fuzzy logics are a powerful tool for modelling complex problems. Recently, we proposed an optimization approach to solving satisfiability in fuzzy logics and compared the standard Covariance Matrix Adaptation Evolution Strategy algorithm (CMA-ES) with an analytical solver on a set of benchmark problems. CMA-ES compared favourably to the analytical approach, especially on more fine-grained problems. In this paper, we evaluate two types of hillclimber in addition to CMA-ES, as well as restart strategies for these algorithms. Our results show that a population-based hillclimber outperforms CMA-ES on the harder problem class.
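For intuition, a population-based hillclimber of the general kind compared here can be sketched as below: each member perturbs one variable and keeps the move only if it improves. All names and parameters are illustrative; variables live in [0, 1], as fuzzy truth degrees do.

```python
import numpy as np

def population_hillclimber(objective, dim, pop_size=20, step=0.05,
                           iters=1000, rng=None):
    """Minimize `objective` over [0, 1]^dim with a population of
    first-improvement hillclimbers (illustrative sketch only)."""
    rng = rng or np.random.default_rng()
    pop = rng.random((pop_size, dim))
    fit = np.array([objective(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            cand = pop[i].copy()
            j = rng.integers(dim)
            cand[j] = np.clip(cand[j] + rng.normal(0.0, step), 0.0, 1.0)
            f = objective(cand)
            if f < fit[i]:                # keep improving moves only
                pop[i], fit[i] = cand, f
    k = fit.argmin()
    return pop[k], fit[k]
```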

    A Computationally Efficient Limited Memory CMA-ES for Large Scale Optimization

We propose a computationally efficient limited memory Covariance Matrix Adaptation Evolution Strategy for large scale optimization, which we call LM-CMA-ES. The LM-CMA-ES is a stochastic, derivative-free algorithm for numerical optimization of non-linear, non-convex problems in continuous domains. Inspired by the limited memory BFGS method of Liu and Nocedal (1989), the LM-CMA-ES samples candidate solutions according to a covariance matrix reproduced from m direction vectors selected during the optimization process. The decomposition of the covariance matrix into Cholesky factors makes it possible to reduce the time and memory complexity of the sampling to O(mn), where n is the number of decision variables. When n is large (e.g., n > 1000), even relatively small values of m (e.g., m = 20, 30) are sufficient to efficiently solve fully non-separable problems and to reduce the overall run-time.
Comment: Genetic and Evolutionary Computation Conference (GECCO'2014) (2014)
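The memory trick in LM-CMA-ES is that a sample A z can be reconstructed from the m stored direction-vector pairs rather than from an explicit n x n factor. The sketch below follows the general scheme of that reconstruction; the variable names and the assumption that all m stored pairs are applied in order are simplifications of the published procedure.

```python
import numpy as np

def reconstruct_Az(z, P, V, a, b):
    """Apply m stored rank-one corrections to a standard normal vector z,
    producing a sample distributed as A @ z in O(mn) time and memory.
    P[j] is the j-th stored evolution-path direction, V[j] its
    inverse-factor counterpart, and a, b[j] are scalar update coefficients."""
    x = z.copy()
    for j in range(len(P)):
        x = a * x + b[j] * P[j] * float(V[j] @ x)   # O(n) per stored pair
    return x
```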
