520 research outputs found

    Efficient Covariance Matrix Update for Variable Metric Evolution Strategies

    Randomized direct search algorithms for continuous domains, such as Evolution Strategies, are basic tools in machine learning. They are especially needed when the gradient of an objective function (e.g., loss, energy, or reward function) cannot be computed or estimated efficiently. Application areas include supervised and reinforcement learning as well as model selection. These randomized search strategies often rely on normally distributed additive variations of candidate solutions. In order to search efficiently in non-separable and ill-conditioned landscapes, the covariance matrix of the normal distribution must be adapted, amounting to a variable metric method. Consequently, Covariance Matrix Adaptation (CMA) is considered state-of-the-art in Evolution Strategies. In order to sample from the normal distribution, the adapted covariance matrix needs to be decomposed, requiring in general Θ(n³) operations, where n is the search space dimension. We propose a new update mechanism which can replace a rank-one covariance matrix update and the computationally expensive decomposition of the covariance matrix. The newly developed update rule reduces the computational complexity of the rank-one covariance matrix adaptation to Θ(n²) without resorting to outdated distributions. We derive new versions of the elitist Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and the multi-objective CMA-ES. These algorithms are equivalent to the original procedures except that the update step for the variable metric distribution scales better in the problem dimension. We also introduce a simplified variant of the non-elitist CMA-ES with the incremental covariance matrix update and investigate its performance. Apart from the reduced time complexity of the distribution update, the algebraic computations involved in all new algorithms are simpler than in the original versions. The new update rule improves the performance of the CMA-ES for large-scale machine learning problems in which the objective function can be evaluated quickly.
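
    The essential trick behind a Θ(n²) update can be sketched directly: instead of storing the covariance matrix C and re-decomposing it before every sampling step, one maintains a factor A with C = A Aᵀ and applies the rank-one change to A itself. The snippet below is a minimal NumPy illustration of such an incremental factor update; the function name, learning rate, and surrounding loop are illustrative assumptions rather than the authors' exact algorithm.

        import numpy as np

        def rank_one_factor_update(A, z, alpha, beta):
            """Update the factor A (with C = A @ A.T) so that the new factor A_new
            satisfies A_new @ A_new.T = alpha * C + beta * v @ v.T, where v = A @ z.
            Costs Theta(n^2), avoiding the Theta(n^3) re-decomposition of C."""
            v = A @ z
            z_norm_sq = float(z @ z)
            if z_norm_sq == 0.0:
                return np.sqrt(alpha) * A
            scale = (np.sqrt(alpha) / z_norm_sq) * (np.sqrt(1.0 + (beta / alpha) * z_norm_sq) - 1.0)
            return np.sqrt(alpha) * A + scale * np.outer(v, z)

        # Illustrative use inside a simple ES loop (assumed setup, not the paper's code):
        n = 10
        A = np.eye(n)                    # factor of the covariance matrix, C = A A^T
        c_cov = 2.0 / (n ** 2 + 6)       # illustrative learning rate
        z = np.random.randn(n)           # standard-normal sample
        step = A @ z                     # candidate step ~ N(0, C), no decomposition needed
        A = rank_one_factor_update(A, z, 1.0 - c_cov, c_cov)

    One can check algebraically that the updated factor reproduces (1 − c_cov) C + c_cov (A z)(A z)ᵀ exactly, so the sampling distribution matches the one obtained with a full decomposition while the update stays quadratic in n.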

    A nature-inspired multi-objective optimisation strategy based on a new reduced space searching algorithm for the design of alloy steels

    In this paper, a salient search and optimisation algorithm based on a new reduced space searching strategy is presented. This algorithm originates from a simple observation about how humans search for an optimal solution to a ‘real-life’ problem: given a certain objective, a large area tends to be scanned first; once clues relating to the predefined objective are found, the search space is greatly reduced for a more detailed search. Furthermore, this new algorithm is extended to the multi-objective optimisation case. Simulation results on some challenging benchmark problems suggest that both the proposed single-objective and multi-objective optimisation algorithms outperform some other well-known Evolutionary Algorithms (EAs). The proposed algorithms are further applied successfully to the optimal design problem of alloy steels, which aims at determining the optimal heat treatment regime and the required weight percentages of the chemical compositions to obtain the desired mechanical properties of steel, hence minimising production costs and achieving the overarching aim of ‘right-first-time production’ of metals.
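
    The ‘scan broadly, then shrink’ idea can be illustrated in a few lines of code. The sketch below is a deliberately simplified, single-objective caricature of reduced space searching (uniform sampling in a box that contracts around the best candidate found so far); the sampling scheme, shrink factor, and iteration budget are assumptions for illustration, not the published algorithm.

        import numpy as np

        def reduced_space_search(objective, lower, upper, n_samples=50,
                                 shrink=0.5, n_iterations=20, rng=None):
            """Sample the current box uniformly, keep the best candidate, then
            shrink the box around it. A simplified sketch of the general idea."""
            rng = np.random.default_rng() if rng is None else rng
            lower = np.asarray(lower, dtype=float)
            upper = np.asarray(upper, dtype=float)
            best_x, best_f = None, np.inf
            for _ in range(n_iterations):
                points = rng.uniform(lower, upper, size=(n_samples, lower.size))
                values = np.apply_along_axis(objective, 1, points)
                i = int(np.argmin(values))
                if values[i] < best_f:
                    best_x, best_f = points[i], values[i]
                half_width = shrink * (upper - lower) / 2.0   # reduce the search space
                lower = np.maximum(lower, best_x - half_width)
                upper = np.minimum(upper, best_x + half_width)
            return best_x, best_f

        # Example: minimise the sphere function on [-5, 5]^3
        x_best, f_best = reduced_space_search(lambda x: float(np.sum(x ** 2)),
                                              lower=[-5.0] * 3, upper=[5.0] * 3)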

    A novel population-based multi-objective CMA-ES and the impact of different constraint handling techniques

    The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is a well-known, state-of-the-art optimization algorithm for single-objective real-valued problems, especially in black-box settings. Although several extensions of CMA-ES to multi-objective (MO) optimization exist, no extension incorporates a key component of the most robust and general CMA-ES variant: the association of a population with each Gaussian distribution that drives optimization. To achieve this, we use a recently introduced framework for extending population-based algorithms from single- to multi-objective optimization. We compare, on six well-known benchmark problems, the performance of the newly constructed MO-CMA-ES with existing variants and with the estimation-of-distribution algorithm (EDA) iMAMaLGaM, which is also an instance of the framework, extending the single-objective EDA iAMaLGaM to MO. Results underline the advantages of being able to use populations. Because many real-world problems have constraints, we also study the use of four constraint-handling techniques. We find that CMA-ES is typically less robust to these techniques than iAMaLGaM. Moreover, whereas we could verify that a penalty method previously used in the literature leads to fast convergence, we also find that it has a high risk of finding only nearly, but not entirely, feasible solutions. We therefore propose that other constraint-handling techniques should be preferred in general.
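
    Among constraint-handling techniques, the penalty method is the easiest to state. The fragment below is a generic, hedged sketch of such a penalized fitness (the penalty weight and form are assumptions, not the exact method from the literature); it also shows why a finite penalty weight can make a slightly infeasible point look better than a feasible one, which is the risk noted above.

        import numpy as np

        def penalized_fitness(objective, constraints, weight=100.0):
            """Wrap a black-box objective with a weighted penalty for constraint
            violations; constraints are functions g with g(x) <= 0 when feasible.
            Illustrative sketch only: weight and penalty form are assumptions."""
            def fitness(x):
                violation = sum(max(0.0, g(x)) for g in constraints)
                return objective(x) + weight * violation
            return fitness

        # With a small weight, a slightly infeasible point can outscore a feasible one:
        f = penalized_fitness(lambda x: float(np.sum(np.square(x))),
                              [lambda x: 1.0 - x[0]],      # feasible iff x[0] >= 1
                              weight=1.0)
        print(f(np.array([0.99, 0.0])))   # ~0.9901 -- infeasible, yet lower (better)
        print(f(np.array([1.00, 0.0])))   # 1.0     -- feasible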

    Hybrid biogeography-based evolutionary algorithms

    Hybrid evolutionary algorithms (EAs) are effective optimization methods that combine multiple EAs. We propose several hybrid EAs by combining some recently developed EAs with a biogeography-based hybridization strategy. We test our hybrid EAs on the continuous optimization benchmarks from the 2013 Congress on Evolutionary Computation (CEC) and on some real-world traveling salesman problems. The new hybrid EAs include two approaches to hybridization: (1) iteration-level hybridization, in which various EAs and BBO are executed in sequence; and (2) algorithm-level hybridization, which runs various EAs independently and then exchanges information between them using ideas from biogeography. Our empirical study shows that the new hybrid EAs significantly outperform their constituent algorithms with the selected tuning parameters and generation limits, and that algorithm-level hybridization is generally better than iteration-level hybridization. Results also show that the best new hybrid algorithm in this paper is competitive with the algorithms from the 2013 CEC competition. In addition, we show that the new hybrid EAs are generally robust to their tuning parameters. In summary, the contribution of this paper is the introduction of biogeography-based hybridization strategies to the EA community.
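
    As an illustration of the algorithm-level variant, the sketch below performs one biogeography-style information exchange between sub-populations that are otherwise evolved independently by different EAs: good solutions emigrate with high probability, poor ones immigrate with high probability. The rate definitions and the per-variable migration are simplifying assumptions, not the exact mechanism of the paper.

        import numpy as np

        def bbo_migration(populations, fitnesses, rng=None):
            """One biogeography-style exchange between independently evolved
            sub-populations (lower fitness = better). Simplified sketch."""
            rng = np.random.default_rng() if rng is None else rng
            all_x = np.vstack(populations)                # pool all individuals
            all_f = np.concatenate(fitnesses)
            n = len(all_f)
            ranks = np.argsort(np.argsort(all_f))         # 0 = best
            emigration = 1.0 - ranks / (n - 1)            # best solutions share the most
            immigration = ranks / (n - 1)                 # worst solutions accept the most
            new_x = all_x.copy()
            for i in range(n):
                if rng.random() < immigration[i]:
                    j = rng.choice(n, p=emigration / emigration.sum())
                    d = rng.integers(all_x.shape[1])      # copy a single decision variable
                    new_x[i, d] = all_x[j, d]
            sizes = np.cumsum([len(p) for p in populations])[:-1]
            return np.split(new_x, sizes)                 # back to sub-populations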

    06061 Abstracts Collection -- Theory of Evolutionary Algorithms

    From 05.02.06 to 10.02.06, the Dagstuhl Seminar 06061 ``Theory of Evolutionary Algorithms'' was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.