
    Alternative Restart Strategies for CMA-ES

    This paper focuses on the restart strategy of CMA-ES on multi-modal functions. A first alternative strategy proceeds by decreasing the initial step-size of the mutation while doubling the population size at each restart. A second strategy adaptively allocates the computational budget among the restart settings of the BIPOP scheme. Both restart strategies are validated on the BBOB benchmark; their generality is also demonstrated on an independent real-world problem suite related to spacecraft trajectory optimization.
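    A minimal sketch of the first strategy, assuming the pycma package (cma.fmin) and a generic stand-in objective; the shrink factor 1.6, the sampling bounds, and the restart budget are illustrative assumptions, not values from the paper:

        # Restart loop that doubles the population size and shrinks the
        # initial step-size at each restart (sketch of the first strategy).
        import cma
        import numpy as np

        def sphere(x):  # stand-in for any multi-modal objective
            return float(np.sum(np.asarray(x) ** 2))

        def restart_cmaes(f, dim, max_restarts=9, sigma0=2.0):
            popsize = 4 + int(3 * np.log(dim))  # CMA-ES default population size
            sigma = sigma0
            best_x, best_f = None, float("inf")
            for _ in range(max_restarts):
                x0 = np.random.uniform(-4, 4, dim)  # fresh random initial point
                res = cma.fmin(f, x0, sigma,
                               options={"popsize": popsize, "verbose": -9})
                if res[1] < best_f:
                    best_x, best_f = res[0], res[1]
                popsize *= 2   # double the population (IPOP-style)
                sigma /= 1.6   # decrease the initial step-size (assumed factor)
            return best_x, best_f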

    Maximum Likelihood-based Online Adaptation of Hyper-parameters in CMA-ES

    The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is widely accepted as a robust derivative-free continuous optimization algorithm for non-linear and non-convex optimization problems. CMA-ES is well known to be almost parameterless, in that the population size is the only hyper-parameter the user is expected to tune. In this paper, we propose a principled approach, called self-CMA-ES, for the online adaptation of CMA-ES hyper-parameters in order to improve its overall performance. Experimental results show that for larger-than-default population sizes the default hyper-parameter settings of CMA-ES are far from optimal, and that self-CMA-ES allows the settings to approach optimal values dynamically. Comment: 13th International Conference on Parallel Problem Solving from Nature (PPSN 2014), 2014.
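    A toy illustration of the maximum likelihood idea, not the authors' exact procedure: among candidate values of a hyper-parameter (here a rank-mu-style covariance learning rate), prefer the one whose induced sampling distribution assigns the highest likelihood to the offspring that were actually selected. The simplified update and all names below are assumptions:

        # Score candidate hyper-parameter values by the log-likelihood of
        # the selected offspring under the distribution each would induce.
        import numpy as np
        from scipy.stats import multivariate_normal

        def score_candidate(c_mu, mean, cov, selected):
            deviations = selected - mean
            emp_cov = deviations.T @ deviations / len(selected)
            cand_cov = (1 - c_mu) * cov + c_mu * emp_cov  # simplified update
            return multivariate_normal(mean, cand_cov).logpdf(selected).sum()

        rng = np.random.default_rng(0)
        mean, cov = np.zeros(2), np.eye(2)
        selected = rng.normal(0.5, 0.3, size=(5, 2))  # stand-in "good" offspring
        best_c_mu = max([0.1, 0.3, 0.6],
                        key=lambda c: score_candidate(c, mean, cov, selected))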

    A Computationally Efficient Limited Memory CMA-ES for Large Scale Optimization

    We propose a computationally efficient limited memory Covariance Matrix Adaptation Evolution Strategy for large scale optimization, which we call the LM-CMA-ES. The LM-CMA-ES is a stochastic, derivative-free algorithm for numerical optimization of non-linear, non-convex problems in the continuous domain. Inspired by the limited memory BFGS method of Liu and Nocedal (1989), the LM-CMA-ES samples candidate solutions according to a covariance matrix reproduced from m direction vectors selected during the optimization process. The decomposition of the covariance matrix into Cholesky factors reduces the time and memory complexity of the sampling to O(mn), where n is the number of decision variables. When n is large (e.g., n > 1000), even relatively small values of m (e.g., m = 20, 30) are sufficient to efficiently solve fully non-separable problems and to reduce the overall run-time. Comment: Genetic and Evolutionary Computation Conference (GECCO 2014), 2014.
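    A minimal sketch of the limited-memory idea under stated assumptions (the update coefficients a and b_j and the vector selection are placeholders, not the paper's exact reconstruction): a candidate is produced by applying m stored rank-one Cholesky-factor transforms to an isotropic Gaussian vector, so each sample costs O(mn) time and the stored vectors take O(mn) memory:

        import numpy as np

        def sample_lm(mean, sigma, directions, betas, a, rng):
            """directions: m stored unit vectors v_j of length n;
            betas: m scalars b_j; a: scalar shrink factor (all assumed)."""
            x = rng.standard_normal(mean.size)   # isotropic Gaussian sample
            for v, b in zip(directions, betas):  # m rank-one transforms, O(mn)
                x = a * x + b * v * (v @ x)
            return mean + sigma * x

        rng = np.random.default_rng(1)
        n, m = 1000, 20  # large dimension n, small memory budget m
        dirs = [v / np.linalg.norm(v) for v in rng.standard_normal((m, n))]
        candidate = sample_lm(np.zeros(n), 0.5, dirs, [0.1] * m, 0.9, rng)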

    A separable approximation dynamic programming algorithm for economic dispatch with transmission losses

    The standard way to solve the static economic dispatch problem with transmission losses is the penalty factor method. The problem is solved iteratively by a Lagrange multiplier method or by dynamic programming, using values obtained at one iteration to compute penalty factors for the next, until stability is attained. A new iterative method is proposed for the case where transmission losses are represented by a quadratic formula (i.e., by the traditional B-coefficients). A separable approximation is made at each iteration that is much closer to the initial problem than the penalty factor approximation. Consequently, lower cost solutions may be obtained in some cases, and convergence is faster.
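    For reference, the classical lossy dispatch setting with B-coefficients, and the penalty factors derived from the loss gradient, take the standard textbook form (this sketches the setting, not the paper's new separable approximation):

        \min_{P_1,\dots,P_N} \sum_{i=1}^{N} C_i(P_i)
        \quad\text{s.t.}\quad \sum_{i=1}^{N} P_i = P_D + P_L,
        \qquad P_L = \sum_{i=1}^{N}\sum_{j=1}^{N} P_i B_{ij} P_j,

        L_i = \frac{1}{1 - \partial P_L / \partial P_i}
            = \frac{1}{1 - 2\sum_{j} B_{ij} P_j}.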

    Experimental Comparisons of Derivative Free Optimization Algorithms

    In this paper, the performance of the quasi-Newton BFGS algorithm, the NEWUOA derivative-free optimizer, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), the Differential Evolution (DE) algorithm, and Particle Swarm Optimizers (PSO) is compared experimentally on benchmark functions reflecting important challenges encountered in real-world optimization problems. In particular, the dependence of performance on the conditioning of the problem and on the rotational invariance of the algorithms is investigated. Comment: 8th International Symposium on Experimental Algorithms, Dortmund, Germany, 2009.
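    A small sketch of the kind of test function behind those two questions: an ellipsoid with a tunable condition number, plus a randomly rotated variant. A rotationally invariant algorithm behaves identically on both, while coordinate-wise methods typically degrade on the rotated version (the function names and conditioning scheme are illustrative):

        import numpy as np

        def ellipsoid(x, cond=1e6):
            n = len(x)
            weights = cond ** (np.arange(n) / (n - 1))  # axis scales 1..cond
            return float(np.sum(weights * np.asarray(x) ** 2))

        def rotated(f, rng, n):
            Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random rotation
            return lambda x: f(Q @ np.asarray(x))

        rng = np.random.default_rng(2)
        f_rot = rotated(ellipsoid, rng, 10)
        print(ellipsoid(np.ones(10)), f_rot(np.ones(10)))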

    REPRESENTATIVE DEMOCRACY: MAKING IT WORK BETTER

    Public Economics.

    Particle parameter analyzing system

    An X-Y plotter circuit apparatus is described that displays an input pulse representing particle parameter information, which would ordinarily appear on an oscilloscope screen as a rectangular pulse, as a single dot positioned on the screen where the upper right-hand corner of the input pulse would have appeared. If another event occurs and is to be displayed, the apparatus replaces the dot with a short horizontal line.

    Towards a Theory-Guided Benchmarking Suite for Discrete Black-Box Optimization Heuristics: Profiling (1+λ) EA Variants on OneMax and LeadingOnes

    Theoretical and empirical research on evolutionary computation methods complement each other by providing two fundamentally different approaches towards a better understanding of black-box optimization heuristics. In discrete optimization, both streams developed rather independently of each other, but we observe today an increasing interest in reconciling these two sub-branches. In continuous optimization, the COCO (COmparing Continuous Optimisers) benchmarking suite has established itself as an important platform that theoreticians and practitioners use to exchange research ideas and questions. No widely accepted equivalent exists in the research domain of discrete black-box optimization. Marking an important step towards filling this gap, we adjust the COCO software to pseudo-Boolean optimization problems, and obtain from this a benchmarking environment that allows a fine-grained empirical analysis of discrete black-box heuristics. In this documentation we demonstrate how this test bed can be used to profile the performance of evolutionary algorithms. More concretely, we study the optimization behavior of several (1+λ) EA variants on the two benchmark problems OneMax and LeadingOnes. This comparison motivates a refined analysis for the optimization time of the (1+λ) EA on LeadingOnes.
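    A minimal sketch of the (1+λ) EA with standard bit mutation on OneMax, the kind of run such a profiling suite records (the mutation rate 1/n is the usual default; all parameter values below are illustrative):

        import random

        def onemax(bits):
            return sum(bits)

        def one_plus_lambda_ea(n=100, lam=10, seed=3):
            rng = random.Random(seed)
            parent = [rng.randint(0, 1) for _ in range(n)]
            evals = 0
            while onemax(parent) < n:
                # generate lambda offspring, flipping each bit w.p. 1/n
                offspring = [[b ^ (rng.random() < 1 / n) for b in parent]
                             for _ in range(lam)]
                evals += lam
                best = max(offspring, key=onemax)
                if onemax(best) >= onemax(parent):  # elitist (1+lambda) selection
                    parent = best
            return evals  # number of evaluations until the optimum is hit

        print(one_plus_lambda_ea())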