17 research outputs found

    Maximum Likelihood-based Online Adaptation of Hyper-parameters in CMA-ES

    The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is widely accepted as a robust derivative-free continuous optimization algorithm for non-linear and non-convex optimization problems. CMA-ES is well known to be almost parameterless, meaning that only one hyper-parameter, the population size, is proposed to be tuned by the user. In this paper, we propose a principled approach called self-CMA-ES to achieve the online adaptation of CMA-ES hyper-parameters in order to improve its overall performance. Experimental results show that for larger-than-default population sizes, the default hyper-parameter settings of CMA-ES are far from optimal, and that self-CMA-ES allows for dynamically approaching optimal settings. Comment: 13th International Conference on Parallel Problem Solving from Nature (PPSN 2014), 2014.
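    The kind of online hyper-parameter adaptation described above can be illustrated with a deliberately simplified sketch (this is not the paper's self-CMA-ES, which uses a maximum-likelihood criterion; all names and numbers here are invented for illustration): a toy elitist evolution strategy that, at every generation, samples one offspring per candidate step size and keeps whichever step size produced the best offspring.

```python
import random

def sphere(x):
    """Toy objective: sum of squares, minimized at the origin."""
    return sum(xi * xi for xi in x)

def es_with_online_sigma(f, x0, sigmas=(0.05, 0.2, 0.8), iters=200, seed=1):
    """Elitist ES that re-selects its mutation step size online:
    each generation samples one offspring per candidate sigma, and
    the best offspring (if improving) replaces the parent."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        best_y, best_fy = None, float("inf")
        for sigma in sigmas:
            y = [xi + rng.gauss(0.0, sigma) for xi in x]
            fy = f(y)
            if fy < best_fy:
                best_y, best_fy = y, fy
        if best_fy < fx:  # elitist acceptance
            x, fx = best_y, best_fy
    return x, fx

best_x, best_f = es_with_online_sigma(sphere, [3.0, -2.0])
```

    The point of the sketch is only that hyper-parameter choices can be re-evaluated during the run rather than fixed in advance; a fixed large sigma would stall near the optimum, a fixed small one would start slowly.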

    What can we learn from multi-objective meta-optimization of Evolutionary Algorithms in continuous domains?

    Properly configuring Evolutionary Algorithms (EAs) is a challenging task made difficult by many different details that affect EAs' performance, such as the properties of the fitness function, time and computational constraints, and many others. EAs' meta-optimization methods, in which a metaheuristic is used to tune the parameters of another (lower-level) metaheuristic which optimizes a given target function, most often rely on the optimization of a single property of the lower-level method. In this paper, we show that by using a multi-objective genetic algorithm to tune an EA, it is possible not only to find good parameter sets considering more objectives at the same time but also to derive generalizable results which can provide guidelines for designing EA-based applications. In particular, we present a general framework for multi-objective meta-optimization, to show that "going multi-objective" allows one to generate configurations that, besides optimally fitting an EA to a given problem, also perform well on previously unseen ones.
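    The bi-objective flavour of meta-optimization can be sketched as follows (a hypothetical toy, not the paper's framework; every function and name below is invented): an upper level enumerates configurations of a lower-level (1+1)-ES, scores each on two objectives, solution quality and evaluations spent, and keeps the Pareto-nondominated configurations.

```python
import random

def sphere(x):
    """Toy target function for the lower-level EA."""
    return sum(xi * xi for xi in x)

def run_ea(sigma, budget, seed=0):
    """Lower-level (1+1)-ES; returns the best fitness found in `budget` evaluations."""
    rng = random.Random(seed)
    x = [rng.uniform(-3, 3) for _ in range(3)]
    fx = sphere(x)
    for _ in range(budget):
        y = [xi + rng.gauss(0.0, sigma) for xi in x]
        fy = sphere(y)
        if fy < fx:
            x, fx = y, fy
    return fx

def pareto_front(points):
    """Keep the points not dominated in (quality, cost), both minimized."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]

# Upper level: a configuration is (step size, budget); its two objectives
# are solution quality (lower is better) and evaluations spent.
configs = [(s, b) for s in (0.05, 0.3, 1.0) for b in (50, 200)]
points = [(run_ea(s, b), b) for s, b in configs]
front = pareto_front(points)
```

    A single-objective tuner would return one winner; the multi-objective view keeps the whole quality/cost trade-off surface, from which guidelines can be read off.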

    Tuning optimization algorithms under multiple objective function evaluation budgets

    Most sensitivity analysis studies of optimization algorithm control parameters are restricted to a single objective function evaluation (OFE) budget. This restriction is problematic because the optimality of control parameter values is dependent not only on the problem's fitness landscape, but also on the OFE budget available to explore that landscape. Therefore, the OFE budget needs to be taken into consideration when performing control parameter tuning. This article presents a new algorithm (tMOPSO) for tuning the control parameter values of stochastic optimization algorithms under a range of OFE budget constraints. Specifically, for a given problem, tMOPSO aims to determine multiple groups of control parameter values, each of which results in optimal performance at a different OFE budget. To achieve this, the control parameter tuning problem is formulated as a multi-objective optimization problem. Additionally, tMOPSO uses a noise-handling strategy and a control parameter value assessment procedure, which are specialized for tuning stochastic optimization algorithms. Conducted numerical experiments provide evidence that tMOPSO is effective at tuning under multiple OFE budget constraints.
    National Research Foundation (NRF) of South Africa.
    http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=4235
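    The observation underlying budget-aware tuning, that a single run of a stochastic optimizer yields performance data at every intermediate OFE budget, can be sketched by recording the best-so-far fitness at a set of checkpoints (a hypothetical illustration, not tMOPSO itself):

```python
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def best_so_far_profile(sigma, checkpoints, seed=0):
    """Run a (1+1)-ES once, recording the best fitness seen at each
    OFE checkpoint, so one run is assessed under several budgets."""
    rng = random.Random(seed)
    x = [rng.uniform(-3, 3) for _ in range(5)]
    best = sphere(x)
    profile = {}
    for ofe in range(1, max(checkpoints) + 1):
        y = [xi + rng.gauss(0.0, sigma) for xi in x]
        fy = sphere(y)
        if fy < best:
            x, best = y, fy
        if ofe in checkpoints:
            profile[ofe] = best
    return profile

checkpoints = {25, 100, 400}
profiles = {sigma: best_so_far_profile(sigma, checkpoints) for sigma in (0.05, 0.5)}
```

    Comparing the profiles of a large and a small step size typically shows different parameter values leading at different budgets, which is exactly why a single-budget sensitivity study can mislead.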

    Parameter Tuning and Scientific Testing in Evolutionary Algorithms

    Eiben, A.E. [Promotor]

    Computational results for an automatically tuned CMA-ES with increasing population size on the CEC'05 benchmark set

    In this article, we apply an automatic algorithm configuration tool to improve the performance of the CMA-ES algorithm with increasing population size (iCMA-ES), the best-performing algorithm on the CEC'05 benchmark set for continuous function optimization. In particular, we consider a separation between tuning and test sets and, thus, tune iCMA-ES on a different set of functions than those of the CEC'05 benchmark set. Our experimental results show that the tuned iCMA-ES improves significantly over the default version of iCMA-ES. Furthermore, we provide some further analyses of the impact of the modified parameter settings on iCMA-ES performance and a comparison with recent results of algorithms that use CMA-ES as a subordinate local search.
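    The tuning/test-set separation can be sketched generically (hypothetical toy code; the actual study used an automatic configurator and the CEC'05 functions, not this example): select the parameter value that performs best on training functions, then report its performance only on a held-out function.

```python
import random

def make_shifted_sphere(shift):
    """Family of toy benchmark functions differing only in their optimum."""
    return lambda x: sum((xi - shift) ** 2 for xi in x)

def run_es(f, sigma, budget=100, seed=0):
    """(1+1)-ES standing in for the algorithm being configured."""
    rng = random.Random(seed)
    x = [rng.uniform(-3, 3) for _ in range(3)]
    fx = f(x)
    for _ in range(budget):
        y = [xi + rng.gauss(0.0, sigma) for xi in x]
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return fx

train_funcs = [make_shifted_sphere(s) for s in (-1.0, 0.5)]
test_funcs = [make_shifted_sphere(s) for s in (2.0,)]

# Tune on the training functions only...
candidates = (0.05, 0.3, 1.0)
tuned = min(candidates, key=lambda s: sum(run_es(f, s) for f in train_funcs))
# ...then report performance on the unseen test function.
test_score = sum(run_es(f, tuned) for f in test_funcs)
```

    Evaluating on functions never seen during tuning is what justifies the claim that the improvement is not mere overfitting to the benchmark.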

    Never Too Old To Learn: On-line Evolution of Controllers in Swarm- and Modular Robotics

    Eiben, A.E. [Promotor]

    MOTA: a many-objective tuning algorithm specialized for tuning under multiple objective function evaluation budgets

    Control parameter studies assist practitioners in selecting optimization algorithm parameter values which are appropriate for the problem at hand. Parameter values are well-suited to a problem if they result in a search which is effective given that problem's objective function(s), constraints and termination criteria. Given these considerations, a many-objective tuning algorithm named MOTA is presented. MOTA is specialized for tuning a stochastic optimization algorithm according to multiple performance measures, each over a range of objective function evaluation budgets. MOTA's specialization consists of four aspects: 1) a tuning problem formulation which consists of both a speed objective and a speed decision variable, 2) a control parameter tuple assessment procedure which utilizes information from a single assessment run's history to gauge that tuple's performance at multiple evaluation budgets, 3) a preemptively terminating resampling strategy for handling the noise present when tuning stochastic algorithms, and 4) the use of bi-objective decomposition to assist in many-objective optimization. MOTA combines these aspects together with DE operators to search for effective control parameter values. Numerical experiments consisting of tuning NSGA-II and MOEA/D demonstrate that MOTA is effective at many-objective tuning.
    The National Research Foundation (NRF) of South Africa.
    http://www.mitpressjournals.org/loi/evco
    2017-06-30
    Mechanical and Aeronautical Engineering
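    Aspect 3 above, preemptively terminating resampling, can be sketched with a hypothetical mean-plus-margin rule (invented for illustration, not MOTA's actual procedure): stop re-evaluating a noisy candidate as soon as its running mean is clearly worse than the incumbent's.

```python
import random

def noisy_eval(true_quality, rng):
    """A noisy performance measure: true quality plus Gaussian noise."""
    return true_quality + rng.gauss(0.0, 0.5)

def assess_with_early_stop(true_quality, incumbent_mean,
                           max_samples=30, margin=1.0, seed=0):
    """Resample a candidate, but stop early once its running mean exceeds
    the incumbent's mean by `margin` (lower is better)."""
    rng = random.Random(seed)
    total, n = 0.0, 0
    for _ in range(max_samples):
        total += noisy_eval(true_quality, rng)
        n += 1
        if n >= 5 and total / n > incumbent_mean + margin:
            break  # clearly dominated: stop spending samples on it
    return total / n, n

mean_good, n_good = assess_with_early_stop(1.0, incumbent_mean=1.0)
mean_bad, n_bad = assess_with_early_stop(5.0, incumbent_mean=1.0)
```

    The promising candidate consumes the full sample budget while the clearly inferior one is cut off after a handful of samples, which is the economy such a strategy buys when tuning stochastic algorithms.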

    Reproducibility in evolutionary computation

    Experimental studies are prevalent in Evolutionary Computation (EC), and concerns about the reproducibility and replicability of such studies have increased in recent times, reflecting similar concerns in other scientific fields. In this article, we discuss, within the context of EC, the different types of reproducibility and suggest a classification that refines the badge system of the Association for Computing Machinery (ACM) adopted by ACM Transactions on Evolutionary Learning and Optimization (TELO). We identify cultural and technical obstacles to reproducibility in the EC field. Finally, we provide guidelines and suggest tools that may help to overcome some of these reproducibility obstacles.

    Efficient learning methods to tune algorithm parameters

    This thesis focuses on the algorithm configuration problem. In particular, three efficient learning configurators are introduced to tune parameters offline. The first looks into meta-optimization, where the algorithm is expected to solve similar problem instances within varying computational budgets. Standard meta-optimization techniques have to be repeated whenever the available computational budget changes, as the parameters that work well for small budgets may not be suitable for larger ones. The proposed Flexible Budget method can, in a single run, identify the best parameter setting for all possible computational budgets less than a specified maximum, without compromising solution quality; hence, a lot of time is saved, as shown experimentally. The second regards Racing algorithms, which often do not fully utilize the available computational budget to find the best parameter setting, as they may terminate as soon as a single parameter setting remains in the race. The proposed Racing with reset can overcome this issue and, at the same time, adapt Racing's hyper-parameter α online. Experiments show that such adaptation enables the algorithm to achieve significantly lower failure rates, compared to any fixed α set by the user. The third extends Racing with reset by allowing it to utilize all the information gathered previously when adapting α; it also permits Racing algorithms in general to allocate the budget intelligently in each iteration, as opposed to allocating it equally. All developed Racing algorithms are compared to two budget allocators from the Simulation Optimization literature, OCBA and CBA, and to equal allocation, to demonstrate under which conditions each performs best in terms of minimizing the probability of incorrect selection.
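    The Racing-with-reset idea can be sketched generically (hypothetical code with a simplified mean-based elimination rule, not the thesis's statistical tests or its α adaptation): survivors are evaluated round by round, clearly worse candidates are dropped, and if a single survivor remains while budget is left, the eliminated candidates re-enter the race so the remaining budget is not wasted.

```python
import random

def race(candidates, noisy_cost, budget, drop_margin=1.0, seed=0):
    """Simplified racing: each round, evaluate all survivors once and drop
    any candidate whose running mean trails the best by more than
    `drop_margin`. If one survivor remains with budget to spare, reset the
    race so eliminated candidates re-enter and the budget keeps working."""
    rng = random.Random(seed)
    alive = list(candidates)
    stats = {c: [0.0, 0] for c in candidates}  # candidate -> [sum, count]
    spent = 0
    while spent + len(alive) <= budget:
        for c in alive:
            stats[c][0] += noisy_cost(c, rng)
            stats[c][1] += 1
            spent += 1
        means = {c: stats[c][0] / stats[c][1] for c in alive}
        best = min(means.values())
        alive = [c for c in alive if means[c] <= best + drop_margin]
        if len(alive) == 1 and spent + len(candidates) <= budget:
            alive = list(candidates)  # reset: re-admit eliminated candidates
    final = {c: s / n for c, (s, n) in stats.items() if n > 0}
    return min(final, key=final.get)

# Candidates are parameter values; the true cost of each equals its value,
# observed through Gaussian noise.
winner = race([1.0, 2.0, 4.0], lambda c, rng: c + rng.gauss(0.0, 0.3), budget=60)
```

    Without the reset, a race that converges early would leave most of the 60 evaluations unused; with it, the surplus sharpens the comparison among all candidates.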