    Explainable Benchmarking for Iterative Optimization Heuristics

    Benchmarking heuristic algorithms is vital to understanding under which conditions and on what kinds of problems certain algorithms perform well. Most current research into heuristic optimization algorithms explores only a very limited number of scenarios, algorithm configurations, and hyper-parameter settings, leading to incomplete and often biased insights and results. This paper presents a novel approach we call explainable benchmarking and introduces the IOH-Xplainer software framework for analyzing and understanding the performance of optimization algorithms and the impact of their different components and hyper-parameters. We showcase the framework in the context of two modular optimization frameworks. Through this framework, we examine the impact of different algorithmic components and configurations, offering insights into their performance across diverse scenarios. We provide a systematic method for evaluating and interpreting the behaviour and efficiency of iterative optimization heuristics in a more transparent and comprehensible manner, allowing for better benchmarking and algorithm design.
    Comment: Submitted to ACM TEL
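    As a toy illustration of the kind of analysis such a framework enables, the sketch below estimates the marginal impact of each module of a modular optimizer by averaging performance over all other factors. The column names, the toy data, and the aggregation are assumptions made for illustration; they are not the actual IOH-Xplainer API.

```python
# Hypothetical component-impact analysis in the spirit of explainable
# benchmarking; data layout and column names are assumptions, not the
# IOH-Xplainer API.
import pandas as pd

# Each row: one run of one modular-optimizer configuration on one problem.
runs = pd.DataFrame({
    "elitism":     [True, True, False, False, True, False],
    "step_size":   ["csa", "psr", "csa", "psr", "psr", "csa"],
    "problem":     ["f1", "f2", "f1", "f2", "f1", "f2"],
    "performance": [0.92, 0.85, 0.74, 0.66, 0.88, 0.71],  # e.g. normalized score
})

# Marginal impact of each module: mean performance per setting, aggregated
# over all other modules and problems.
for module in ["elitism", "step_size"]:
    print(runs.groupby(module)["performance"].mean(), "\n")
```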

    Testing the impact of parameter tuning on a variant of IPOP-CMA-ES with a bounded maximum population size on the noiseless BBOB testbed

    COCO: Performance Assessment

    We present an any-time performance assessment for benchmarking numerical optimization algorithms in a black-box scenario, applied within the COCO benchmarking platform. The performance assessment is based on runtimes, measured in number of objective function evaluations, to reach one or several quality-indicator target values. We argue that runtime is the only available measure with a generic, meaningful, and quantitative interpretation. We discuss the choice of target values, runlength-based targets, and the aggregation of results by using simulated restarts, averages, and empirical distribution functions.
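    A minimal sketch of this assessment style, under the definitions above: runtimes are counted in objective function evaluations until a target is first reached, unsuccessful runs are handled via simulated restarts, and an empirical distribution is then read off. The toy run histories and helper names are assumptions for illustration, not COCO's implementation.

```python
# Sketch of runtime-based performance assessment with simulated restarts.
import numpy as np

rng = np.random.default_rng(0)

def runtime_to_target(history, target):
    """Evaluations until the best-so-far value first reaches `target`,
    or None if the run never reaches it within its budget."""
    for evals, value in enumerate(history, start=1):
        if value <= target:
            return evals
    return None

# Toy data: three runs reach the target, two never do (budget = 1000 evals).
histories = [np.linspace(1, 0, 120), np.linspace(1, 0, 340),
             np.linspace(1, 0, 95), np.ones(1000), np.ones(1000)]
runtimes = [runtime_to_target(h, target=0.0) for h in histories]
successes = [r for r in runtimes if r is not None]
n_failed, budget = runtimes.count(None), 1000

def restarted_runtime():
    """Simulated restarts: draw trials uniformly at random; every
    unsuccessful draw adds its full budget before drawing again."""
    total = 0
    while True:
        i = rng.integers(len(successes) + n_failed)
        if i < len(successes):
            return total + successes[i]
        total += budget

samples = [restarted_runtime() for _ in range(1000)]
# Empirical distribution at 500 evaluations: fraction of runs solved by then.
print(np.mean(np.array(samples) <= 500))
```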

    Towards Dynamic Algorithm Selection for Numerical Black-Box Optimization: Investigating BBOB as a Use Case

    One of the most challenging problems in evolutionary computation is to select, from its family of diverse solvers, one that performs well on a given problem. This algorithm selection problem is complicated by the fact that different phases of the optimization process require different search behavior. While this can partly be controlled by the algorithm itself, there exist large differences in performance between algorithms. It can therefore be beneficial to swap the configuration or even the entire algorithm during the run. Long deemed impractical, recent advances in machine learning and in exploratory landscape analysis give hope that this dynamic algorithm configuration (dynAC) can eventually be solved by automatically trained configuration schedules. With this work we aim to promote research on dynAC by introducing a simpler variant that focuses only on switching between different algorithms, not configurations. Using the rich data from the Black Box Optimization Benchmark (BBOB) platform, we show that even single-switch dynamic algorithm selection (dynAS) can potentially result in significant performance gains. We also discuss key challenges in dynAS, and argue that the BBOB framework can become a useful tool in overcoming these.
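    The single-switch idea can be illustrated with a toy sketch: spend part of the evaluation budget on one solver, then warm-start a second solver from the incumbent. The split point, the two stand-in solvers (random search, then Nelder-Mead via SciPy), and the sphere objective are illustrative assumptions, not the configurations studied on BBOB.

```python
# Illustrative single-switch dynamic algorithm selection (dynAS).
import numpy as np
from scipy.optimize import minimize

def sphere(x):
    return float(np.sum(x ** 2))

rng = np.random.default_rng(42)
dim, budget, switch_at = 5, 1000, 300

# Phase 1: global exploration (random search stands in for the first solver).
best_x, best_f = None, np.inf
for _ in range(switch_at):
    x = rng.uniform(-5, 5, dim)
    f = sphere(x)
    if f < best_f:
        best_x, best_f = x, f

# Phase 2: switch algorithms, warm-starting from the incumbent solution.
result = minimize(sphere, best_x, method="Nelder-Mead",
                  options={"maxfev": budget - switch_at})
print(best_f, "->", result.fun)
```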

    From Understanding Genetic Drift to a Smart-Restart Mechanism for Estimation-of-Distribution Algorithms

    Estimation-of-distribution algorithms (EDAs) are optimization algorithms that learn a distribution on the search space from which good solutions can be sampled easily. A key parameter of most EDAs is the sample size (population size). If the population size is too small, the update of the probabilistic model builds on few samples, leading to the undesired effect of genetic drift. Too large population sizes avoid genetic drift, but slow down the process. Building on a recent quantitative analysis of how the population size leads to genetic drift, we design a smart-restart mechanism for EDAs. By stopping runs when the risk of genetic drift is high, it automatically runs the EDA in good parameter regimes. Via a mathematical runtime analysis, we prove a general performance guarantee for this smart-restart scheme. This in particular shows that in many situations where the optimal (problem-specific) parameter values are known, the restart scheme automatically finds these, leading to asymptotically optimal performance. We also conduct an extensive experimental analysis. On four classic benchmark problems, we clearly observe the critical influence of the population size on the performance, and we find that the smart-restart scheme leads to a performance close to the one obtainable with optimal parameter values. Our results also show that previous theory-based suggestions for the optimal population size can be far from the optimal ones, leading to a performance clearly inferior to the one obtained via the smart-restart scheme. We also conduct experiments with PBIL (cross-entropy algorithm) on two combinatorial optimization problems from the literature, the max-cut problem and the bipartition problem. Again, we observe that the smart-restart mechanism finds much better values for the population size than those suggested in the literature, leading to a much better performance.
    Comment: Accepted for publication in the Journal of Machine Learning Research. Extended version of our GECCO 2020 paper. This article supersedes arXiv:2004.0714
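    A hedged sketch of the restart idea follows: a simple UMDA on OneMax is run with a capped iteration budget, and each restart doubles the population size, so runs whose population is too small to avoid genetic drift are cut short cheaply. The iteration cap and the margin handling are stand-ins for illustration, not the paper's theory-derived scheme.

```python
# Smart-restart loop around a toy UMDA on OneMax: double the population
# size after every capped, unsuccessful run.
import numpy as np

rng = np.random.default_rng(1)
n = 50  # problem size (OneMax: maximize the number of ones)

def umda(pop_size, max_iters):
    p = np.full(n, 0.5)              # frequency vector of the model
    mu = max(1, pop_size // 2)       # number of selected individuals
    for _ in range(max_iters):
        pop = rng.random((pop_size, n)) < p
        fitness = pop.sum(axis=1)
        if fitness.max() == n:
            return True
        best = pop[np.argsort(fitness)[-mu:]]
        # Margins keep frequencies away from 0 and 1 (against fixation).
        p = best.mean(axis=0).clip(1 / n, 1 - 1 / n)
    return False

# Smart restarts: cut runs short and double the population size.
pop_size = 8
while not umda(pop_size, max_iters=4 * n):
    pop_size *= 2
print("solved with population size", pop_size)
```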

    A model of anytime algorithm performance for bi-objective optimization

    Anytime algorithms allow a practitioner to trade off runtime for solution quality. This is of particular interest in multi-objective combinatorial optimization, since it can be infeasible to identify all efficient solutions in a reasonable amount of time. We present a theoretical model that, under some mild assumptions, characterizes the "optimal" trade-off between runtime and solution quality, measured in terms of relative hypervolume, of anytime algorithms for bi-objective optimization. In particular, we assume that efficient solutions are collected sequentially such that the collected solution at each iteration maximizes the hypervolume indicator, and that the non-dominated set can be well approximated by a quadrant of a superellipse. We validate our model against an "optimal" model that has complete knowledge of the non-dominated set. The empirical results suggest that our theoretical model approximates the behavior of this optimal model quite well. We also analyze the anytime behavior of an ε-constraint algorithm, and show that our model can be used to guide the algorithm and improve its anytime behavior.
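    The model's two assumptions can be played out in a few lines: take a quadrant of a superellipse (x/a)^p + (y/b)^p = 1 as the non-dominated front (minimization), collect points greedily so that each new point maximizes the dominated hypervolume, and track the relative hypervolume per iteration. All constants, the reference point, and the discretization are illustrative assumptions.

```python
# Greedy hypervolume-maximizing collection on a superellipse quadrant.
import numpy as np

a = b = 1.0
p = 2.0                                   # superellipse exponent
ref = (1.1, 1.1)                          # reference point (minimization)
ts = np.linspace(0.0, np.pi / 2, 200)
front = [(a * np.cos(t) ** (2 / p), b * np.sin(t) ** (2 / p)) for t in ts]

def hypervolume(points):
    """2-D hypervolume of mutually non-dominated points (minimization)."""
    pts = sorted(points)                  # ascending x implies descending y
    xs = [q[0] for q in pts] + [ref[0]]
    return sum((xs[i + 1] - x) * (ref[1] - y) for i, (x, y) in enumerate(pts))

# Each iteration adds the front point that maximizes the hypervolume so far.
collected, total = [], hypervolume(front)
for k in range(1, 11):
    gains = [hypervolume(collected + [q]) for q in front]
    collected.append(front[int(np.argmax(gains))])
    print(k, round(hypervolume(collected) / total, 4))  # relative hypervolume
```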

    A Study of Adaptive Differential Evolution for Function Optimization Problems (関数最適化問題に対する適応型差分進化法の研究)

    Degree type: Doctoral degree by coursework (課程博士). Dissertation committee: (Chair) Associate Professor Alex Fukunaga, Professor Takashi Ikegami, Professor Kazuhiro Ueda, Professor Yasushi Yamaguchi, and Professor Hitoshi Iba, all of the University of Tokyo (東京大学).

    Globally convergent evolution strategies with application to Earth imaging problem in geophysics

    In recent years, there has been significant and growing interest in Derivative-Free Optimization (DFO). This field can be divided into two categories: deterministic and stochastic. Despite addressing the same problem domain, few interactions between the two DFO categories have been established in the existing literature. In this thesis, we attempt to bridge this gap by showing how ideas from deterministic DFO can improve the efficiency and the rigor of one of the most successful classes of stochastic algorithms, known as Evolution Strategies (ES's). We propose to equip a class of ES's with known techniques from deterministic DFO. The modified ES's rigorously achieve a form of global convergence under reasonable assumptions; by global convergence, we mean convergence to first-order stationary points independently of the starting point. The modified ES's are extended to handle general constrained optimization problems. Furthermore, we show how to significantly improve the numerical performance of ES's by incorporating a search step at the beginning of each iteration, in which we build a quadratic model using the points where the objective function has already been evaluated. Motivated by the recent growth of high-performance computing resources and the parallel nature of ES's, we apply our modified ES's to an Earth imaging problem in geophysics. The obtained results significantly improve the resolution of this problem.
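    The core technique described above is to impose a deterministic-DFO acceptance test on an ES. The sketch below equips a (1,λ)-ES with a sufficient-decrease condition rho(sigma) = c·sigma²: a step is accepted only if it decreases f by at least rho(sigma), and otherwise the step size is contracted. The constants, the update factors, and the test function are illustrative assumptions, not the thesis's algorithm.

```python
# (1, lambda)-ES with a sufficient-decrease acceptance condition.
import numpy as np

rng = np.random.default_rng(7)

def f(x):                                    # smooth toy objective
    return float(np.sum(x ** 2))

dim, lam, c = 5, 10, 1e-2
x, sigma = rng.uniform(-5, 5, dim), 1.0

for it in range(200):
    offspring = x + sigma * rng.standard_normal((lam, dim))
    best = offspring[np.argmin([f(y) for y in offspring])]
    if f(best) <= f(x) - c * sigma ** 2:     # sufficient decrease: accept, expand
        x, sigma = best, sigma * 2.0
    else:                                    # reject: contract the step size
        sigma *= 0.5
print(f(x), sigma)
```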