Leveraging Benchmarking Data for Informed One-Shot Dynamic Algorithm Selection

Abstract

A key challenge in the practical application of evolutionary algorithms is selecting the algorithm instance best suited to the problem at hand. This decision is further complicated by the fact that different algorithms may be best suited for different stages of the optimization process. Dynamic algorithm selection and configuration are therefore well-researched topics in evolutionary computation. However, while hyper-heuristics and parameter control studies typically assume a setting in which the algorithm must be chosen on the fly, while it is running and without prior information, AutoML approaches such as hyper-parameter tuning and automated algorithm configuration assume that different configurations can be evaluated before a final recommendation is made. In practice, however, we are often in a middle ground between these two settings: we must decide on the algorithm instance before the run (the "one-shot" setting), but we have (possibly large amounts of) data available on which to base an informed decision. In this work, we analyze how such prior performance data can be used to infer informed dynamic algorithm selection schemes for the solution of pseudo-Boolean optimization problems. Our specific use case considers a family of genetic algorithms.

Comment: Submitted for review to GECCO'2
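To make the one-shot setting concrete, the following is a minimal sketch of how prior benchmarking data could be used to pick a dynamic selection policy before the run. It assumes we have, for each algorithm, expected running times (ERTs, i.e., average evaluation counts) to reach a series of fitness targets, estimated from earlier runs; the cost of a single-switch policy is then approximated by splicing the two ERT profiles at the switch target. All names, target values, and numbers are illustrative and not taken from the paper.

```python
# Sketch of one-shot dynamic algorithm selection from benchmarking data.
# Assumption: ert[alg][i] holds the expected number of evaluations that
# `alg` needs to reach fitness target targets[i], estimated from prior
# benchmark runs. All identifiers and values here are hypothetical.

targets = [10, 20, 30, 40, 50]  # fitness targets; the final target is last
ert = {
    "GA(lambda=1)": [40, 110, 260, 700, 2100],
    "GA(lambda=8)": [90, 180, 330, 620, 1300],
}

def switch_cost(a1: str, a2: str, i: int) -> float:
    """Estimated total ERT when running a1 until target i is reached,
    then switching to a2 for the remainder of the run."""
    # Cost of the first phase, plus the extra effort a2 is expected to
    # need beyond target i to reach the final target.
    return ert[a1][i] + (ert[a2][-1] - ert[a2][i])

# Enumerate all (first algorithm, switch target, second algorithm)
# policies; a1 == a2 recovers the static (non-switching) baselines.
best = min(
    ((a1, targets[i], a2, switch_cost(a1, a2, i))
     for a1 in ert for a2 in ert for i in range(len(targets))),
    key=lambda policy: policy[-1],
)
print("best one-shot policy:", best)
```

With these illustrative numbers, switching from the first configuration to the second at an intermediate target is predicted to outperform both static choices, which is precisely the kind of data-driven, before-the-run decision the abstract describes.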