
    Benchmarking and analyzing iterative optimization heuristics with IOHprofiler

    Algorithms and the Foundations of Software Technology

    IOHanalyzer: Performance Analysis for Iterative Optimization Heuristics

    Benchmarking and performance analysis play an important role in understanding the behaviour of iterative optimization heuristics (IOHs) such as local search algorithms, genetic and evolutionary algorithms, Bayesian optimization algorithms, etc. This task, however, involves manual setup, execution, and analysis of each experiment on an individual basis, which is laborious and can be mitigated by a generic and well-designed platform. For this purpose, we propose IOHanalyzer, a new user-friendly tool for the analysis, comparison, and visualization of performance data of IOHs. Implemented in R and C++, IOHanalyzer is fully open source and available on CRAN and GitHub. IOHanalyzer provides detailed statistics about fixed-target running times and fixed-budget performance of the benchmarked algorithms on real-valued, single-objective optimization tasks. Performance aggregation over several benchmark problems is possible, for example in the form of empirical cumulative distribution functions. Key advantages of IOHanalyzer over other performance analysis packages are its highly interactive design, which allows users to specify the performance measures, ranges, and granularity that are most useful for their experiments, and the possibility to analyze not only performance traces but also the evolution of dynamic state parameters. IOHanalyzer can directly process performance data from the main benchmarking platforms, including the COCO platform, Nevergrad, and our own IOHexperimenter. An R programming interface is provided for users who prefer finer control over the implemented functionalities.
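
    To make the fixed-target aggregation above concrete, here is a minimal Python sketch of how an empirical cumulative distribution function (ECDF) of running times can be computed. It illustrates the concept only; it is not IOHanalyzer's implementation (which is in R and C++), and the running-time values are invented for illustration.

```python
import numpy as np

def fixed_target_ecdf(running_times, budgets):
    """ECDF of running times: for each budget b, the fraction of
    (run, target) pairs whose target was reached within b function
    evaluations. Unreached targets are encoded as np.inf and count
    as failures at every budget."""
    rt = np.asarray(running_times, dtype=float)
    return np.array([(rt <= b).mean() for b in budgets])

# Invented data: evaluations needed by two algorithms to reach a set
# of targets over several runs (inf = target never reached).
rts_a = [120, 340, 95, np.inf, 210]
rts_b = [400, 380, 520, 610, np.inf]

budgets = np.logspace(1, 3, 20)  # 10 .. 1000 evaluations
print(fixed_target_ecdf(rts_a, budgets))
print(fixed_target_ecdf(rts_b, budgets))
```

    Aggregating over several benchmark problems then amounts to pooling the (run, target) pairs of all problems into one such curve.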

    Leveraging Benchmarking Data for Informed One-Shot Dynamic Algorithm Selection

    Get PDF
    A key challenge in the application of evolutionary algorithms in practice is the selection of the algorithm instance that best suits the problem at hand. What complicates this decision further is that different algorithms may be best suited for different stages of the optimization process. Dynamic algorithm selection and configuration are therefore well-researched topics in evolutionary computation. However, while hyper-heuristics and parameter control studies typically assume a setting in which the algorithm needs to be chosen during the run, without prior information, AutoML approaches such as hyper-parameter tuning and automated algorithm configuration assume the possibility of evaluating different configurations before making a final recommendation. In practice, however, we are often in a middle ground between these two settings, where we need to decide on the algorithm instance before the run (the "one-shot" setting), but where we have (possibly large amounts of) data available on which we can base an informed decision. We analyze in this work how such prior performance data can be used to infer informed dynamic algorithm selection schemes for the solution of pseudo-Boolean optimization problems. Our specific use case considers a family of genetic algorithms.
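
    As a sketch of how prior benchmarking data can inform such a one-shot decision, the Python example below selects the target at which to switch from one algorithm to another by minimizing a simple additive cost model built from expected running times (ERTs). The cost model, the name best_switch_point, and the toy ERT curves are assumptions made for illustration, not the paper's exact method.

```python
import numpy as np

def best_switch_point(ert_a, ert_b, targets):
    """One-shot switch-point selection between algorithms A and B.

    ert_a[i] / ert_b[i]: expected running time (in evaluations) of each
    algorithm to reach targets[i], estimated from prior benchmark data.
    The policy 'run A until target t, then B to the final target' is
    modeled with the additive cost ert_a[t] + (ert_b[-1] - ert_b[t]);
    we return the target minimizing this cost."""
    cost = ert_a + (ert_b[-1] - ert_b)
    i = int(np.argmin(cost))
    return targets[i], cost[i]

# Toy ERT profiles on a pseudo-Boolean problem with targets 0..100:
# A makes cheap early progress but stalls, B progresses at a steady rate.
targets = np.arange(101)
ert_a = 0.05 * targets.astype(float) ** 2
ert_b = 5.0 * targets.astype(float)

switch_at, total = best_switch_point(ert_a, ert_b, targets)
print(f"switch from A to B at target {switch_at}, expected cost {total:.0f}")
# -> switches at target 50; either algorithm alone would cost about 500.
```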

    Benchmarking a (μ+λ) Genetic Algorithm with Configurable Crossover Probability

    Get PDF
    We investigate a family of (μ+λ) Genetic Algorithms (GAs) which create offspring either by mutation or by recombining two randomly chosen parents. By scaling the crossover probability, we can thus interpolate from a fully mutation-only algorithm towards a fully crossover-based GA. We analyze, by empirical means, how the performance depends on the interplay of population size and crossover probability. Our comparison on 25 pseudo-Boolean optimization problems reveals an advantage of crossover-based configurations on several easy optimization tasks, whereas the picture for more complex optimization problems is rather mixed. Moreover, we observe that the "fast" mutation scheme, with its power-law distributed mutation strengths, outperforms standard bit mutation on complex optimization tasks when it is combined with crossover, but performs worse in the absence of crossover. We then take a closer look at the surprisingly good performance of the crossover-based (μ+λ) GAs on the well-known LeadingOnes benchmark problem. We observe that the optimal crossover probability increases with increasing population size μ. At the same time, it decreases with increasing problem dimension, indicating that the advantages of crossover are not visible in the asymptotic view classically applied in runtime analysis. We therefore argue that a mathematical investigation for fixed dimensions might help us observe effects that are not visible when focusing exclusively on asymptotic performance bounds.
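
    The algorithm family is simple to state. The sketch below implements a (μ+λ) GA on OneMax where each offspring is created either by uniform crossover of two random parents (with probability pc) or by mutating a single parent; both standard bit mutation and a power-law ("fast") mutation variant are provided. The concrete choices here (uniform crossover, power-law exponent beta = 1.5, flipping exactly k bits) are illustrative assumptions, not necessarily the paper's exact setup.

```python
import random

def onemax(x):
    """Toy benchmark: number of 1-bits."""
    return sum(x)

def standard_bit_mutation(x, rng):
    """Flip each bit independently with probability 1/n."""
    n = len(x)
    return [b ^ (rng.random() < 1.0 / n) for b in x]

def fast_mutation(x, rng, beta=1.5):
    """Power-law ("fast") mutation: draw strength k with weight
    k^(-beta) on {1, ..., n//2}, then flip exactly k distinct bits."""
    n = len(x)
    ks = range(1, n // 2 + 1)
    k = rng.choices(ks, weights=[j ** -beta for j in ks])[0]
    y = x[:]
    for i in rng.sample(range(n), k):
        y[i] ^= 1
    return y

def uniform_crossover(p1, p2, rng):
    """Take each bit from either parent with equal probability."""
    return [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]

def mu_plus_lambda_ga(n=100, mu=10, lam=10, pc=0.5,
                      mutate=standard_bit_mutation,
                      budget=50_000, seed=0):
    """(mu+lambda) GA: each offspring is created EITHER by crossover
    (probability pc) OR by mutation, never both."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(mu)]
    evals = mu
    while evals < budget and onemax(max(pop, key=onemax)) < n:
        offspring = []
        for _ in range(lam):
            if rng.random() < pc:
                p1, p2 = rng.sample(pop, 2)
                offspring.append(uniform_crossover(p1, p2, rng))
            else:
                offspring.append(mutate(rng.choice(pop), rng))
            evals += 1
        pop = sorted(pop + offspring, key=onemax, reverse=True)[:mu]
    return evals, onemax(pop[0])

print(mu_plus_lambda_ga(pc=0.0))                        # mutation-only
print(mu_plus_lambda_ga(pc=0.5))                        # mixed
print(mu_plus_lambda_ga(pc=0.5, mutate=fast_mutation))  # crossover + fast mutation
```

    Setting pc = 0 recovers a mutation-only (μ+λ) EA, while pc close to 1 yields an almost purely crossover-based GA, which is exactly the interpolation studied above.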