
    On the impact of the cutoff time on the performance of algorithm configurators

    Algorithm configurators are automated methods to optimise the parameters of an algorithm for a class of problems. We evaluate the performance of a simple random local search configurator (ParamRLS) for tuning the neighbourhood size k of the RLS_k algorithm. We measure performance as the expected number of configuration evaluations required to identify the optimal value for the parameter. We analyse the impact of the cutoff time κ (the time spent evaluating a configuration for a problem instance) on the expected number of configuration evaluations required to find the optimal parameter value, where we compare configurations using either best found fitness values (ParamRLS-F) or optimisation times (ParamRLS-T). We consider tuning RLS_k for a variant of the Ridge function class (Ridge*), where the performance of each parameter value does not change during the run, and for the OneMax function class, where longer runs favour smaller k. We rigorously prove that ParamRLS-F efficiently tunes RLS_k for Ridge* for any κ, while ParamRLS-T requires at least quadratic κ. For OneMax, ParamRLS-F identifies k = 1 as optimal with linear κ, while ParamRLS-T requires a κ of at least Ω(n log n). For smaller κ, ParamRLS-F identifies that k > 1 performs better, while ParamRLS-T returns a k chosen uniformly at random.
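The tuning loop the abstract describes can be sketched in a few lines. This is an illustrative simplification, not the authors' exact procedure: `rls_k`, `param_rls_f_step`, the ±1 parameter mutation, and the use of OneMax as the target are all choices made here for the sketch. ParamRLS-F compares two parameter values by running the target algorithm for the cutoff time κ under each and keeping the value whose run found the better fitness.

```python
import random

def rls_k(fitness, n, k, kappa):
    """Run RLS_k (flip exactly k distinct random bits per step, accept if not
    worse) for kappa steps on an n-bit string; return the best fitness seen."""
    x = [random.randint(0, 1) for _ in range(n)]
    best = fitness(x)
    for _ in range(kappa):
        y = x[:]
        for i in random.sample(range(n), k):
            y[i] ^= 1
        if fitness(y) >= fitness(x):
            x = y
        best = max(best, fitness(x))
    return best

def param_rls_f_step(fitness, n, k_current, k_max, kappa):
    """One ParamRLS-F-style comparison (simplified sketch): perturb the
    parameter k by +-1, evaluate both values with cutoff kappa, and keep
    whichever achieved the better best-found fitness."""
    k_new = min(max(k_current + random.choice([-1, 1]), 1), k_max)
    if rls_k(fitness, n, k_new, kappa) > rls_k(fitness, n, k_current, kappa):
        return k_new
    return k_current

def onemax(x):
    """OneMax: the number of one-bits; maximised at the all-ones string."""
    return sum(x)
```

Iterating `param_rls_f_step` performs a random local search over the parameter space, with κ controlling how much evidence each comparison is based on.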

    A Framework for the Runtime Analysis of Algorithm Configurators

    Despite the widespread usage of algorithm configurators to tune algorithmic parameters, there is still little theoretical understanding of their performance. In this thesis, we build a theoretical foundation for the field of algorithm configuration to enable the derivation of specific statements regarding the performance of algorithm configurators. We use the devised framework to prove tight bounds on the time required by specific configurators to identify the optimal parameter values of randomised local search and simple evolutionary algorithms for standard benchmark function classes. Our framework allows us to derive insights regarding the impact of the parameters of algorithm configurators, in particular the cutoff time and performance metric used to compare configurations, as well as to characterise parameter landscapes. In the general case, we present necessary lower bounds and sufficient upper bounds on the cutoff time if the time taken to reach a specific target fitness value is used as the performance metric. For specific simple algorithm configuration scenarios, we show that our general lower bounds are tight and that the same optimal parameter values can be identified using smaller cutoff times if the performance metric is instead taken to be the fitness value obtained within the available time budget, which also reduces the required amount of problem-specific information. Our insights enable the design of mutation operators that are provably asymptotically faster for unimodal and approximately unimodal parameter landscapes and slower by only a logarithmic factor in the worst case. In addition to our contributions to the theory of algorithm configuration, the mathematical techniques derived in this thesis represent a substantial improvement over the state-of-the-art in the field of fixed-budget analysis.

    The Configurable SAT Solver Challenge (CSSC)

    It is well known that different solution strategies work well for different types of instances of hard combinatorial problems. As a consequence, most solvers for the propositional satisfiability problem (SAT) expose parameters that allow them to be customized to a particular family of instances. In the international SAT competition series, these parameters are ignored: solvers are run using a single default parameter setting (supplied by the authors) for all benchmark instances in a given track. While this competition format rewards solvers with robust default settings, it does not reflect the situation faced by a practitioner who only cares about performance on one particular application and can invest some time into tuning solver parameters for this application. The new Configurable SAT Solver Competition (CSSC) compares solvers in this latter setting, scoring each solver by the performance it achieved after a fully automated configuration step. This article describes the CSSC in more detail, and reports the results obtained in its two instantiations so far, CSSC 2013 and 2014.

    MO-ParamILS: A Multi-objective Automatic Algorithm Configuration Framework

    Automated algorithm configuration procedures play an increasingly important role in the development and application of algorithms for a wide range of computationally challenging problems. Until very recently, these configuration procedures were limited to optimising a single performance objective, such as the running time or solution quality achieved by the algorithm being configured. However, in many applications there is more than one performance objective of interest. This gives rise to the multi-objective automatic algorithm configuration problem, which involves finding a Pareto set of configurations of a given target algorithm that characterises trade-offs between multiple performance objectives. In this work, we introduce MO-ParamILS, a multi-objective extension of the state-of-the-art single-objective algorithm configuration framework ParamILS, and demonstrate that it produces good results on several challenging bi-objective algorithm configuration scenarios compared to a baseline obtained from using a state-of-the-art single-objective algorithm configurator.
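The Pareto set mentioned in the abstract has a standard definition that is easy to make concrete. The sketch below is generic textbook code, not taken from MO-ParamILS: each configuration is represented only by its vector of objective values (lower taken as better), and `dominates`/`pareto_set` are names chosen here.

```python
def dominates(a, b):
    """True iff objective vector a Pareto-dominates b: a is no worse in every
    objective (lower is better) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_set(points):
    """Keep only the non-dominated points; these characterise the trade-offs
    between the objectives (e.g. running time vs. solution quality)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

For example, with objectives (running time, solution cost), the point (2, 3) is dominated by (1, 3) and would be filtered out, while (1, 3), (2, 2) and (3, 1) are mutually incomparable and all remain in the Pareto set.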

    Fast perturbative algorithm configurators

    Recent work has shown that the ParamRLS and ParamILS algorithm configurators can tune some simple randomised search heuristics for standard benchmark functions in linear expected time in the size of the parameter space. In this paper we prove a linear lower bound on the expected time to optimise any parameter tuning problem for ParamRLS, ParamILS as well as for larger classes of algorithm configurators. We propose a harmonic mutation operator for perturbative algorithm configurators that provably tunes single-parameter algorithms in polylogarithmic time for unimodal and approximately unimodal (i.e., non-smooth, rugged with an underlying gradient towards the optimum) parameter spaces. It is suitable as a general-purpose operator since even on worst-case (e.g., deceptive) landscapes it is only by at most a logarithmic factor slower than the default ones used by ParamRLS and ParamILS. An experimental analysis confirms the superiority of the approach in practice for a number of configuration scenarios, including ones involving more than one parameter.
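The idea behind a harmonic mutation operator can be sketched concretely. This is an illustrative reading under assumptions made here (integer parameter range, ±d step, clamping as the out-of-range repair; the function name is ours): the step size d is drawn with probability proportional to 1/d, so small local moves dominate but large jumps retain a non-negligible chance, which is what allows fast progress on (approximately) unimodal parameter landscapes while costing at most a logarithmic factor elsewhere.

```python
import random

def harmonic_mutation(k, k_min, k_max):
    """Perturb an integer parameter k by a step of size d, where d is drawn
    from {1, ..., k_max - k_min} with probability proportional to 1/d
    (the harmonic distribution), in a uniformly random direction."""
    m = k_max - k_min
    if m == 0:
        return k  # degenerate single-value range
    weights = [1.0 / d for d in range(1, m + 1)]
    d = random.choices(range(1, m + 1), weights=weights)[0]
    step = d if random.random() < 0.5 else -d
    # clamp back into the feasible range (one of several possible repairs)
    return min(max(k + step, k_min), k_max)
```

Compared with the uniform or ±1 steps used by default, the 1/d weighting concentrates probability near the current value without ever ruling out an escape across the whole range.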

    Statistical Comparison of Algorithm Performance Through Instance Selection

    Empirical performance evaluations, in competitions and scientific publications, play a major role in improving the state of the art in solving many automated reasoning problems, including SAT, CSP and Bayesian network structure learning (BNSL). To empirically demonstrate the merit of a new solver usually requires extensive experiments, with computational costs of CPU years. This not only makes it difficult for researchers with limited access to computational resources to test their ideas and publish their work, but also consumes large amounts of energy. We propose an approach for comparing the performance of two algorithms: by performing runs on carefully chosen instances, we obtain a probabilistic statement on which algorithm performs best, trading off between the computational cost of running algorithms and the confidence in the result. We describe a set of methods for this purpose and evaluate their efficacy on diverse datasets from SAT, CSP and BNSL. On all these datasets, most of our approaches were able to choose the correct algorithm with about 95% accuracy, while using less than a third of the CPU time required for a full comparison; the best methods reach this level of accuracy within less than 15% of the CPU time for a full comparison.
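One simple way to realise such a cost/confidence trade-off is a sequential per-instance win/loss test. The sketch below is a generic illustration, not the paper's method: it compares two solvers instance by instance and stops as soon as a Beta(1,1)-posterior says one solver wins on a random instance with probability at least the requested threshold. All names (`compare`, `posterior_prob_a_better`) and the uniform instance order are assumptions of this sketch.

```python
import random
from math import comb

def posterior_prob_a_better(wins_a, wins_b):
    """Posterior probability (uniform prior) that A's per-instance win rate
    exceeds 1/2, given the wins observed so far (ties discarded). Uses the
    binomial-tail identity for the Beta(wins_a+1, wins_b+1) distribution."""
    n = wins_a + wins_b
    return sum(comb(n + 1, j) for j in range(wins_a + 1)) / 2 ** (n + 1)

def compare(run_a, run_b, instances, threshold=0.95):
    """Run both algorithms on instances in random order; stop early once one
    algorithm is better with the requested probability. Returns ('A'|'B'|None,
    confidence), where run_* map an instance to a running time (lower wins)."""
    wins_a = wins_b = 0
    p = 0.5  # prior: no evidence either way
    for inst in random.sample(instances, len(instances)):
        ta, tb = run_a(inst), run_b(inst)
        if ta < tb:
            wins_a += 1
        elif tb < ta:
            wins_b += 1
        p = posterior_prob_a_better(wins_a, wins_b)
        if p >= threshold:
            return 'A', p
        if 1 - p >= threshold:
            return 'B', 1 - p
    return None, max(p, 1 - p)  # budget exhausted without a verdict
```

If one solver is consistently faster, the posterior crosses 0.95 after only a handful of instances, which is exactly the kind of early stopping that saves the bulk of the CPU time of a full comparison.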