    Hyperparameter Importance Across Datasets

    With the advent of automated machine learning, automated hyperparameter optimization methods are by now routinely used in data mining. However, this progress is not yet matched by equal progress on automatic analyses that yield information beyond performance-optimizing hyperparameter settings. In this work, we aim to answer the following two questions: Given an algorithm, what are generally its most important hyperparameters, and what are typically good values for these? We present methodology and a framework to answer these questions based on meta-learning across many datasets. We apply this methodology using the experimental meta-data available on OpenML to determine the most important hyperparameters of support vector machines, random forests and AdaBoost, and to infer priors for all their hyperparameters. The results, obtained fully automatically, provide a quantitative basis to focus efforts in both manual algorithm design and in automated hyperparameter optimization. The conducted experiments confirm that the hyperparameters selected by the proposed method are indeed the most important ones and that the obtained priors also lead to statistically significant improvements in hyperparameter optimization. Comment: © 2018. Copyright is held by the owner/author(s). Publication rights licensed to ACM. This is the author's version of the work. It is posted here for your personal use, not for redistribution. The definitive Version of Record was published in the Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.
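
    As a hedged illustration of the idea (not the paper's exact functional-ANOVA pipeline), the sketch below fits a random-forest surrogate to hyperparameter/performance meta-data and reads off importance scores; the synthetic SVM runs stand in for the OpenML meta-data.
```python
# Minimal sketch: estimate hyperparameter importance from performance meta-data
# with a random-forest surrogate. The synthetic data below is a stand-in for
# OpenML run results; the impurity-based importances are a cheap proxy for
# functional ANOVA scores, not the paper's method.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical meta-data: sampled SVM hyperparameters and observed accuracy.
n_runs = 500
log_C = rng.uniform(-5, 15, n_runs)        # log2(C)
log_gamma = rng.uniform(-15, 3, n_runs)    # log2(gamma)
tol = rng.uniform(1e-5, 1e-1, n_runs)
# Toy response: accuracy depends on gamma and C, and not at all on tol.
accuracy = (0.9 - 0.002 * (log_gamma + 6) ** 2
            - 0.001 * (log_C - 5) ** 2
            + 0.01 * rng.standard_normal(n_runs))

X = np.column_stack([log_C, log_gamma, tol])
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, accuracy)

# Rank hyperparameters by the surrogate's importance scores.
for name, imp in zip(["log_C", "log_gamma", "tol"], surrogate.feature_importances_):
    print(f"{name:10s} importance ~ {imp:.3f}")
```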

    An Extensive Evaluation of Portfolio Approaches for Constraint Satisfaction Problems

    In the context of Constraint Programming, a portfolio approach exploits the complementary strengths of a portfolio of different constraint solvers. The goal is to predict and run the best solver(s) of the portfolio for solving a new, unseen problem. In this work we reproduce, simulate, and evaluate the performance of different portfolio approaches on extensive benchmarks of Constraint Satisfaction Problems. Empirical results clearly show the benefits of portfolio solvers in terms of both solved instances and solving time.
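
    The sketch below is a hypothetical, minimal portfolio along these lines: a classifier predicts the fastest solver for each instance from its features, and the portfolio is compared against the single best solver. The random features and runtimes are placeholders for real CSP benchmark data, not the approaches evaluated in the paper.
```python
# Minimal prediction-based portfolio sketch over synthetic instances.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_instances, n_features, n_solvers = 300, 10, 4

features = rng.standard_normal((n_instances, n_features))
runtimes = rng.exponential(scale=50.0, size=(n_instances, n_solvers))

# Label each instance with its fastest solver and train a classifier on it.
best_solver = runtimes.argmin(axis=1)
X_tr, X_te, y_tr, y_te, rt_tr, rt_te = train_test_split(
    features, best_solver, runtimes, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
chosen = clf.predict(X_te)

# Compare the portfolio's total solving time against the single best solver.
portfolio_time = rt_te[np.arange(len(chosen)), chosen].sum()
single_best_time = rt_te.sum(axis=0).min()
print(f"portfolio: {portfolio_time:.0f}s  vs  single best solver: {single_best_time:.0f}s")
```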

    ASlib: A Benchmark Library for Algorithm Selection

    The task of algorithm selection involves choosing an algorithm from a set of algorithms on a per-instance basis in order to exploit the varying performance of algorithms over a set of instances. The algorithm selection problem is attracting increasing attention from researchers and practitioners in AI. Years of fruitful applications in a number of domains have resulted in a large amount of data, but the community lacks a standard format or repository for this data. This situation makes it difficult to share and compare different approaches effectively, as is done in other, more established fields. It also unnecessarily hinders new researchers who want to work in this area. To address this problem, we introduce a standardized format for representing algorithm selection scenarios and a repository that contains a growing number of data sets from the literature. Our format has been designed to be able to express a wide variety of different scenarios. Demonstrating the breadth and power of our platform, we describe a set of example experiments that build and evaluate algorithm selection models through a common interface. The results display the potential of algorithm selection to achieve significant performance improvements across a broad range of problems and algorithms. Comment: Accepted to be published in the Artificial Intelligence Journal.
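
    The sketch below shows one plausible way to consume such a scenario by hand, computing the oracle (virtual best solver) and single-best-solver baselines from algorithm_runs.arff; the scenario directory name and the column order are assumptions to check against the scenario's description.txt.
```python
# Minimal sketch of reading an ASlib scenario's run data, assuming a scenario
# directory (e.g. downloaded from aslib.net) sits next to this script. The
# directory name and the column order (instance_id, repetition, algorithm,
# runtime, runstatus) are assumptions that vary per scenario.
import csv
from collections import defaultdict
from pathlib import Path

scenario_dir = Path("SAT11-INDU")  # hypothetical local scenario folder
runs_file = scenario_dir / "algorithm_runs.arff"

# The ARFF data section is plain CSV; drop header (@...) and comment (%) lines.
with runs_file.open() as fh:
    data_lines = [ln for ln in fh if ln.strip() and not ln.lstrip().startswith(("@", "%"))]

perf = defaultdict(dict)  # instance_id -> {algorithm: runtime}
for row in csv.reader(data_lines):
    instance_id, _rep, algorithm, runtime, _status = (f.strip().strip("'\"") for f in row)
    if runtime == "?":
        continue  # skip missing measurements
    perf[instance_id][algorithm] = float(runtime)

# Two standard baselines: the oracle (virtual best solver) and the single best solver.
algorithms = {a for times in perf.values() for a in times}
oracle = sum(min(times.values()) for times in perf.values())
single_best = min(
    sum(times.get(a, float("inf")) for times in perf.values()) for a in algorithms
)
print(f"oracle (VBS): {oracle:.0f}s   single best solver: {single_best:.0f}s")
```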

    Feature Selection for SUNNY: a Study on the Algorithm Selection Library

    Given a collection of algorithms, the Algorithm Selection (AS) problem consists of identifying which of them is the best one for solving a given problem. The selection depends on a set of numerical features that characterize the problem to solve. In this paper we show the impact of feature selection techniques on the performance of the SUNNY algorithm selector, taking as reference the benchmarks of the AS library (ASlib). Results indicate that a handful of features is enough to reach performance similar to, if not better than, that of the original SUNNY approach that uses all the available features. We also present sunny-as: a tool for using SUNNY on a generic ASlib scenario.
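
    As a rough sketch of the idea (not the sunny-as tool itself), the example below runs forward feature selection around a SUNNY-like k-NN selector on synthetic features and runtimes.
```python
# Forward feature selection for a k-NN based selector on synthetic data,
# standing in for an ASlib scenario.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
n_instances, n_features, n_solvers = 400, 20, 3
features = rng.standard_normal((n_instances, n_features))

# Make only the first three features informative about which solver wins.
runtimes = rng.exponential(50.0, (n_instances, n_solvers))
runtimes[:, 0] += 40 * (features[:, 0] > 0)
runtimes[:, 1] += 40 * (features[:, 1] > 0)
runtimes[:, 2] += 40 * (features[:, 2] > 0)
best_solver = runtimes.argmin(axis=1)

# Greedily pick the few features that best help the k-NN selector.
knn = KNeighborsClassifier(n_neighbors=10)
selector = SequentialFeatureSelector(knn, n_features_to_select=3,
                                     direction="forward", cv=5)
selector.fit(features, best_solver)
print("selected feature indices:", np.flatnonzero(selector.get_support()))
```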

    Data-driven Metareasoning for Collaborative Autonomous Systems

    When coordinating their actions to accomplish a mission, the agents in a multi-agent system may use a collaboration algorithm to determine which agent performs which task. This paper describes a novel data-driven metareasoning approach that generates a metareasoning policy that the agents can use whenever they must collaborate to assign tasks. This metareasoning approach collects data about the performance of the algorithms at many decision points and uses this data to train a set of surrogate models that can estimate the expected performance of different algorithms. This yields a metareasoning policy that, based on the current state of the system, estimates the algorithms' expected performance and chooses the best one. For a ship protection scenario, computational results show that one version of the metareasoning policy performed as well as the best component algorithm but required less computational effort. The proposed data-driven metareasoning approach could be a promising tool for developing policies to control multi-agent autonomous systems. This work was supported in part by the U.S. Naval Air Warfare Center-Aircraft Division.
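
    A minimal sketch of this surrogate-based scheme, with hypothetical state features, algorithm names, and logged performance in place of the paper's ship protection data: one regressor per collaboration algorithm, and a policy that picks the algorithm with the best predicted performance.
```python
# One surrogate model per collaboration algorithm, trained on logged decisions;
# the metareasoning policy picks the algorithm with the best prediction.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
n_decisions, n_state_features = 500, 6
algorithms = ["greedy", "auction", "optimal"]  # hypothetical algorithm names

states = rng.standard_normal((n_decisions, n_state_features))
# Toy logged performance for each algorithm at each past decision point.
logged_perf = {
    name: states @ rng.standard_normal(n_state_features)
    + rng.standard_normal(n_decisions)
    for name in algorithms
}

# Train one surrogate per algorithm on the logged data.
surrogates = {
    name: GradientBoostingRegressor(random_state=0).fit(states, perf)
    for name, perf in logged_perf.items()
}

def metareasoning_policy(state):
    """Pick the algorithm whose surrogate predicts the highest performance."""
    preds = {name: m.predict(state.reshape(1, -1))[0] for name, m in surrogates.items()}
    return max(preds, key=preds.get)

print(metareasoning_policy(rng.standard_normal(n_state_features)))
```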

    Identifying Key Algorithm Parameters and Instance Features using Forward Selection

    Most state-of-the-art algorithms for large scale optimization expose free parameters, giving rise to combinatorial spaces of possible configurations. Typically, these spaces are hard for humans to understand. In this work, we study a model-based approach for identifying a small set of both algorithm parameters and instance features that suffices for predicting empirical algorithm performance well. Our empirical analyses on a wide variety of hard combinatorial problem benchmarks (spanning SAT, MIP, and TSP) show that, for parameter configurations sampled uniformly at random, very good performance predictions can typically be obtained based on just two key parameters, and that similarly, few instance features and algorithm parameters suffice to predict the most salient algorithm performance characteristics in the combined configuration/feature space. We also use these models to identify settings of these key parameters that are predicted to achieve the best overall performance, both on average across instances and in an instance-specific way. This serves as a further way of evaluating model quality and also provides a tool for further understanding the parameter space. We provide software for carrying out this analysis on arbitrary problem domains and hope that it will help algorithm developers gain insights into the key parameters of their algorithms, the key features of their instances, and their interactions.
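
    The sketch below illustrates the forward-selection idea on synthetic data: greedily add the parameter or instance-feature column that most reduces the cross-validated error of a performance model. It is a simplified stand-in, not the software the authors provide.
```python
# Greedy forward selection of key columns for an empirical performance model.
# The synthetic data is a placeholder for real configuration/feature meta-data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_runs, n_columns = 600, 12
columns = [f"col_{i}" for i in range(n_columns)]  # parameters + instance features
X = rng.uniform(size=(n_runs, n_columns))
# Toy runtime dominated by two columns, so forward selection should find them.
y = 10 * X[:, 2] + 5 * X[:, 7] ** 2 + 0.1 * rng.standard_normal(n_runs)

def cv_error(idx):
    """Cross-validated RMSE of a random-forest performance model on columns idx."""
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    return -cross_val_score(model, X[:, idx], y, cv=3,
                            scoring="neg_root_mean_squared_error").mean()

selected = []
for _ in range(3):  # keep only a few key columns
    remaining = [j for j in range(n_columns) if j not in selected]
    best = min(remaining, key=lambda j: cv_error(selected + [j]))
    selected.append(best)
    print(f"added {columns[best]}, CV RMSE with {selected} = {cv_error(selected):.3f}")
```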