20 research outputs found

    A Multicore Tool for Constraint Solving

    *** To appear in IJCAI 2015 proceedings *** In Constraint Programming (CP), a portfolio solver uses a variety of different solvers for solving a given Constraint Satisfaction / Optimization Problem. In this paper we introduce sunny-cp2: the first parallel CP portfolio solver that enables a dynamic, cooperative, and simultaneous execution of its solvers in a multicore setting. It incorporates state-of-the-art solvers and also provides a usable and configurable framework. Empirical results are very promising: sunny-cp2 can even outperform the oracle solver, which always selects the best solver of the portfolio for a given problem.
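
    A minimal sketch of the cooperative parallel-execution idea, assuming a multicore machine: several solvers attack the same instance concurrently and the first answer wins. This only illustrates the general principle, not sunny-cp2's actual scheduling; the solver names and solve callables below are hypothetical placeholders.

from concurrent.futures import ThreadPoolExecutor, as_completed

def run_portfolio(instance, solvers, timeout=300):
    """Run all solvers on `instance` in parallel; return the first answer.

    `solvers` is a list of (name, solve_fn) pairs; a solve_fn returns a
    solution or None. Raises TimeoutError if the whole budget expires.
    """
    with ThreadPoolExecutor(max_workers=len(solvers)) as pool:
        futures = {pool.submit(fn, instance): name for name, fn in solvers}
        for done in as_completed(futures, timeout=timeout):
            result = done.result()
            if result is not None:  # first solver to succeed wins
                return futures[done], result
    return None, None  # no solver found an answer within the budget

# Usage with two dummy stand-ins for real CP back-ends:
solvers = [("fast_but_weak", lambda inst: None),
           ("slow_but_strong", lambda inst: "SATISFIABLE")]
print(run_portfolio("some-instance", solvers))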

    AutoFolio: An Automatically Configured Algorithm Selector (Extended Abstract)

    Article in monograph or in proceedings, Leiden Inst Advanced Computer Science

    Warmstarting of Model-based Algorithm Configuration

    The performance of many hard combinatorial problem solvers depends strongly on their parameter settings, and since manual parameter tuning is both tedious and suboptimal, the AI community has recently developed several algorithm configuration (AC) methods to automatically address this problem. While all existing AC methods start the configuration process of an algorithm A from scratch for each new type of benchmark instances, here we propose to exploit information about A's performance on previous benchmarks in order to warmstart its configuration on new types of benchmarks. We introduce two complementary ways in which we can exploit this information to warmstart AC methods based on a predictive model. Experiments for optimizing a very flexible modern SAT solver on twelve different instance sets show that our methods often yield substantial speedups over existing AC methods (up to 165-fold) and can also find substantially better configurations given the same compute budget. Comment: Preprint of AAAI'18 paper.
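
    The warmstarting idea lends itself to a short sketch: performance data gathered on earlier benchmarks trains a predictive model, whose predictions then seed the first evaluations on a new benchmark instead of a random initial design. This is a hedged illustration, not the paper's method: the data below is synthetic, and the random forest merely stands in for whatever empirical performance model the AC procedure maintains.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Configurations evaluated on previous benchmarks: rows are parameter
# vectors, y holds measured runtimes (lower is better). Purely synthetic.
X_prev = rng.uniform(0, 1, size=(200, 4))
y_prev = (X_prev[:, 0] - 0.3) ** 2 + 0.1 * rng.standard_normal(200)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_prev, y_prev)

# Warmstart: instead of a random initial design, rank fresh candidate
# configurations by the model learned on old benchmarks and evaluate
# the most promising ones first on the new benchmark.
candidates = rng.uniform(0, 1, size=(1000, 4))
warmstart = candidates[np.argsort(model.predict(candidates))[:10]]
print(warmstart[0])  # best-looking configuration to try first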

    An Extensive Evaluation of Portfolio Approaches for Constraint Satisfaction Problems

    In the context of Constraint Programming, a portfolio approach exploits the complementary strengths of a portfolio of different constraint solvers. The goal is to predict and run the best solver(s) of the portfolio for solving a new, unseen problem. In this work we reproduce, simulate, and evaluate the performance of different portfolio approaches on extensive benchmarks of Constraint Satisfaction Problems. Empirical results clearly show the benefits of portfolio solvers in terms of both solved instances and solving time.
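
    At their core, most portfolio approaches reduce to a learned mapping from instance features to the solver expected to perform best. A minimal sketch under that assumption, with synthetic features and hypothetical solver names in place of real CSP data:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
features = rng.uniform(0, 1, size=(300, 5))  # stand-ins for instance features
best_solver = np.where(features[:, 0] > 0.5, "solverA", "solverB")

# Learn which solver tends to win in each region of the feature space.
selector = KNeighborsClassifier(n_neighbors=5).fit(features, best_solver)

new_instance = rng.uniform(0, 1, size=(1, 5))
print(selector.predict(new_instance))  # solver to schedule first

    In practice the features would be static and dynamic properties extracted from the CSP instance, and the prediction would decide which solver, or schedule of solvers, to run.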

    AutoFolio: An Automatically Configured Algorithm Selector (Extended Abstract)

    Algorithms and the Foundations of Software Technology

    Efficient Benchmarking of Algorithm Configuration Procedures via Model-Based Surrogates

    The optimization of algorithm (hyper-)parameters is crucial for achieving peak performance across a wide range of domains, from deep neural networks to solvers for hard combinatorial problems. The resulting algorithm configuration (AC) problem has attracted much attention from the machine learning community. However, the proper evaluation of new AC procedures is hindered by two key hurdles. First, AC benchmarks are hard to set up. Second and even more significantly, they are computationally expensive: a single run of an AC procedure involves many costly runs of the target algorithm whose performance is to be optimized in a given AC benchmark scenario. One common workaround is to optimize cheap-to-evaluate artificial benchmark functions (e.g., Branin) instead of actual algorithms; however, these have different properties than realistic AC problems. Here, we propose an alternative benchmarking approach that is similarly cheap to evaluate but much closer to the original AC problem: replacing expensive benchmarks with surrogate benchmarks constructed from AC benchmarks. These surrogate benchmarks approximate the response surface corresponding to true target algorithm performance using a regression model, and the original and surrogate benchmark share the same (hyper-)parameter space. In our experiments, we construct and evaluate surrogate benchmarks for hyperparameter optimization as well as for AC problems that involve performance optimization of solvers for hard combinatorial problems, drawing training data from the runs of existing AC procedures. We show that our surrogate benchmarks capture the important overall characteristics of the AC scenarios from which they were derived, such as high- and low-performing regions, while being much easier to use and orders of magnitude cheaper to evaluate.
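
    The surrogate construction can be sketched directly: fit a regression model on (configuration, performance) pairs logged by earlier AC runs, then expose the model's prediction as a cheap stand-in for the expensive target algorithm. Everything below is illustrative; real surrogates would be trained on logged runs of actual AC procedures over the algorithm's true parameter space.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
configs = rng.uniform(0, 1, size=(500, 6))          # logged configurations
runtimes = np.exp(configs @ rng.uniform(-1, 1, 6))  # logged runtimes (synthetic)

# The regression model approximates the target algorithm's response surface.
surrogate_model = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate_model.fit(configs, runtimes)

def surrogate_benchmark(config):
    """Milliseconds instead of hours: predicted runtime for one configuration."""
    return float(surrogate_model.predict(np.asarray(config).reshape(1, -1))[0])

# An AC procedure under evaluation now queries the model, not the solver.
print(surrogate_benchmark(rng.uniform(0, 1, 6)))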