A Multi-Engine Approach to Answer Set Programming
Answer Set Programming (ASP) is a truly declarative programming paradigm
proposed in the area of non-monotonic reasoning and logic programming that has
recently been employed in many applications. The development of efficient ASP
systems is thus crucial. There are two usual ways to improve ASP solving
methods: extending state-of-the-art techniques and ASP solvers, or designing a
new ASP solver from scratch. An alternative to these trends is to build on top
of state-of-the-art solvers and to apply machine learning techniques for
automatically choosing the "best" available solver on a per-instance basis.
In this paper we pursue this latter direction. We first define a set of
cheap-to-compute syntactic features that characterize several aspects of ASP
programs. Then, we apply classification methods that, given the features of the
instances in a training set and the solvers' performance on these instances,
inductively learn algorithm selection strategies to be applied to a test set.
We report the results of a number of experiments considering solvers and
different training and test sets of instances taken from those submitted to the
"System Track" of the 3rd ASP Competition. Our analysis shows that, by applying
machine learning techniques to ASP solving, it is possible to obtain very
robust performance: our approach can solve more instances than any solver that
entered the 3rd ASP Competition. (To appear in Theory and Practice of Logic
Programming (TPLP).)
Comment: 26 pages, 8 figures.
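To make the selection step concrete, here is a minimal sketch of per-instance solver selection via classification, assuming scikit-learn is available; the feature names, training data, and solver labels are invented for illustration and are not the paper's actual setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: one row of cheap syntactic features per
# ASP program (e.g., number of rules, atoms, constraints); the label is
# the solver that was fastest on that instance (all values invented).
X_train = np.array([
    [1200, 450, 30],
    [80,   60,  5],
    [5000, 900, 210],
])
y_train = np.array(["clasp", "dlv", "clasp"])  # best solver per instance

selector = RandomForestClassifier(n_estimators=100, random_state=0)
selector.fit(X_train, y_train)

# At solving time: extract the same features from the new program and
# run whichever solver the classifier predicts to be fastest.
new_instance = np.array([[3100, 700, 90]])
print(selector.predict(new_instance))
```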
AutoFolio: An Automatically Configured Algorithm Selector (Extended Abstract)
Article in monograph or in proceedings. Leiden Institute of Advanced Computer Science.
Efficient Benchmarking of Algorithm Configuration Procedures via Model-Based Surrogates
The optimization of algorithm (hyper-)parameters is crucial for achieving
peak performance across a wide range of domains, ranging from deep neural
networks to solvers for hard combinatorial problems. The resulting algorithm
configuration (AC) problem has attracted much attention from the machine
learning community. However, the proper evaluation of new AC procedures is
hindered by two key hurdles. First, AC benchmarks are hard to set up. Second
and even more significantly, they are computationally expensive: a single run
of an AC procedure involves many costly runs of the target algorithm whose
performance is to be optimized in a given AC benchmark scenario. One common
workaround is to optimize cheap-to-evaluate artificial benchmark functions
(e.g., Branin) instead of actual algorithms; however, these have different
properties than realistic AC problems. Here, we propose an alternative
benchmarking approach that is similarly cheap to evaluate but much closer to
the original AC problem: replacing expensive benchmarks by surrogate benchmarks
constructed from AC benchmarks. These surrogate benchmarks approximate the
response surface corresponding to true target algorithm performance using a
regression model, and the original and surrogate benchmark share the same
(hyper-)parameter space. In our experiments, we construct and evaluate
surrogate benchmarks for hyperparameter optimization as well as for AC problems
that involve performance optimization of solvers for hard combinatorial
problems, drawing training data from the runs of existing AC procedures. We
show that our surrogate benchmarks capture important overall characteristics
of the AC scenarios from which they were derived, such as high- and
low-performing regions, while being much easier to use and orders of magnitude
cheaper to evaluate.
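A surrogate benchmark replaces the expensive target-algorithm run with a regression model's prediction over the same parameter space. Below is a minimal sketch of that idea, assuming scikit-learn; the configuration space, the timing data, and the surrogate_benchmark helper are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Logged (configuration -> runtime) pairs from earlier AC runs.
# Columns: two numeric hyperparameters of the target algorithm (invented).
configs  = np.array([[0.1, 10], [0.5, 50], [0.9, 20], [0.3, 80]])
runtimes = np.array([12.0, 3.5, 40.2, 7.1])  # seconds (illustrative)

surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(configs, runtimes)

def surrogate_benchmark(config):
    """Cheap stand-in for actually running the target algorithm:
    return the model's predicted runtime for this configuration."""
    return float(surrogate.predict(np.asarray(config).reshape(1, -1))[0])

# An AC procedure can now be evaluated by querying the surrogate
# instead of paying for a real, possibly hours-long, target run.
print(surrogate_benchmark([0.4, 30]))
```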
Automatic Algorithm Selection for Pseudo-Boolean Optimization with Given Computational Time Limits
Machine learning (ML) techniques have been proposed to automatically select
the best solver from a portfolio of solvers, based on predicted performance.
These techniques have been applied to various problems, such as Boolean
Satisfiability, Traveling Salesperson, Graph Coloring, and others.
These methods, known as meta-solvers, take an instance of a problem and a
portfolio of solvers as input. They then predict the best-performing solver and
execute it to deliver a solution. Typically, the quality of the solution
improves with a longer computational time. This has led to the development of
anytime selectors, which consider both the instance and a user-prescribed
computational time limit. Anytime meta-solvers predict the best-performing
solver within the specified time limit.
Constructing an anytime meta-solver is considerably more challenging than
building a meta-solver without the "anytime" feature. In this study, we focus
on the task of designing anytime meta-solvers for the NP-hard optimization
problem of Pseudo-Boolean Optimization (PBO), which generalizes Satisfiability
and Maximum Satisfiability problems. The effectiveness of our approach is
demonstrated via an extensive empirical study in which our anytime meta-solver
dramatically improves on the performance of the Mixed Integer Programming
solver Gurobi, the best-performing single solver in the portfolio. For example,
of all the instances and time limits for which Gurobi failed to find feasible
solutions, our meta-solver identified feasible solutions for 47% of them.
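One way to read the "anytime" idea is that the user's time limit becomes part of the selector's input, so different budgets can lead to different solver choices. The sketch below illustrates that reading with one regression model per solver, assuming scikit-learn; the portfolio, features, and all numbers are invented, and this is not the paper's actual model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Illustrative per-solver training data: each row is (instance size,
# density, time limit); each label is the solution quality the solver
# reached within that limit (all numbers invented).
DATA = {
    "gurobi":      (np.array([[100, 0.2, 10], [100, 0.2, 300], [5000, 0.8, 300]]),
                    np.array([0.0, 0.9, 0.5])),
    "roundingsat": (np.array([[100, 0.2, 10], [100, 0.2, 300], [5000, 0.8, 300]]),
                    np.array([0.4, 0.6, 0.6])),
}

models = {s: RandomForestRegressor(random_state=0).fit(X, y)
          for s, (X, y) in DATA.items()}

def select(instance_features, time_limit):
    """Predict each solver's quality within the budget; pick the best."""
    feats = np.array([list(instance_features) + [time_limit]])
    return max(models, key=lambda s: models[s].predict(feats)[0])

print(select([2000, 0.5], 10))    # a tight budget may favour one solver...
print(select([2000, 0.5], 3600))  # ...and a generous budget another
```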
AutoFolio: An Automatically Configured Algorithm Selector (Extended Abstract)
Algorithms and the Foundations of Software Technology
sunny-as2: Enhancing SUNNY for Algorithm Selection
SUNNY is an Algorithm Selection (AS) technique originally tailored for
Constraint Programming (CP). Given a CP problem, SUNNY schedules a subset of
solvers from a portfolio to be run on it. This approach has
proved to be effective for CP problems, and its parallel version won many gold
medals in the Open category of the MiniZinc Challenge -- the yearly
international competition for CP solvers. In 2015, the ASlib benchmarks were
released for comparing AS systems coming from disparate fields (e.g., ASP, QBF,
and SAT) and SUNNY was extended to deal with generic AS problems. This led to
the development of sunny-as2, an algorithm selector based on SUNNY for ASlib
scenarios. A preliminary version of sunny-as2 was submitted to the Open
Algorithm Selection Challenge (OASC) in 2017, where it turned out to be the
best approach for the runtime minimization of decision problems. In this work,
we present the technical advancements of sunny-as2, including: (i)
wrapper-based feature selection; (ii) a training approach combining feature
selection and neighbourhood size configuration; (iii) the application of nested
cross-validation. We show how the performance of sunny-as2 varies across the
considered AS scenarios, and we discuss its strengths and weaknesses. Finally,
we show how sunny-as2 improves on the preliminary version submitted to OASC.
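For intuition on how a SUNNY-style schedule is built, here is a simplified sketch: find the k nearest training instances in feature space, pick a subset of solvers covering them (a greedy set cover here, where SUNNY searches for a minimal subset), and allocate time slots proportional to the number of neighbours each solver solved. The solver names, feature values, and the sunny_schedule helper are invented for illustration.

```python
import numpy as np

def sunny_schedule(inst_feats, train_feats, solved, k=3, budget=1800,
                   backup="chuffed"):
    """solved[s] is the set of training-instance indices solver s solves."""
    # 1. k nearest neighbours of the incoming instance in feature space.
    dists = np.linalg.norm(train_feats - inst_feats, axis=1)
    neigh = set(np.argsort(dists)[:k].tolist())

    # 2. Greedily pick solvers covering the most still-unsolved neighbours.
    chosen, uncovered = [], set(neigh)
    while uncovered:
        best = max(solved, key=lambda s: len(solved[s] & uncovered))
        if not solved[best] & uncovered:
            break  # no solver helps on the remaining neighbours
        chosen.append(best)
        uncovered -= solved[best]

    # 3. Time slots proportional to neighbours solved; leftover weight
    #    (for neighbours nobody solves) goes to the backup solver.
    slots = {s: len(solved[s] & neigh) for s in chosen}
    slots[backup] = slots.get(backup, 0) + len(uncovered)
    total = sum(slots.values())
    return {s: budget * n / total for s, n in slots.items()}

train = np.array([[1.0, 0.2], [0.9, 0.3], [5.0, 2.0], [4.8, 1.9]])
solved = {"gecode": {0, 1}, "chuffed": {2}, "or-tools": {3}}
print(sunny_schedule(np.array([1.0, 0.25]), train, solved))
```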