On QBF Proofs and Preprocessing
QBFs (quantified Boolean formulas), which are a superset of propositional
formulas, provide a canonical representation for PSPACE problems. To overcome
the inherent complexity of QBF, significant effort has been invested in
developing QBF solvers as well as the underlying proof systems. At the same
time, formula preprocessing is crucial for the application of QBF solvers. This
paper focuses on a missing link in currently available technology: how to
obtain a certificate (e.g., a proof) for a formula that was preprocessed before
being given to a solver? The paper targets a suite of commonly used
preprocessing techniques and shows how to reconstruct certificates for them. On
the negative side, the paper discusses certain limitations of the currently
used proof systems in light of preprocessing. The presented
techniques were implemented and evaluated in the state-of-the-art QBF
preprocessor bloqqer.
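To make the object of study concrete, here is a minimal sketch (illustrative, not taken from the paper) of what a QBF adds over a propositional formula: a quantifier prefix over the variables, which can be evaluated by expanding each quantifier.

# Illustrative sketch, not the paper's code: evaluating a tiny QBF by
# quantifier expansion. The formula forall x exists y. (x or y) and
# (not x or not y) is true, since y can always be chosen as x's negation.

def holds(prefix, matrix, assignment=()):
    """Evaluate a QBF given as a quantifier prefix and a matrix
    (a function from a complete assignment to a bool)."""
    if not prefix:
        return matrix(*assignment)
    quantifier, rest = prefix[0], prefix[1:]
    branches = [holds(rest, matrix, assignment + (v,)) for v in (False, True)]
    return all(branches) if quantifier == "forall" else any(branches)

print(holds(("forall", "exists"),
            lambda x, y: (x or y) and (not x or not y)))  # True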
Efficient Benchmarking of Algorithm Configuration Procedures via Model-Based Surrogates
The optimization of algorithm (hyper-)parameters is crucial for achieving
peak performance across a wide range of domains, from deep neural
networks to solvers for hard combinatorial problems. The resulting algorithm
configuration (AC) problem has attracted much attention from the machine
learning community. However, the proper evaluation of new AC procedures is
hindered by two key hurdles. First, AC benchmarks are hard to set up. Second,
and even more significantly, they are computationally expensive: a single run
of an AC procedure involves many costly runs of the target algorithm whose
performance is to be optimized in a given AC benchmark scenario. One common
workaround is to optimize cheap-to-evaluate artificial benchmark functions
(e.g., Branin) instead of actual algorithms; however, these have different
properties than realistic AC problems. Here, we propose an alternative
benchmarking approach that is similarly cheap to evaluate but much closer to
the original AC problem: replacing expensive benchmarks by surrogate benchmarks
constructed from AC benchmarks. These surrogate benchmarks approximate the
response surface corresponding to true target algorithm performance using a
regression model, and the original and surrogate benchmarks share the same
(hyper-)parameter space. In our experiments, we construct and evaluate
surrogate benchmarks for hyperparameter optimization as well as for AC problems
that involve performance optimization of solvers for hard combinatorial
problems, drawing training data from the runs of existing AC procedures. We
show that our surrogate benchmarks capture important overall characteristics
of the AC scenarios from which they were derived, such as high- and
low-performing regions, while being much easier to use and orders of magnitude
cheaper to evaluate.
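A minimal sketch of the surrogate idea, assuming scikit-learn and synthetic logged data (the function and variable names below are illustrative, not the paper's code): fit a regression model on (configuration, performance) pairs collected from earlier configurator runs, then answer benchmark queries from the model instead of running the target algorithm.

# Hedged sketch: a random-forest surrogate over logged runtime data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_logged = rng.uniform(0, 1, size=(500, 3))  # configurations tried by an AC procedure
y_logged = np.exp(X_logged.sum(axis=1)) + rng.normal(0, 0.1, 500)  # synthetic runtimes

surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X_logged, y_logged)

def surrogate_benchmark(config):
    """Cheap stand-in for running the real target algorithm with `config`."""
    return float(surrogate.predict(np.asarray(config).reshape(1, -1))[0])

print(surrogate_benchmark([0.2, 0.5, 0.9]))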
The Configurable SAT Solver Challenge (CSSC)
It is well known that different solution strategies work well for different
types of instances of hard combinatorial problems. As a consequence, most
solvers for the propositional satisfiability problem (SAT) expose parameters
that allow them to be customized to a particular family of instances. In the
international SAT competition series, these parameters are ignored: solvers are
run using a single default parameter setting (supplied by the authors) for all
benchmark instances in a given track. While this competition format rewards
solvers with robust default settings, it does not reflect the situation faced
by a practitioner who only cares about performance on one particular
application and can invest some time into tuning solver parameters for this
application. The new Configurable SAT Solver Challenge (CSSC) compares
solvers in this latter setting, scoring each solver by the performance it
achieved after a fully automated configuration step. This article describes the
CSSC in more detail and reports the results obtained in its two instantiations
so far, CSSC 2013 and CSSC 2014.
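The setting can be illustrated with a toy configuration step (the CSSC itself used full-fledged configurators such as SMAC, ParamILS, and GGA; the parameter space, instances, and ./solver command line below are hypothetical):

# Hedged sketch: random search over solver parameters, minimizing mean
# runtime on training instances; the best configuration found would then
# be scored on held-out test instances, as in the CSSC.
import random
import subprocess
import time

PARAM_SPACE = {"restarts": [100, 1000, 10000], "phase": ["pos", "neg", "rand"]}
INSTANCES = ["inst1.cnf", "inst2.cnf"]  # hypothetical training instances

def runtime(params, instance, timeout=60):
    cmd = ["./solver", instance] + [f"--{k}={v}" for k, v in params.items()]
    start = time.time()
    try:
        subprocess.run(cmd, timeout=timeout, capture_output=True)
        return time.time() - start
    except subprocess.TimeoutExpired:
        return 10 * timeout  # PAR10-style timeout penalty

def configure(budget=20):
    best, best_cost = None, float("inf")
    for _ in range(budget):
        params = {k: random.choice(v) for k, v in PARAM_SPACE.items()}
        cost = sum(runtime(params, i) for i in INSTANCES) / len(INSTANCES)
        if cost < best_cost:
            best, best_cost = params, cost
    return best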
Automated metamorphic testing of variability analysis tools
Variability determines the capability of software applications to be configured and customized. A common
need during the development of variability-intensive systems is the automated analysis of their underlying
variability models, e.g. detecting contradictory configuration options. The analysis operations that are
performed on variability models are often very complex, which hinders the testing of the corresponding
analysis tools and makes it difficult, often infeasible, to determine the correctness of their
outputs, i.e., the well-known oracle problem in software testing. In this article, we present a
generic approach for
the automated detection of faults in variability analysis tools, overcoming the oracle problem. Our work
enables the generation of random variability models together with the exact set of valid configurations
represented by these models. These test data are generated from scratch using step-wise
transformations, ensuring that certain constraints (a.k.a. metamorphic relations) hold at each
step. To show the feasibility and generalizability of our approach, we used it to automatically
test several analysis tools in three variability domains: feature models, CUDF documents, and
Boolean formulas. Among other results, we
detected 19 real bugs in 7 out of the 15 tools under test.
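A minimal sketch of the underlying idea, with an illustrative feature-model encoding that is not the authors' tooling: grow a model step by step while maintaining the exact set of valid configurations alongside it, so that the set serves as a free oracle for any analysis tool under test.

# Hedged sketch: each transformation is a metamorphic relation that tells
# us exactly how the set of valid configurations changes.

def seed_model():
    # A model with only a root feature; its single valid configuration is {root}.
    return {"features": ["root"]}, [{"root"}]

def add_mandatory(model, configs, parent, child):
    # A mandatory child appears in every configuration containing its parent.
    model["features"].append(child)
    return model, [c | {child} if parent in c else c for c in configs]

def add_optional(model, configs, parent, child):
    # An optional child doubles the configurations containing its parent.
    model["features"].append(child)
    return model, configs + [c | {child} for c in configs if parent in c]

model, configs = seed_model()
model, configs = add_mandatory(model, configs, "root", "engine")
model, configs = add_optional(model, configs, "root", "gps")
print(configs)  # the oracle: a correct tool must report exactly these configurations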