Efficient Certified Resolution Proof Checking
We present a novel propositional proof tracing format that eliminates complex
processing, thus enabling efficient (formal) proof checking. The benefits of
this format are demonstrated by implementing a proof checker in C, which
outperforms a state-of-the-art checker by two orders of magnitude. We then
formalize the theory underlying propositional proof checking in Coq, and
extract a correct-by-construction proof checker for our format from the
formalization. An empirical evaluation using 280 unsatisfiable instances from
the 2015 and 2016 SAT competitions shows that this certified checker usually
performs comparably to a state-of-the-art non-certified proof checker. Using
this format, we formally verify the recent 200 TB proof of the Boolean
Pythagorean Triples conjecture.
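At the core of any resolution proof checker, certified or not, is recomputing a claimed resolvent and comparing it against the clause recorded in the trace. The sketch below shows that single step under assumed data structures (zero-terminated DIMACS-style literal arrays); it illustrates the operation a tracing format must make cheap to verify, and is not the paper's C checker.

    /* Hedged sketch: resolve two clauses on a pivot variable. Clauses
       are zero-terminated arrays of DIMACS-style literals; the layout
       and names are illustrative assumptions. */
    #include <stdio.h>

    /* Append literal l to out unless already present; return new length. */
    static int push_unique(int *out, int n, int l) {
        for (int i = 0; i < n; i++)
            if (out[i] == l) return n;
        out[n] = l;
        return n + 1;
    }

    /* Resolve c1 (containing pivot) with c2 (containing -pivot) and
       write the resolvent to out. Returns its length, or -1 if the
       resolvent is tautological. */
    int resolve(const int *c1, const int *c2, int pivot, int *out) {
        int n = 0;
        for (int i = 0; c1[i]; i++)
            if (c1[i] != pivot) n = push_unique(out, n, c1[i]);
        for (int i = 0; c2[i]; i++)
            if (c2[i] != -pivot) n = push_unique(out, n, c2[i]);
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (out[i] == -out[j]) return -1;   /* tautology */
        return n;
    }

    int main(void) {
        int c1[] = {1, 2, 0}, c2[] = {-1, 3, 0}, out[8];
        int n = resolve(c1, c2, 1, out);            /* expect: 2 3 */
        for (int i = 0; i < n; i++) printf("%d ", out[i]);
        printf("\n");
        return 0;
    }

A checker built on such a trace only needs to verify that each recorded clause equals the recomputed resolvent of its stated antecedents, which is what makes the format amenable to formalization.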
Efficient Certified RAT Verification
Clausal proofs have become a popular approach to validate the results of SAT
solvers. However, validating clausal proofs in the most widely supported format
(DRAT) is expensive even in highly optimized implementations. We present a new
format, called LRAT, which extends the DRAT format with hints that facilitate a
simple and fast validation algorithm. Checking validity of LRAT proofs can be
implemented using trusted systems such as the languages supported by theorem
provers. We demonstrate this by implementing two certified LRAT checkers, one
in Coq and one in ACL2.
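The effect of the hints can be made concrete. To validate a learned clause C, a checker assumes the negation of C and then propagates exactly the clauses the hints name, in order, until one of them is falsified; no search is needed. The sketch below is a hedged illustration of that idea under assumed data structures, not the LRAT specification or either certified checker.

    /* Hedged sketch of hint-guided clause checking in the spirit of
       LRAT: assume the negation of the learned clause C, then process
       the hinted clauses in order; each must become unit (extending the
       assignment) until one is falsified, which certifies C. */
    #include <stdio.h>
    #include <string.h>

    #define MAXVAR 100
    static int val[2 * MAXVAR + 1];  /* val[MAXVAR + l] == 1 iff literal l is true */

    static int lit_true(int l)  { return val[MAXVAR + l]; }
    static int lit_false(int l) { return val[MAXVAR - l]; }
    static void assign(int l)   { val[MAXVAR + l] = 1; }

    /* clauses[i] is a zero-terminated literal array; hints lists the
       indices of clauses to propagate, in order. Returns 1 iff the
       hints produce a conflict, certifying the learned clause. */
    int check_hints(int **clauses, const int *learned,
                    const int *hints, int nhints) {
        memset(val, 0, sizeof val);
        for (int i = 0; learned[i]; i++)
            assign(-learned[i]);                 /* assume negation of C */
        for (int h = 0; h < nhints; h++) {
            const int *c = clauses[hints[h]];
            int unit = 0, open = 0;
            for (int i = 0; c[i]; i++) {
                if (lit_true(c[i])) return 0;    /* satisfied: hint is useless */
                if (!lit_false(c[i])) { unit = c[i]; open++; }
            }
            if (open == 0) return 1;             /* conflict reached: C certified */
            if (open > 1)  return 0;             /* not unit: invalid hint */
            assign(unit);                        /* propagate the forced literal */
        }
        return 0;                                /* hints exhausted, no conflict */
    }

    int main(void) {
        int c0[] = {1, 2, 0}, c1[] = {-2, 3, 0};
        int *cls[] = {c0, c1};
        int learned[] = {1, 3, 0};               /* claim: (x1 or x3) follows */
        int hints[] = {0, 1};
        printf("certified: %d\n", check_hints(cls, learned, hints, 2));
        return 0;
    }

Because each hinted clause must immediately become unit or falsified, the checker needs no watched-literal machinery and no clause selection, which is what makes the validation algorithm simple enough to implement in trusted systems such as Coq and ACL2.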
Solving and Verifying the Boolean Pythagorean Triples Problem via Cube-and-Conquer
We solved a long-standing open problem in Ramsey theory using SAT solving.
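The propositional encoding behind this result is compact enough to sketch: one variable per integer (its color in a 2-coloring of {1, ..., n}), and for every Pythagorean triple a^2 + b^2 = c^2 two clauses forbidding a monochromatic triple. The generator below is a hedged illustration with a small bound; the proved theorem is that n = 7824 is colorable while n = 7825 is not.

    /* Hedged sketch: emit a DIMACS CNF for 2-coloring {1,...,n} with no
       monochromatic Pythagorean triple. Variable i true means integer i
       gets color A. The bound n here is illustrative only. */
    #include <stdio.h>

    int main(void) {
        const int n = 100;
        int m = 0;
        for (int a = 1; a <= n; a++)          /* count clauses for the header */
            for (int b = a; b <= n; b++)
                for (int c = b; c <= n; c++)
                    if (a * a + b * b == c * c) m += 2;
        printf("p cnf %d %d\n", n, m);
        for (int a = 1; a <= n; a++)
            for (int b = a; b <= n; b++)
                for (int c = b; c <= n; c++)
                    if (a * a + b * b == c * c) {
                        printf("%d %d %d 0\n", a, b, c);     /* not all color B */
                        printf("-%d -%d -%d 0\n", a, b, c);  /* not all color A */
                    }
        return 0;
    }

Cube-and-conquer then splits the resulting formula into a large number of subproblems (cubes) that can be attacked in parallel, which is what made producing and checking a proof of this size feasible.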
Entanglement dynamics of two qubits under the influence of external kicks and Gaussian pulses
We have investigated, both analytically and numerically, the dynamics of
entanglement between two spin-1/2 qubits subject to independent kick- and
Gaussian-pulse-type external magnetic fields. The Dyson time-ordering effect
on the dynamics is found to be important for sequences of kicks. We show that
"almost-steady" high entanglement can be created between two initially
unentangled qubits by using carefully designed kick or pulse sequences.
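The abstract does not name its entanglement measure; for two qubits the standard quantity is the Wootters concurrence, so taking it as the measure here is an assumption, though the formula itself is standard:

    C(\rho) = \max(0,\ \lambda_1 - \lambda_2 - \lambda_3 - \lambda_4),

where \lambda_1 \ge \lambda_2 \ge \lambda_3 \ge \lambda_4 are the square roots of the eigenvalues of \rho\,(\sigma_y \otimes \sigma_y)\,\rho^{*}\,(\sigma_y \otimes \sigma_y). "Almost-steady" high entanglement can then be read as a concurrence that stays near 1 between pulses.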
Evaluating QBF Solvers: Quantifier Alternations Matter
We present an experimental study of the effects of quantifier alternations on
the evaluation of quantified Boolean formula (QBF) solvers. The number of
quantifier alternations in a QBF in prenex conjunctive normal form (PCNF) is
directly related to the theoretical hardness of the respective QBF
satisfiability problem in the polynomial hierarchy. We show empirically that
the performance of solvers based on different solving paradigms varies
substantially depending on the number of alternations in PCNFs. In related
theoretical work, quantifier alternations have become central to
understanding the strengths and weaknesses of various QBF proof systems
implemented in solvers. Our results motivate the development of methods to
evaluate orthogonal solving paradigms by taking quantifier alternations into
account. This is necessary to showcase the broad range of existing QBF solving
paradigms for practical QBF applications. Moreover, we highlight the potential
of combining different approaches and QBF proof systems in solvers.
Comment: preprint of a paper to be published at CP 2018, LNCS, Springer, including appendix
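Concretely, the number of alternations of a PCNF is the number of adjacent quantifier blocks in the prefix with different quantifiers; a prefix of shape exists-forall-exists has two alternations and its evaluation problem sits at the third level of the polynomial hierarchy. The sketch below counts alternations from a QDIMACS-style prefix; the simplified input handling is an assumption, not any particular tool's parser.

    /* Hedged sketch: count quantifier alternations in a QDIMACS-style
       prefix read from stdin. Each prefix line starts with 'e'
       (existential block) or 'a' (universal block); an alternation is a
       change of quantifier between adjacent blocks. */
    #include <stdio.h>

    int main(void) {
        char line[65536];
        char prev = 0;
        int alternations = 0;
        while (fgets(line, sizeof line, stdin)) {
            if (line[0] != 'a' && line[0] != 'e')
                continue;                  /* skip comments, header, clauses */
            if (prev != 0 && line[0] != prev)
                alternations++;
            prev = line[0];
        }
        printf("%d alternations\n", alternations);
        return 0;
    }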
DepQBF 6.0: A Search-Based QBF Solver Beyond Traditional QCDCL
We present the latest major release version 6.0 of the quantified Boolean
formula (QBF) solver DepQBF, which is based on QCDCL. QCDCL is an extension of
the conflict-driven clause learning (CDCL) paradigm implemented in
state-of-the-art propositional satisfiability (SAT) solvers. The Q-resolution calculus
(QRES) is a QBF proof system which underlies QCDCL. QCDCL solvers can produce
QRES proofs of QBFs in prenex conjunctive normal form (PCNF) as a byproduct of
the solving process. In contrast to traditional QCDCL based on QRES, DepQBF 6.0
implements a variant of QCDCL which is based on a generalization of QRES. This
generalization is due to a set of additional axioms and leaves the original
Q-resolution rules unchanged. The generalization of QRES enables QCDCL to
potentially produce exponentially shorter proofs than the traditional variant.
We present an overview of the features implemented in DepQBF and report on
experimental results which demonstrate the effectiveness of generalized QRES in
QCDCL.Comment: 12 pages + appendix; to appear in the proceedings of CADE-26, LNCS,
Springer, 201
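For reference, traditional QRES combines two rules, written here in my own notation, which the paper's presentation may refine. Resolution is restricted to existential pivots:

    \frac{C_1 \lor x \qquad C_2 \lor \bar{x}}{C_1 \lor C_2} \quad (x \text{ existential, } C_1 \lor C_2 \text{ non-tautological})

and universal reduction removes a universal literal on which no remaining existential literal depends:

    \frac{C \lor u}{C} \quad (u \text{ universal, quantified after every existential literal of } C)

The generalization in DepQBF 6.0 leaves both rules as they are and instead adds clause-introducing axioms, which is where the potential for exponentially shorter proofs comes from.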
Encoding Redundancy for Satisfaction-Driven Clause Learning
Satisfaction-Driven Clause Learning (SDCL) is a recent SAT solving paradigm that aggressively trims the search space of possible truth assignments. To determine if the SAT solver is currently exploring a dispensable part of the search space, SDCL uses the so-called positive reduct of a formula: The positive reduct is an easily solvable propositional formula that is satisfiable if the current assignment of the solver can be safely pruned from the search space. In this paper, we present two novel variants of the positive reduct that allow for even more aggressive pruning. Using one of these variants allows SDCL to solve harder problems, in particular the well-known Tseitin formulas and mutilated chessboard problems. For the first time, we are able to generate and automatically check clausal proofs for large instances of these problems.
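The construction is easy to sketch under one common formulation, which I assume here: the positive reduct keeps every clause of F satisfied by the current assignment, restricted to its literals over assigned variables, and adds the clause blocking the assignment itself. This is a hedged reading of the SDCL literature, not necessarily the exact variant the paper builds on.

    /* Hedged sketch of building a positive reduct under the assumed
       formulation above. Illustrative only. */
    #include <stdio.h>
    #include <stdlib.h>

    /* alpha[v] is +1/-1 if variable v is assigned true/false, 0 if unassigned. */
    void print_positive_reduct(int **clauses, int nclauses,
                               const int *alpha, int nvars) {
        for (int i = 0; i < nclauses; i++) {
            const int *c = clauses[i];
            int satisfied = 0;
            for (int j = 0; c[j]; j++)
                if (alpha[abs(c[j])] == (c[j] > 0 ? 1 : -1)) satisfied = 1;
            if (!satisfied) continue;            /* keep satisfied clauses only */
            for (int j = 0; c[j]; j++)           /* ...restricted to assigned vars */
                if (alpha[abs(c[j])] != 0) printf("%d ", c[j]);
            printf("0\n");
        }
        for (int v = 1; v <= nvars; v++)         /* blocking clause: negation of alpha */
            if (alpha[v] != 0) printf("%d ", -alpha[v] * v);
        printf("0\n");
    }

    int main(void) {
        int c0[] = {1, 2, 0}, c1[] = {-1, 3, 0};
        int *cls[] = {c0, c1};
        int alpha[4] = {0, 1, 0, 0};             /* assignment: x1 = true */
        print_positive_reduct(cls, 2, alpha, 3); /* prints "1 0" and "-1 0" */
        return 0;
    }

If the resulting formula is satisfiable, the current assignment can be pruned and its blocking clause learned.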
Intravenous to Oral Antimicrobial Stepdown Implementation at the Calgary General Hospital, Calgary, Alberta
Derivation reduction of metarules in meta-interpretive learning
Meta-interpretive learning (MIL) is a form of inductive logic programming. MIL uses second-order Horn clauses, called metarules, as a form of declarative bias. Metarules define the structures of learnable programs and thus the hypothesis space. Deciding which metarules to use is a trade-off between efficiency and expressivity. The hypothesis space increases given more metarules, so we wish to use fewer metarules, but if we use too few metarules then we lose expressivity. A recent paper used Progol's entailment reduction algorithm to identify irreducible, or minimal, sets of metarules. In some cases, as few as two metarules were shown to be sufficient to entail all hypotheses in an infinite language. Moreover, it was shown that compared to non-minimal sets, learning with minimal sets of metarules improves predictive accuracies and lowers learning times. In this paper, we show that entailment reduction can be too strong and can remove metarules necessary to make a hypothesis more specific. We describe a new reduction technique based on derivations. Specifically, we introduce the derivation reduction problem, the problem of finding a finite subset of a Horn theory from which the whole theory can be derived using SLD-resolution. We describe a derivation reduction algorithm which we use to reduce sets of metarules. We also theoretically study whether certain sets of metarules can be derivationally reduced to minimal finite subsets. Our experiments compare learning with entailment-reduced and derivation-reduced sets of metarules. In general, using derivation-reduced sets of metarules outperforms using entailment-reduced sets of metarules, both in terms of predictive accuracies and learning times.
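Two metarules that recur throughout the MIL literature make this concrete (standard examples, not necessarily the exact sets studied in the paper), where P, Q, and R are second-order variables and A, B, C are first-order variables:

    identity: P(A,B) \leftarrow Q(A,B)
    chain:    P(A,B) \leftarrow Q(A,C), R(C,B)

Derivation reduction then asks, for example, whether a longer clause such as P(A,B) \leftarrow Q(A,C), R(C,D), S(D,B) is SLD-derivable from the chain metarule (it is, by resolving chain with itself), so that it need not be kept as a primitive metarule.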
A Study of the Learnability of Relational Properties: Model Counting Meets Machine Learning (MCML)
This paper introduces the MCML approach for empirically studying the
learnability of relational properties that can be expressed in the well-known
software design language Alloy. A key novelty of MCML is quantification of the
performance of and semantic differences among trained machine learning (ML)
models, specifically decision trees, with respect to entire (bounded) input
spaces, and not just for given training and test datasets (as is the common
practice). MCML reduces the quantification problems to the classic complexity
theory problem of model counting, and employs state-of-the-art model counters.
The results show that relatively simple ML models can achieve surprisingly high
performance (accuracy and F1-score) when evaluated in the common setting of
using training and test datasets - even when the training dataset is much
smaller than the test dataset - indicating the seeming simplicity of learning
relational properties. However, MCML metrics based on model counting show that
the performance can degrade substantially when tested against the entire
(bounded) input space, indicating the high complexity of precisely learning
these properties, and the usefulness of model counting in quantifying the true
performance.
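The shift from test-set metrics to whole-space metrics can be written as a formula. If the property is encoded as a formula \varphi and the trained decision tree as a formula \psi over the same n Boolean variables, then, in my notation (a hedged reconstruction of the idea rather than MCML's exact definitions):

    \text{accuracy} = \frac{\#(\varphi \land \psi) + \#(\neg\varphi \land \neg\psi)}{2^n},

where \#(\cdot) is the number of models reported by a model counter. Precision, recall, and hence the F1-score follow from the analogous counts \#(\neg\varphi \land \psi) (false positives) and \#(\varphi \land \neg\psi) (false negatives), so no sampling of the input space is needed.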
