Reason Maintenance - State of the Art
This paper describes the state of the art in reason maintenance, with a focus on its future use in the KiWi project. To give a broader picture of the field, it also touches on closely related issues such as non-monotonic logic and paraconsistency. The paper is organized as follows: first, two motivating scenarios from semantic wikis are presented, which are then used to introduce the different reason maintenance techniques.
A DPLL(T) Framework for Verifying Deep Neural Networks
Deep Neural Networks (DNNs) have emerged as an effective approach to tackling
real-world problems. However, like human-written software,
automatically-generated DNNs can have bugs and be attacked. This has
attracted much recent interest in developing effective and scalable DNN
verification techniques and tools. In this work, we introduce NeuralSAT, a
new constraint-solving approach to DNN verification. The design of NeuralSAT
follows the DPLL(T) algorithm used in modern SMT solving, which includes
(conflict) clause learning, abstraction, and theory solving; NeuralSAT can
thus be considered an SMT framework for DNNs. Preliminary results show that
the NeuralSAT prototype is competitive with the state of the art. We hope
that, with proper optimization and engineering, NeuralSAT will carry the
power and success of modern SAT/SMT solvers over to DNN verification.
NeuralSAT is available from: https://github.com/dynaroars/neuralsat-solver
Comment: 27 pages, 8 figures.
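The DPLL(T) skeleton the abstract refers to can be sketched in a few lines. This is a generic illustration, not NeuralSAT's implementation: it uses a naive decision heuristic, omits clause learning and abstraction, and stubs out the theory solver (which in NeuralSAT would reason about the network's constraints):

```python
# Minimal sketch of a DPLL(T)-style loop. Clauses are lists of signed
# integers: literal v is the variable v, -v its negation. The theory_ok
# callback lets a theory solver reject a partial Boolean assignment.

def unit_propagate(clauses, assignment):
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue  # clause already satisfied
            unassigned = [l for l in clause if abs(l) not in assignment]
            if not unassigned:
                return False  # conflict: clause falsified
            if len(unassigned) == 1:
                lit = unassigned[0]  # unit clause forces this literal
                assignment[abs(lit)] = lit > 0
                changed = True
    return True

def dpll_t(clauses, variables, theory_ok, assignment=None):
    assignment = dict(assignment or {})
    if not unit_propagate(clauses, assignment):
        return None
    if not theory_ok(assignment):  # theory solver prunes this branch
        return None
    free = [v for v in variables if v not in assignment]
    if not free:
        return assignment  # full, theory-consistent model
    v = free[0]  # naive decision heuristic
    for value in (True, False):
        result = dpll_t(clauses, variables, theory_ok, {**assignment, v: value})
        if result is not None:
            return result
    return None

# Toy use: (x1 or x2) and (not x1 or x3), with a theory that accepts all.
model = dpll_t([[1, 2], [-1, 3]], [1, 2, 3], lambda a: True)
```

A real DPLL(T) solver replaces the recursion with an iterative trail, learns a conflict clause from each failed branch, and lets the theory solver return explanations for its rejections.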
Distribution-Aware Sampling and Weighted Model Counting for SAT
Given a CNF formula and a weight for each assignment of values to variables,
two natural problems are weighted model counting and distribution-aware
sampling of satisfying assignments. Both problems have a wide variety of
important applications. Due to the inherent complexity of the exact versions of
the problems, interest has focused on solving them approximately. Prior work in
this area scaled only to small problems in practice, or failed to provide
strong theoretical guarantees, or employed a computationally-expensive maximum
a posteriori probability (MAP) oracle that assumes prior knowledge of a
factored representation of the weight distribution. We present a novel approach
that works with a black-box oracle for weights of assignments and requires only
an {\NP}-oracle (in practice, a SAT-solver) to solve both the counting and
sampling problems. Our approach works under mild assumptions on the
distribution of weights of satisfying assignments, provides strong theoretical
guarantees, and scales to problems involving several thousand variables. We
also show that the assumptions can be significantly relaxed while improving
computational efficiency if a factored representation of the weights is known.
Comment: This is a full version of an AAAI 2014 paper.
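For intuition, the two problems can be stated by brute-force enumeration over tiny formulas; the `weight` argument plays the role of the black-box oracle. Exact enumeration like this is precisely what the paper's approximate, hashing-based approach (which needs only a SAT solver) avoids at scale:

```python
import itertools
import random

def models(clauses, n_vars):
    """Enumerate all satisfying assignments of a CNF formula."""
    for bits in itertools.product([False, True], repeat=n_vars):
        assignment = dict(enumerate(bits, start=1))
        if all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses):
            yield assignment

def weighted_count(clauses, n_vars, weight):
    # weight is a black-box oracle: assignment -> non-negative weight
    return sum(weight(m) for m in models(clauses, n_vars))

def weighted_sample(clauses, n_vars, weight, rng=random):
    # Draw one satisfying assignment with probability proportional
    # to its weight (distribution-aware sampling).
    ms = list(models(clauses, n_vars))
    r = rng.uniform(0, sum(weight(m) for m in ms))
    for m in ms:
        r -= weight(m)
        if r <= 0:
            return m
    return ms[-1]

# (x1 or x2), weighting each model by 2^(number of true variables):
# models are {01, 10, 11} with weights 2 + 2 + 4, so the count is 8.
total = weighted_count([[1, 2]], 2, lambda m: 2 ** sum(m.values()))
```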
Automatic plan generation and adaptation by observation : supporting complex human planning
Doctoral thesis. Informatics Engineering. University of Porto. Faculty of Engineering. 201
SAT and CP: Parallelisation and Applications
This thesis is concerned with the parallelisation of solvers which search for either an arbitrary, or an optimum, solution to a problem stated in some formal way. We discuss the parallelisation of two solvers, and their applications, in three chapters.

In the first chapter, we consider SAT, the decision problem of propositional logic, and algorithms for showing the satisfiability or unsatisfiability of propositional formulas. We sketch some proof-theoretic foundations which relate to the strength of different algorithmic approaches. Furthermore, we discuss details of the implementation of SAT solvers, and show how to improve upon existing sequential solvers. Lastly, we discuss the parallelisation of these solvers with a focus on clause exchange, the communication of intermediate results within a parallel solver.

The second chapter is concerned with Constraint Programming (CP) with learning. Contrary to classical Constraint Programming techniques, this incorporates learning mechanisms as they are used in the field of SAT solving. We present results from parallelising CHUFFED, a learning CP solver. As it is both a kind of CP and SAT solver, it is not clear which parallelisation approaches work best here.

In the final chapter, we discuss sorting networks, which are data-oblivious sorting algorithms, i.e., the comparisons they perform do not depend on the input data. This independence from the input data lends them to parallel implementation. We consider the question of how many parallel sorting steps are needed to sort some inputs, and present both lower and upper bounds for several cases.
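As a small illustration of the data-obliviousness discussed in the final chapter (an example of ours, not drawn from the thesis), here is a depth-3 sorting network for four inputs. The comparator sequence is fixed in advance, independent of the data, and the comparators within each step touch disjoint wires, so each step could execute in parallel:

```python
# A 4-input sorting network: 5 compare-exchange gates in 3 parallel steps
# (depth 3 is optimal for n = 4). Comparators in one step are independent.
STEPS = [
    [(0, 1), (2, 3)],  # step 1: two independent compare-exchanges
    [(0, 2), (1, 3)],  # step 2: two independent compare-exchanges
    [(1, 2)],          # step 3: final middle comparison
]

def sort4(xs):
    xs = list(xs)
    for step in STEPS:
        for i, j in step:  # gates within a step could run in parallel
            if xs[i] > xs[j]:
                xs[i], xs[j] = xs[j], xs[i]
    return xs
```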
Why solutions can be hard to find: a featural theory of cost for a local search algorithm on random satisfiability instances
The local search algorithm WSat is one of the most successful algorithms for solving
the archetypal NP-complete problem of satisfiability (SAT). It is notably effective at
solving Random-3-SAT instances near the so-called 'satisfiability threshold', which
are thought to be universally hard. However, WSat still shows a peak in search
cost near the threshold and large variations in cost over different instances. Why
are solutions to the threshold instances so hard to find using WSat? What features
characterise threshold instances which make them difficult for WSat to solve?
We make a number of significant contributions to the analysis of WSat on these
high-cost random instances, using the recently-introduced concept of the backbone
of a SAT instance. The backbone is the set of literals which are implicates of an
instance. We find that the number of solutions predicts the cost well for small-backbone
instances but is much less relevant for the large-backbone instances which appear near
the threshold and dominate in the overconstrained region. We undertake a detailed
study of the behaviour of the algorithm during search and uncover some interesting
patterns. These patterns lead us to introduce a measure of the backbone fragility of
an instance, which indicates how persistent the backbone is as clauses are removed.
We propose that high-cost random instances for WSat are those with large backbones
which are also backbone-fragile. We suggest that the decay in cost for WSat beyond
the satisfiability threshold, which has perplexed a number of researchers, is due to the
decreasing backbone fragility. Our hypothesis makes three correct predictions. First,
that a measure of the backbone robustness of an instance (the opposite to backbone
fragility) is negatively correlated with the WSat cost when other factors are controlled
for. Second, that backbone-minimal instances (which are 3-SAT instances altered so
as to be more backbone-fragile) are unusually hard for WSat. Third, that the clauses
most often unsatisfied during search are those whose deletion has the most effect on
the backbone.
Our analysis of WSat on random-3-SAT threshold instances can be seen as a featural
theory of WSat cost, predicting features of cost behaviour from structural features of
SAT instances. In this thesis, we also present some initial studies which investigate
whether the scope of this featural theory can be broadened to other kinds of random
SAT instance. Random-2+p-SAT interpolates between the polynomial-time problem
Random-2-SAT when p = 0 and Random-3-SAT when p = 1. At some value
p ≈ p₀ ≈ 0.41, a dramatic change in the structural nature of instances is predicted by
statistical mechanics methods, which may imply the appearance of backbone-fragile
instances. We tested Novelty+, a recent variant of WSat, on Random-2+p-SAT
and find some evidence that the growth of its median cost changes from polynomial to
superpolynomial between p = 0.3 and p = 0.5. We also find evidence that it is the
onset of backbone fragility which is the cause of this change in cost scaling: typical
instances at p = 0.5 are more backbone-fragile than their counterparts at p = 0.3.
Not-All-Equal (NAE) 3-SAT is a variant of the SAT problem which is similar
to it in most respects. However, for NAE 3-SAT instances no implicate literals are
possible. Hence the backbone for NAE 3-SAT must be redefined. We show that under
a redefinition of the backbone, the pattern of factors influencing WSat cost at the
NAE Random-3-SAT threshold is much the same as in Random-3-SAT, including
the role of backbone fragility.
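For readers unfamiliar with the algorithm family, a minimal WalkSAT-style loop looks roughly as follows. This is a generic sketch with assumed parameter names, not the exact WSat variant analysed in the thesis:

```python
import random

# Minimal WalkSAT-style local search. Clauses are lists of signed
# integers: literal v is the variable v, -v its negation.
def walksat(clauses, n_vars, max_flips=10000, p=0.5, rng=random):
    assignment = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    sat = lambda c: any(assignment[abs(l)] == (l > 0) for l in c)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not sat(c)]
        if not unsat:
            return assignment              # found a model
        clause = rng.choice(unsat)         # pick an unsatisfied clause
        if rng.random() < p:
            v = abs(rng.choice(clause))    # random-walk move
        else:
            # greedy move: flip the variable leaving the fewest
            # clauses unsatisfied afterwards
            def cost_after_flip(var):
                assignment[var] = not assignment[var]
                cost = sum(1 for c in clauses if not sat(c))
                assignment[var] = not assignment[var]
                return cost
            v = min((abs(l) for l in clause), key=cost_after_flip)
        assignment[v] = not assignment[v]
    return None                            # give up: no model found
```

The search cost studied in the abstract is, roughly, the number of flips such a loop needs before `unsat` becomes empty, averaged over restarts and instances.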