Combining Forward and Backward Abstract Interpretation of Horn Clauses
Alternation of forward and backward analyses is a standard technique in
abstract interpretation of programs, which is in particular useful when we wish
to prove unreachability of some undesired program states. The current
state-of-the-art technique for combining forward (bottom-up, in logic
programming terms) and backward (top-down) abstract interpretation of Horn
clauses is query-answer transformation. It transforms a system of Horn clauses
so that standard forward analysis can propagate constraints both forward and
backward from a goal. Query-answer transformation is effective, but has issues
that we wish to address. For that, we introduce a new backward collecting
semantics, which is suitable for alternating forward and backward abstract
interpretation of Horn clauses. We show how the alternation can be used to
prove unreachability of the goal and how every subsequent run of an analysis
yields a refined model of the system. Experimentally, we observe that combining
forward and backward analyses is important for analysing systems that encode
questions about reachability in C programs. In particular, the combination that
follows our new semantics improves the precision of our own abstract
interpreter, including when compared to a forward analysis of a
query-answer-transformed system.
Comment: Francesco Ranzato. 24th International Static Analysis Symposium (SAS), Aug 2017, New York City, United States. Springer, Static Analysis
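To make the query-answer transformation described above concrete, here is a minimal Python sketch, assuming clauses are represented as (head, body) pairs with constraints elided; the representation and the name qa_transform are illustrative, not taken from the paper.

```python
def qa_transform(clauses, goal):
    # For each clause  h :- b1, ..., bn  emit
    #   h_ans    :- h_query, b1_ans, ..., bn_ans      (answer clause)
    #   bi_query :- h_query, b1_ans, ..., b(i-1)_ans  (query clauses)
    # plus an unconditional query for the goal, so a forward (bottom-up)
    # analysis of the result also propagates information top-down.
    out = [(goal + "_query", [])]
    for head, body in clauses:
        hq = head + "_query"
        out.append((head + "_ans", [hq] + [b + "_ans" for b in body]))
        for i, b in enumerate(body):
            out.append((b + "_query", [hq] + [x + "_ans" for x in body[:i]]))
    return out

if __name__ == "__main__":
    clauses = [("p", []), ("q", ["p"]), ("goal", ["q"])]
    for head, body in qa_transform(clauses, "goal"):
        print(head + " :- " + (", ".join(body) or "true") + ".")
```

In the demo output, each answer predicate is guarded by the corresponding query predicate, so a bottom-up evaluation of the transformed clauses only derives facts reachable from the goal, i.e. constraints now also flow top-down.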
Synthesizing Modular Invariants for Synchronous Code
In this paper, we explore different techniques to synthesize modular
invariants for synchronous code encoded as Horn clauses. Modular invariants are
a set of formulas that characterize the validity of predicates. They are very
useful for different aspects of analysis, synthesis, testing and program
transformation. We describe two techniques to generate modular invariants for
code written in the synchronous dataflow language Lustre. The first technique
directly encodes the synchronous code in a modular fashion, while in the second
we synthesize modular invariants starting from a monolithic invariant. Both
techniques take advantage of analysis techniques based on
property-directed reachability. We also describe a technique to minimize the
synthesized invariants.
Comment: In Proceedings HCVS 2014, arXiv:1412.082
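As an illustration of the kind of encoding involved (a hedged sketch under our own assumptions, not the paper's actual translation), a Lustre-style counter node can be viewed as Horn clauses over an initial and a step relation, and a modular invariant is a predicate closed under both; the toy check below verifies a candidate by bounded enumeration where a real tool would query an SMT solver.

```python
# A hedged sketch, assuming a Lustre-style counter node
#   node count() returns (x: int); let x = 0 -> pre x + 1; tel
# encoded as Horn clauses over an initial and a step relation:
#   P(x)  :- x = 0.             (initial states)
#   P(x') :- P(x), x' = x + 1.  (transition)
#   false :- P(x), x < 0.       (safety property)
# A modular invariant for the node is any P closed under these clauses.

def candidate(x):
    # Candidate modular invariant P(x) := x >= 0 (hypothetical choice).
    return x >= 0

def inductive(P, bound=1000):
    # Check the initial clause and the step clause by bounded enumeration;
    # a real tool would discharge these obligations with an SMT solver.
    return P(0) and all(P(x + 1) for x in range(-bound, bound) if P(x))

assert inductive(candidate)
print("P(x) := x >= 0 is a modular invariant of the counter node")
```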
A Continuous-Discontinuous Second-Order Transition in the Satisfiability of Random Horn-SAT Formulas
We compute the probability of satisfiability of a class of random Horn-SAT
formulae, motivated by a connection with the nonemptiness problem of finite
tree automata. In particular, when the maximum clause length is 3, this model
displays a curve in its parameter space along which the probability of
satisfiability is discontinuous, ending in a second-order phase transition
where it becomes continuous. This is the first case in which a phase transition
of this type has been rigorously established for a random constraint
satisfaction problem.
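The setting can be explored empirically: Horn-SAT is decidable in linear time by positive unit propagation, so the satisfiability probability can be estimated by sampling. The sketch below does this for an illustrative random model (positive unit, binary implication, and all-negative ternary clauses), which is not the paper's exact ensemble.

```python
import random

def horn_sat(clauses):
    # Decide a Horn formula by positive unit propagation. A clause is
    # (pos, negs): pos is the positive variable, or None for an
    # all-negative clause; negs is a list of negated variables.
    true_vars, changed = set(), True
    while changed:
        changed = False
        for pos, negs in clauses:
            if pos not in true_vars and all(v in true_vars for v in negs):
                if pos is None:
                    return False           # all-negative clause falsified
                true_vars.add(pos)         # positive literal is forced true
                changed = True
    return True                            # fixpoint: minimal model found

def estimate_p_sat(n, units, imps, negs, trials=200):
    # Illustrative random Horn model (assumed here, not the paper's):
    # positive unit clauses, binary implications, all-negative 3-clauses.
    sat = 0
    for _ in range(trials):
        cls = [(random.randrange(n), []) for _ in range(units)]
        cls += [(random.randrange(n), [random.randrange(n)])
                for _ in range(imps)]
        cls += [(None, random.sample(range(n), 3)) for _ in range(negs)]
        sat += horn_sat(cls)
    return sat / trials

print(estimate_p_sat(n=100, units=5, imps=120, negs=40))
```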
Limits of Preprocessing
We present a first theoretical analysis of the power of polynomial-time
preprocessing for important combinatorial problems from various areas in AI. We
consider problems from Constraint Satisfaction, Global Constraints,
Satisfiability, Nonmonotonic and Bayesian Reasoning. We show that, subject to a
complexity theoretic assumption, none of the considered problems can be reduced
by polynomial-time preprocessing to a problem kernel whose size is polynomial
in a structural problem parameter of the input, such as induced width or
backdoor size. Our results provide a firm theoretical boundary for the
performance of polynomial-time preprocessing algorithms for the considered
problems.
Comment: This is a slightly longer version of a paper that appeared in the proceedings of AAAI 201
Learning programs by learning from failures
We describe an inductive logic programming (ILP) approach called learning
from failures. In this approach, an ILP system (the learner) decomposes the
learning problem into three separate stages: generate, test, and constrain. In
the generate stage, the learner generates a hypothesis (a logic program) that
satisfies a set of hypothesis constraints (constraints on the syntactic form of
hypotheses). In the test stage, the learner tests the hypothesis against
training examples. A hypothesis fails when it does not entail all the positive
examples or entails a negative example. If a hypothesis fails, then, in the
constrain stage, the learner learns constraints from the failed hypothesis to
prune the hypothesis space, i.e. to constrain subsequent hypothesis generation.
For instance, if a hypothesis is too general (entails a negative example), the
constraints prune generalisations of the hypothesis. If a hypothesis is too
specific (does not entail all the positive examples), the constraints prune
specialisations of the hypothesis. This loop repeats until either (i) the
learner finds a hypothesis that entails all the positive and none of the
negative examples, or (ii) there are no more hypotheses to test. We introduce
Popper, an ILP system that implements this approach by combining answer set
programming and Prolog. Popper supports infinite problem domains, reasoning
about lists and numbers, learning textually minimal programs, and learning
recursive programs. Our experimental results on three domains (toy game
problems, robot strategies, and list transformations) show that (i) constraints
drastically improve learning performance, and (ii) Popper can outperform
existing ILP systems, both in terms of predictive accuracies and learning
times.
Comment: Accepted for the machine learning journal
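The generate, test, and constrain loop described above can be summarised in a short control-flow skeleton. In the sketch below, the helpers generate, entails, generalisations, and specialisations are hypothetical placeholders (in Popper the corresponding work is done by answer set programming and Prolog components); this is not the actual system.

```python
def learn(pos, neg, generate, entails, generalisations, specialisations):
    # pos/neg: positive and negative examples. The four function arguments
    # are hypothetical placeholders for a hypothesis generator, an
    # entailment test, and the two constraint builders.
    constraints = set()
    while True:
        h = generate(constraints)              # generate stage
        if h is None:
            return None                        # hypothesis space exhausted
        too_specific = not all(entails(h, e) for e in pos)   # test stage
        too_general = any(entails(h, e) for e in neg)
        if not too_specific and not too_general:
            return h                           # solution found
        if too_general:                        # constrain stage: prune all
            constraints |= generalisations(h)  # generalisations of h
        if too_specific:                       # ... and all specialisations
            constraints |= specialisations(h)
```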
Guarantees and Limits of Preprocessing in Constraint Satisfaction and Reasoning
We present a first theoretical analysis of the power of polynomial-time
preprocessing for important combinatorial problems from various areas in AI. We
consider problems from Constraint Satisfaction, Global Constraints,
Satisfiability, Nonmonotonic and Bayesian Reasoning under structural
restrictions. All these problems involve two tasks: (i) identifying the
structure in the input as required by the restriction, and (ii) using the
identified structure to solve the reasoning task efficiently. We show that for
most of the considered problems, task (i) admits a polynomial-time
preprocessing to a problem kernel whose size is polynomial in a structural
problem parameter of the input, in contrast to task (ii) which does not admit
such a reduction to a problem kernel of polynomial size, subject to a
complexity theoretic assumption. As a notable exception we show that the
consistency problem for the AtMost-NValue constraint admits a polynomial kernel
consisting of a quadratic number of variables and domain values. Our results
provide firm worst-case guarantees and theoretical boundaries for the
performance of polynomial-time preprocessing algorithms for the considered
problems.
Comment: arXiv admin note: substantial text overlap with arXiv:1104.2541, arXiv:1104.556
Abstract Canonical Inference
An abstract framework of canonical inference is used to explore how different
proof orderings induce different variants of saturation and completeness.
Notions like completion, paramodulation, saturation, redundancy elimination,
and rewrite-system reduction are connected to proof orderings. Fairness of
deductive mechanisms is defined in terms of proof orderings, distinguishing
between (ordinary) "fairness," which yields completeness, and "uniform
fairness," which yields saturation.Comment: 28 pages, no figures, to appear in ACM Trans. on Computational Logi