Convex polyhedral abstractions, specialisation and property-based predicate splitting in Horn clause verification
We present an approach to constrained Horn clause (CHC) verification
combining three techniques: abstract interpretation over a domain of convex
polyhedra, specialisation of the constraints in CHCs using abstract
interpretation of query-answer transformed clauses, and refinement by splitting
predicates. The purpose of the work is to investigate how analysis and
transformation tools developed for constraint logic programs (CLP) can be
applied to the Horn clause verification problem. Abstract interpretation over
convex polyhedra is capable of deriving sophisticated invariants and, when used
in conjunction with specialisation for propagating constraints, can frequently
solve challenging verification problems. This is a contribution in
itself, but refinement is needed when it fails, and the question of how to
refine convex polyhedral analyses has not been studied much. We present a
refinement technique based on interpolants derived from a counterexample trace;
these are used to drive a property-based specialisation that splits predicates,
leading in turn to more precise convex polyhedral analyses. The process of
specialisation, analysis and splitting can be repeated, in a manner similar to
the CEGAR and iterative specialisation approaches.
Comment: In Proceedings HCVS 2014, arXiv:1412.082
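As a rough illustration of how the three techniques might be composed into the
specialise, analyse and split loop described above, the following Python sketch
shows one possible shape of the iteration. All helper functions
(query_answer_transform, specialise, analyse_polyhedra, is_feasible,
interpolants_from_trace, split_predicates) are hypothetical placeholders
supplied by the caller; this is not the authors' tool.

# Schematic sketch of the specialise / analyse / split loop from the abstract.
# Every helper is caller-supplied; nothing here is the paper's implementation.

def verify(chcs, query_answer_transform, specialise, analyse_polyhedra,
           is_feasible, interpolants_from_trace, split_predicates,
           max_iterations=10):
    for _ in range(max_iterations):
        # 1. Specialise the constraints in the CHCs using abstract
        #    interpretation of the query-answer transformed clauses.
        chcs = specialise(query_answer_transform(chcs))

        # 2. Abstract interpretation over convex polyhedra: either the
        #    derived invariants prove safety, or we get a counterexample trace.
        invariants, trace = analyse_polyhedra(chcs)
        if trace is None:
            return "safe", invariants
        if is_feasible(trace):
            return "unsafe", trace

        # 3. Refine: interpolants from the spurious trace drive a
        #    property-based specialisation that splits predicates.
        properties = interpolants_from_trace(trace)
        chcs = split_predicates(chcs, properties)
    return "unknown", None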
Learning programs by learning from failures
We describe an inductive logic programming (ILP) approach called learning
from failures. In this approach, an ILP system (the learner) decomposes the
learning problem into three separate stages: generate, test, and constrain. In
the generate stage, the learner generates a hypothesis (a logic program) that
satisfies a set of hypothesis constraints (constraints on the syntactic form of
hypotheses). In the test stage, the learner tests the hypothesis against
training examples. A hypothesis fails when it does not entail all the positive
examples or entails a negative example. If a hypothesis fails, then, in the
constrain stage, the learner learns constraints from the failed hypothesis to
prune the hypothesis space, i.e. to constrain subsequent hypothesis generation.
For instance, if a hypothesis is too general (entails a negative example), the
constraints prune generalisations of the hypothesis. If a hypothesis is too
specific (does not entail all the positive examples), the constraints prune
specialisations of the hypothesis. This loop repeats until either (i) the
learner finds a hypothesis that entails all the positive and none of the
negative examples, or (ii) there are no more hypotheses to test. We introduce
Popper, an ILP system that implements this approach by combining answer set
programming and Prolog. Popper supports infinite problem domains, reasoning
about lists and numbers, learning textually minimal programs, and learning
recursive programs. Our experimental results on three domains (toy game
problems, robot strategies, and list transformations) show that (i) constraints
drastically improve learning performance, and (ii) Popper can outperform
existing ILP systems, both in terms of predictive accuracies and learning
times.
Comment: Accepted for the machine learning journal
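The generate, test and constrain loop lends itself to a compact sketch. The
Python rendering below is purely illustrative; the functions passed in
(generate, entails, generalisation_constraint, specialisation_constraint) are
hypothetical stand-ins for Popper's answer set programming and Prolog
components.

# Illustrative learning-from-failures loop; the helpers are hypothetical
# stand-ins, not Popper's actual ASP/Prolog implementation.

def learn(pos, neg, generate, entails,
          generalisation_constraint, specialisation_constraint):
    constraints = set()
    while True:
        # Generate: a hypothesis satisfying the current constraints.
        hypothesis = generate(constraints)
        if hypothesis is None:
            return None  # no more hypotheses to test

        # Test: check the hypothesis against the examples.
        covers_all_pos = all(entails(hypothesis, e) for e in pos)
        covers_no_neg = not any(entails(hypothesis, e) for e in neg)
        if covers_all_pos and covers_no_neg:
            return hypothesis  # entails all positives and no negatives

        # Constrain: prune the hypothesis space using the failure.
        if not covers_no_neg:
            # Too general: prune generalisations of the hypothesis.
            constraints.add(generalisation_constraint(hypothesis))
        if not covers_all_pos:
            # Too specific: prune specialisations of the hypothesis.
            constraints.add(specialisation_constraint(hypothesis))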
An iterative approach to precondition inference using constrained Horn clauses
We present a method for automatic inference of conditions on the initial
states of a program that guarantee that the safety assertions in the program
are not violated. Constrained Horn clauses (CHCs) are used to model the program
and assertions in a uniform way, and we use standard abstract interpretations
to derive an over-approximation of the set of unsafe initial states. The
precondition then is the constraint corresponding to the complement of that
set, under-approximating the set of safe initial states. This idea of
complementation is not new, but previous attempts to exploit it have suffered
from a loss of precision. Here we develop an iterative specialisation
algorithm to give more precise, and in some cases optimal, safety conditions.
The algorithm combines existing transformations, namely constraint
specialisation, partial evaluation and a trace elimination transformation. The
last two of these transformations perform polyvariant specialisation, leading
to disjunctive constraints which improve precision. The algorithm is
implemented and tested on a benchmark suite of programs from the literature in
precondition inference and software verification competitions.
Comment: Paper presented at the 34th International Conference on Logic
Programming (ICLP 2018), Oxford, UK, July 14 to July 17, 2018. 18 pages, LaTeX
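The core complementation step can be shown on a toy constraint. The snippet
below uses z3 only as a convenient constraint library; the program and its
over-approximation are invented for illustration and are not taken from the
paper.

# Toy illustration of precondition-by-complementation, using z3 purely for
# constraint manipulation; the example constraint is made up.
from z3 import Int, Or, Not, simplify

x = Int('x')

# Suppose abstract interpretation over-approximates the unsafe initial
# states of some program by:  x < 0  or  x > 10.
unsafe_over_approx = Or(x < 0, x > 10)

# The precondition is the complement of that over-approximation: an
# under-approximation of the safe initial states.
precondition = simplify(Not(unsafe_over_approx))
print(precondition)  # a formula equivalent to 0 <= x <= 10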
Precondition Inference via Partitioning of Initial States
Precondition inference is a non-trivial task with several applications in
program analysis and verification. We present a novel iterative method for
automatically deriving sufficient preconditions for safety and unsafety of
programs, which introduces a new dimension of modularity. Each iteration
maintains over-approximations of the sets of safe and unsafe initial states. We
then repeatedly use the current abstractions to partition the program's initial
states into those known to be safe, those known to be unsafe, and those still
unknown, and construct a revised program focusing on
those initial states that are not yet known to be safe or unsafe. An
experimental evaluation of the method on a set of software verification
benchmarks shows that it can solve problems which are not solvable using
previous methods.
Comment: 19 pages, 8 figures
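A schematic rendering of this partition-and-refine iteration is sketched below.
All helpers (analyse_safe, analyse_unsafe, conjoin, disjoin, negate,
is_unsatisfiable, restrict_initial_states) are hypothetical placeholders
supplied by the caller, not the implementation evaluated in the paper.

# Schematic partition-and-refine loop over initial states; the helpers are
# caller-supplied placeholders.

def infer_preconditions(program, analyse_safe, analyse_unsafe,
                        conjoin, disjoin, negate, is_unsatisfiable,
                        restrict_initial_states, max_iterations=10):
    safe_over = True      # 'true' constraint: over-approximates the safe initial states
    unsafe_over = True    # 'true' constraint: over-approximates the unsafe initial states
    for _ in range(max_iterations):
        # States outside the unsafe over-approximation are known safe,
        # and vice versa; states inside both are still unknown.
        known_safe = negate(unsafe_over)
        known_unsafe = negate(safe_over)
        unknown = conjoin(safe_over, unsafe_over)
        if is_unsatisfiable(unknown):
            return known_safe, known_unsafe   # complete partition found

        # Construct a revised program focusing on the unknown initial
        # states, and refine both over-approximations from its analysis.
        focused = restrict_initial_states(program, unknown)
        safe_over = disjoin(known_safe, analyse_safe(focused))
        unsafe_over = disjoin(known_unsafe, analyse_unsafe(focused))
    return negate(unsafe_over), negate(safe_over)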
Learning to Understand by Evolving Theories
In this paper, we describe an approach that enables an autonomous system to
infer the semantics of a command (i.e. a symbol sequence representing an
action) in terms of the relations between changes in the observations and the
action instances. We present a method for inducing a theory (i.e. a semantic
description) of the meaning of a command in terms of a minimal set of
background knowledge. The only input is a sequence of observations, from which
we extract the kinds of effects caused by performing the command. In this way,
we obtain a description of the semantics of the action and, hence, a definition.
Comment: KRR Workshop at ICLP 201
Constructive approaches to Program Induction
Search is a key technique in artificial intelligence, machine learning and Program Induction. No
matter how efficient a search procedure may be, there exist spaces that are too large to search
effectively, and the space of programs is among them. In this dissertation we show that in the context
of logic-program induction (Inductive Logic Programming, or ILP) it is not necessary to search
for a correct program, because if one exists, there also exists a unique object that is the most
general correct program, and that can be constructed directly, without a search, in polynomial
time and from a polynomial number of examples. The existence of this unique object, which we
term the Top Program because of its maximal generality, does not so much solve the problem
of searching a large program search space, as it completely sidesteps it, thus improving the
efficiency of the learning task by orders of magnitude commensurate with the complexity of a
program space search.
The existence of a unique Top Program and the ability to construct it with finite resources
rely on imposing, on the language of hypotheses from which programs are constructed, a
strong inductive bias relevant to the learning task. In common practice in machine
learning, Program Induction and ILP, such relevant inductive bias is selected or created
manually by the human user of a learning system, using intuition or knowledge of the problem
domain, and in the form of various kinds of program templates. In this dissertation we show
that by abandoning the reliance on such extra-logical devices as program templates, and instead
defining inductive bias exclusively as First- and Higher-Order Logic formulae, it is possible to
learn inductive bias itself from examples, automatically and efficiently, by Higher-Order Top
Program construction.
In Chapter 4 we describe the Top Program in the context of the Meta-Interpretive Learning
approach to ILP (MIL) and present an algorithm for its construction, the Top Program
Construction algorithm (TPC). We prove the efficiency and accuracy of TPC and describe
its implementation in a new MIL system called Louise. We support theoretical results with
experiments comparing Louise to the state-of-the-art, search-based MIL system, Metagol, and
find that Louise improves on Metagol’s efficiency and accuracy. In Chapter 5 we re-frame MIL as
specialisation of metarules, Second-Order clauses used as inductive bias in MIL, and prove that
problem-specific metarules can be derived, by MIL itself, as specialisations of maximally general
metarules. We describe a sub-system of Louise, called TOIL, that learns new metarules by MIL and
demonstrate empirically that the metarules learned by TOIL match those selected manually,
while maintaining the accuracy and efficiency of learning.
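As a very rough sketch of the idea behind constructing a maximally general
program, one can think of keeping every candidate clause that, together with
the background knowledge, entails some positive example and no negative
example. The sketch below is not the TPC algorithm as specified in Chapter 4
and not Louise's implementation; candidate_clauses and entails are hypothetical
placeholders.

# Very rough sketch of constructing a most-general correct program by
# filtering candidate clauses against the examples; not the TPC algorithm
# as specified in the dissertation.

def top_program(candidate_clauses, background, pos, neg, entails):
    # Generalise: keep clauses that, with the background knowledge,
    # entail at least one positive example.
    general = [c for c in candidate_clauses
               if any(entails(background + [c], e) for e in pos)]

    # Specialise: discard clauses that entail any negative example.
    return [c for c in general
            if not any(entails(background + [c], e) for e in neg)]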