Specialising finite domain programs with polyhedra
A procedure is described for tightening the domain constraints of finite domain logic programs by applying a static analysis based on convex polyhedra. Individual finite domain constraints are over-approximated by polyhedra so that the solution space over n integer variables is described as an n-dimensional polyhedron. This polyhedron is then approximated, using projection, by an n-dimensional bounding box that can be used to specialise and improve the domain constraints. The analysis can be implemented straightforwardly, and an empirical evaluation of the specialisation technique is given.
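As a rough illustration of the idea (not the paper's implementation), the sketch below over-approximates a conjunction of linear constraints by the polyhedron Ax <= b and projects it onto each axis with two LP solves per variable, yielding the bounding box that tightens the integer domains. The example constraints and the use of scipy's linprog are illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

def bounding_box(A, b, n):
    """Project the polyhedron {x | Ax <= b} onto each of the n axes,
    rounding inward since only integer points matter. Feasibility and
    unboundedness checks are omitted for brevity."""
    box = []
    free = [(None, None)] * n   # linprog otherwise defaults to x >= 0
    for i in range(n):
        c = np.zeros(n)
        c[i] = 1.0
        lo = linprog(c, A_ub=A, b_ub=b, bounds=free)    # minimise x_i
        hi = linprog(-c, A_ub=A, b_ub=b, bounds=free)   # maximise x_i
        box.append((int(np.ceil(lo.fun)), int(np.floor(-hi.fun))))
    return box

# Invented constraints: x + y <= 10, x - y <= 2, x >= 0, y >= 0
A = np.array([[1., 1.], [1., -1.], [-1., 0.], [0., -1.]])
b = np.array([10., 2., 0., 0.])
print(bounding_box(A, b, 2))   # [(0, 6), (0, 10)]
```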
A Semantic Basis for Specialising Domain Constraints
This paper formalises an analysis of finite domain programs and the resultant program transformation. The analysis adds low valency (domain) constraints to clauses in order to reduce search. The technique is outlined with a worked example and then formalised using abstract interpretation. Correctness of the analysis and of the transformation is proved.
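To make the motivation concrete, here is a toy rendition (with invented constraints, not taken from the paper) of why added domain constraints reduce search: a naive enumerator visits far fewer nodes once the bounds x in 7..10 and y in 0..3, which follow from x + y = 10 and x - y >= 3, are imposed.

```python
def solve(xdom, ydom):
    """Naive enumeration for the invented CSP x + y = 10, x - y >= 3,
    counting the search nodes visited."""
    nodes, sols = 0, []
    for x in xdom:
        nodes += 1
        for y in ydom:
            nodes += 1
            if x + y == 10 and x - y >= 3:
                sols.append((x, y))
    return nodes, sols

# Original domains versus domains tightened by the derived (low
# valency) constraints x in 7..10 and y in 0..3:
print(solve(range(0, 11), range(0, 11)))  # 132 nodes, 4 solutions
print(solve(range(7, 11), range(0, 4)))   # 20 nodes, the same solutions
```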
Experiments with a Convex Polyhedral Analysis Tool for Logic Programs
Convex polyhedral abstractions of logic programs have been found very useful in deriving numeric relationships between program arguments in order to prove program properties, and in other areas such as termination and complexity analysis. We present a tool for constructing polyhedral analyses of (constraint) logic programs. The aim of the tool is to make available, with a convenient interface, state-of-the-art techniques for polyhedral analysis such as delayed widening, narrowing, "widening up-to", and enhanced automatic selection of widening points. The tool is accessible on the web, permits user programs to be uploaded and analysed, and is integrated with related program transformations such as size abstractions and query-answer transformation. We then report some experiments using the tool, showing how it can be conveniently used to analyse transition systems arising from models of embedded systems, and an emulator for a PIC microcontroller which is used, for example, in wearable computing systems. We discuss issues including scalability, trade-offs between precision and computation time, and other program transformations that can enhance the results of analysis.
Comment: Paper presented at the 17th Workshop on Logic-based Methods in Programming Environments (WLPE 2007).
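The sketch below shows, on the simpler interval domain rather than polyhedra, the shape of the widening machinery the abstract mentions; the function names, the DELAY constant and the example loop are illustrative and are not the tool's interface.

```python
INF = float('inf')
DELAY = 3   # delayed widening: join for the first few iterations

def join(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):
    lo = a[0] if b[0] >= a[0] else -INF   # unstable bounds jump to infinity
    hi = a[1] if b[1] <= a[1] else INF
    return (lo, hi)

def narrow(a, b):
    lo = b[0] if a[0] == -INF else a[0]   # refine only widened bounds
    hi = b[1] if a[1] == INF else a[1]
    return (lo, hi)

def step(x):
    """Abstract transfer for: i = 0; while i < 100: i += 1 (loop head)."""
    lo, hi = join((0, 0), (x[0] + 1, x[1] + 1))
    return (lo, min(hi, 100))

x = (0, 0)
for n in range(1000):
    nxt = step(x)
    x = join(x, nxt) if n < DELAY else widen(x, nxt)
    if x == widen(x, step(x)):   # post-fixpoint reached
        break
x = narrow(x, step(x))           # one narrowing pass recovers the bound
print(x)                         # (0, 100)
```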
An iterative approach to precondition inference using constrained Horn clauses
We present a method for automatic inference of conditions on the initial states of a program that guarantee that the safety assertions in the program are not violated. Constrained Horn clauses (CHCs) are used to model the program and assertions in a uniform way, and we use standard abstract interpretations to derive an over-approximation of the set of unsafe initial states. The precondition is then the constraint corresponding to the complement of that set, under-approximating the set of safe initial states. This idea of complementation is not new, but previous attempts to exploit it have suffered from a loss of precision. Here we develop an iterative specialisation algorithm to give more precise, and in some cases optimal, safety conditions. The algorithm combines existing transformations, namely constraint specialisation, partial evaluation and a trace elimination transformation. The last two of these transformations perform polyvariant specialisation, leading to disjunctive constraints which improve precision. The algorithm is implemented and tested on a benchmark suite of programs from the literature on precondition inference and from software verification competitions.
Comment: Paper presented at the 34th International Conference on Logic Programming (ICLP 2018), Oxford, UK, July 14 to July 17, 2018. 18 pages, LaTeX.
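A minimal sketch of the complementation step, on an invented example program and using Z3 only to manipulate the constraints (the paper's pipeline works on CHC encodings):

```python
from z3 import Int, And, Not, Solver, sat

x0 = Int('x0')

# Invented program: x = x0; while x < 10: x += 2; assert x != 7.
# The exact unsafe initial states are {x0 | x0 <= 5 and x0 odd}; a
# convex abstract domain can only report the over-approximation below.
unsafe_over = x0 <= 5

# Complementing the over-approximation of unsafety yields a sufficient
# precondition for safety (an under-approximation of the safe states).
precondition = Not(unsafe_over)                # x0 >= 6

def admitted(query):
    s = Solver()
    s.add(query)
    return s.check() == sat

print(admitted(And(precondition, x0 == 10)))   # True: 10 is proved safe
print(admitted(And(precondition, x0 == 4)))    # False: 4 is in fact safe
# (it is even) but is lost to convexity; the paper's iterative,
# polyvariant specialisation refines such preconditions towards
# disjunctive, and sometimes optimal, conditions.
```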
Precondition Inference via Partitioning of Initial States
Precondition inference is a non-trivial task with several applications in program analysis and verification. We present a novel iterative method for automatically deriving sufficient preconditions for the safety and unsafety of programs, which introduces a new dimension of modularity. Each iteration maintains over-approximations of the sets of safe and unsafe initial states. We then repeatedly use the current abstractions to partition the program's initial states into those known to be safe, those known to be unsafe and those whose status is unknown, and construct a revised program focusing on the initial states that are not yet known to be safe or unsafe. An experimental evaluation of the method on a set of software verification benchmarks shows that it can solve problems which are not solvable using previous methods.
Comment: 19 pages, 8 figures.
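A toy, self-contained rendition of the iteration (everything here is invented for illustration: an interval hull over a small finite state space stands in for the abstract interpretation, and the truly unsafe initial states are fixed up front):

```python
TRULY_UNSAFE = {3, 4, 5, 11}          # invented ground truth
ALL_INIT = set(range(16))

def hull(states):
    """Interval over-approximation of a finite set of integer states."""
    return set(range(min(states), max(states) + 1)) if states else set()

safe, unsafe = set(), set()
unknown = set(ALL_INIT)
while unknown:
    over_unsafe = hull(unknown & TRULY_UNSAFE)   # over-approx. of unsafe
    over_safe = hull(unknown - TRULY_UNSAFE)     # over-approx. of safe
    safe |= unknown - over_unsafe    # provably outside every unsafe trace
    unsafe |= unknown - over_safe    # provably outside every safe trace
    remaining = unknown & over_safe & over_unsafe
    if remaining == unknown:
        break                        # no progress: give up on the rest
    unknown = remaining              # next round focuses on these states
print(sorted(safe))     # [0, 1, 2, 6, 7, 8, 9, 10, 12, 13, 14, 15]
print(sorted(unsafe))   # [3, 4, 5, 11]
print(sorted(unknown))  # []
```

Each round re-analyses only the still-unknown initial states, which is how the coarse interval hulls eventually separate the two sets.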
Implementing Groundness Analysis with Definite Boolean Functions
The domain of definite Boolean functions, Def, can be used to express the groundness of, and trace grounding dependencies between, program variables in (constraint) logic programs. In this paper, previously unexploited computational properties of Def are utilised to develop an efficient and succinct groundness analyser that can be coded in Prolog. In particular, entailment checking is used to prevent unnecessary least upper bound calculations. It is also demonstrated that the join can be defined in terms of other operations, thereby eliminating code and removing the need for preprocessing formulae into a normal form, saving both space and time. Furthermore, the join can be adapted to straightforwardly implement the downward closure operator that arises in set-sharing analyses. Experimental results indicate that the new Def implementation gives favourable results in comparison with BDD-based groundness analyses.
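For readers unfamiliar with Def, the sketch below (illustrative Python, not the paper's Prolog analyser) represents a definite Boolean function by its set of models, each model being the set of variables known to be ground. Such model sets are exactly the positive ones closed under intersection, which makes meet a set intersection, entailment a subset test (the check that lets an analyser skip redundant least upper bound computations), and join the intersection-closure of the union, with no normal form needed.

```python
from itertools import combinations

VARS = frozenset('xyz')

def models(*clauses):
    """Models over VARS of a conjunction of clauses (head, body): the
    head variable is ground whenever all body variables are ground."""
    ms = set()
    for k in range(len(VARS) + 1):
        for m in map(frozenset, combinations(sorted(VARS), k)):
            if all(h in m or not b <= m for h, b in clauses):
                ms.add(m)
    return frozenset(ms)

def entails(f, g):
    return f <= g          # f |= g iff every model of f is a model of g

def meet(f, g):
    return f & g           # models of the conjunction

def join(f, g):
    """Least Def function above both: close the union of the model
    sets under pairwise intersection."""
    ms = set(f | g)
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(ms), 2):
            if a & b not in ms:
                ms.add(a & b)
                changed = True
    return frozenset(ms)

# The join of (x <- y) and (x <- z) keeps the common dependency
# (x <- y, z) but, as expected of a lub, neither original clause.
f = models(('x', frozenset('y')))
g = models(('x', frozenset('z')))
h = join(f, g)
print(entails(h, models(('x', frozenset('yz')))))   # True
print(entails(h, f), entails(h, g))                 # False False
```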