29 research outputs found

    An iterative approach to precondition inference using constrained Horn clauses

    We present a method for automatic inference of conditions on the initial states of a program that guarantee that the safety assertions in the program are not violated. Constrained Horn clauses (CHCs) are used to model the program and assertions in a uniform way, and we use standard abstract interpretations to derive an over-approximation of the set of unsafe initial states. The precondition is then the constraint corresponding to the complement of that set, under-approximating the set of safe initial states. This idea of complementation is not new, but previous attempts to exploit it have suffered from a loss of precision. Here we develop an iterative specialisation algorithm that gives more precise, and in some cases optimal, safety conditions. The algorithm combines existing transformations, namely constraint specialisation, partial evaluation and a trace elimination transformation. The last two of these transformations perform polyvariant specialisation, leading to disjunctive constraints which improve precision. The algorithm is implemented and tested on a benchmark suite of programs from the literature on precondition inference and software verification competitions.
    Comment: Paper presented at the 34th International Conference on Logic Programming (ICLP 2018), Oxford, UK, July 14 to July 17, 2018. 18 pages, LaTeX.
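    To make the complementation idea concrete, here is a minimal Python sketch on a toy program, using integer intervals as the abstract domain. The program, the schematic CHC encoding in the comments and all names are illustrative inventions, not the paper's actual machinery.

```python
# Toy program:   x = n; while x < 100: x = x + 1; assert x == 100
# As CHCs (schematically):
#   p(X)  :- X = N.                      % entry with initial value N
#   p(X1) :- p(X), X < 100, X1 = X + 1.  % loop body
#   false :- p(X), X >= 100, X != 100.   % assertion violation
# The sketch below hard-codes the backward interval reasoning for this program.

INF = float("inf")

def unsafe_initial_states():
    """Over-approximate the initial values n from which 'false' is derivable.
    The final state is x = max(n, 100), so the assertion fails iff n >= 101;
    for this program the interval over-approximation [101, +oo) is exact."""
    return (101, INF)

def precondition():
    """Safe precondition = complement of the unsafe over-approximation.
    Complementing an over-approximation of the unsafe states always yields
    an under-approximation of the safe states, i.e. a sound precondition."""
    lo, _ = unsafe_initial_states()
    return (-INF, lo - 1)

print(precondition())   # (-inf, 100]: every run started with n <= 100 is safe
```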

    Closure Operators for ROBDDs


    Factorizing Equivalent Variable Pairs in ROBDD-Based Implementations of Pos

    The subject of groundness analysis for (constraint) logic programs has been widely studied, and interesting domains have been proposed. Pos has been recognized as the most suitable domain for capturing the kind of dependencies arising in groundness analysis, and Reduced Ordered Binary Decision Diagrams (ROBDDs) are generally accepted to be the most efficient representation for Pos. Unfortunately, the size of an ROBDD is, in the worst case, exponential in the number of variables it depends upon. Earlier work has shown that a hybrid representation that separates the definite information from the dependency information is considerably more efficient than keeping the two together. The aim of the present paper is to push this idea further, also separating out certain dependency information, in particular all pairs of variables that are always either both ground or neither ground. We find that this new hybrid representation is a significant improvement over previous work.
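    The factorisation the abstract describes can be illustrated without any ROBDD machinery at all. The following Python sketch, with invented names, represents a (positive) Boolean function by its set of models and extracts the two kinds of cheap information being separated out: definitely-true variables and equivalent pairs.

```python
# Illustrative sketch (not an ROBDD implementation): given a positive Boolean
# function over groundness variables, factor out the definitely-ground
# variables and the pairs that are both ground or neither in every model.
from itertools import product

def models(f, nvars):
    """All assignments (tuples of 0/1) satisfying f."""
    return [m for m in product((0, 1), repeat=nvars) if f(*m)]

def factorize(f, nvars):
    ms = models(f, nvars)
    definite = {i for i in range(nvars) if all(m[i] for m in ms)}
    pairs = {(i, j) for i in range(nvars) for j in range(i + 1, nvars)
             if all(m[i] == m[j] for m in ms)}
    return definite, pairs

# x0 & (x1 <-> x2): x0 is definitely ground; x1, x2 form an equivalent pair.
f = lambda x0, x1, x2: x0 and (x1 == x2)
print(factorize(f, 3))   # ({0}, {(1, 2)})
```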

    Un-Kleene Boolean equation solving

    We present a new method for finding closed forms of recursive Boolean function definitions. Traditionally, these closed forms are found by Kleene iteration: iterative approximation until a fixed point is reached. Conceptually, our new method replaces each k-ary function by 2^k Boolean constants defined by mutual recursion. The introduction of an exponential number of constants is mitigated by the simplicity of their definitions and by the use of a novel variant of ROBDDs to avoid repeated computation. Experiments suggest that this approach is significantly faster than Kleene iteration for examples that require many Kleene iteration steps.
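    For intuition, here is a minimal Python sketch of the Kleene-iteration baseline the paper compares against (not the paper's new method): the least fixed point of a recursive Boolean definition is computed by iterating from the constant-false function over an explicit truth table, whose 2^k entries play the role of the 2^k constants mentioned above.

```python
# Kleene iteration for a recursive Boolean definition, with the function
# represented as a truth table (one Boolean per input tuple, 2^k in total).
from itertools import product

ARGS = list(product((False, True), repeat=2))   # all (x, y) input pairs

def lfp(step):
    """Iterate step from bottom (the all-False table) until it stabilizes."""
    table = {a: False for a in ARGS}
    while True:
        new = {a: step(table, *a) for a in ARGS}
        if new == table:
            return table
        table = new

# Example definition: f(x, y) = (x and y) or f(y, x)
step = lambda f, x, y: (x and y) or f[(y, x)]
print(lfp(step))   # stabilizes at the closed form f(x, y) = x and y
```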

    Dauphin: A new statistical signal processing language

    Many software packages support scientific research by means of numerical calculations and specialised library calls, but very few support specific application domains such as signal processing at the symbolic level or at the stage of problem formulation. Translating the natural domain-specific structure of a problem description into a computer formulation is often a time-consuming and error-prone exercise. As signal processing becomes more sophisticated, there is a need to codify its basic tools, allowing researchers to spend more time on the challenges specific to a particular application. In this paper, we describe the design of Dauphin, a domain-specific programming language. Dauphin ultimately aims to extend the power of signal processing researchers by allowing them to focus on their research problems while simplifying the process of implementing their ideas. In Dauphin, the basic algorithms of signal processing become standard function calls and are expressed naturally in terms of predefined signal processing primitives such as random variables and probability distributions.
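    Dauphin's actual syntax is not shown in the abstract, so the following Python sketch only illustrates the general idea of treating distributions as predefined primitives at the problem-formulation level; every name in it is invented, and none of it is Dauphin code.

```python
# Hypothetical illustration: a noisy-observation model written against
# distribution primitives instead of hand-rolled numeric loops.
import random

class Gaussian:
    """A distribution primitive: parameters held symbolically, sampled on demand."""
    def __init__(self, mean, var):
        self.mean, self.var = mean, var
    def sample(self):
        return random.gauss(self.mean, self.var ** 0.5)

def observe(signal, noise, n):
    """Formulate 'n noisy observations of a signal' directly in model terms."""
    return [signal + noise.sample() for _ in range(n)]

print(observe(1.0, Gaussian(0.0, 0.25), 5))   # five noisy readings of 1.0
```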

    Optimal Bounds for Floating-Point Addition in Constant Time

    Reasoning about floating-point numbers is notoriously difficult, owing to the lack of convenient algebraic properties such as associativity. This poses a substantial challenge for program analysis and verification tools that rely on precise floating-point constraint solving. Currently, interval methods in this domain often exhibit slow convergence even on simple examples. We present a new theorem supporting efficient computation of exact bounds of the intersection of a rectangle with the preimage of an interval under floating-point addition, in any radix or rounding mode. We thus give an efficient method of deducing optimal bounds on the components of an addition, solving the convergence problem.
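    To see where the slow convergence comes from, here is a Python sketch (all names invented, IEEE doubles with round-to-nearest only, not the paper's constant-time theorem) of a baseline propagator for the constraint fl(x + y) in [z_lo, z_hi]: it shaves interval endpoints one ULP at a time, so tightening across a wide interval can take astronomically many steps.

```python
import math

def tighten_x(x_lo, x_hi, y_lo, y_hi, z_lo, z_hi):
    """Shrink [x_lo, x_hi] to values for which some y in [y_lo, y_hi] makes
    the rounded sum fl(x + y) land in [z_lo, z_hi]. Naive one-ULP endpoint
    shaving: sound, but it converges one float at a time."""
    def feasible(x):
        # fl(x + y) is monotone in y, so the achievable sums for this x form
        # the range [fl(x + y_lo), fl(x + y_hi)]; test intersection with z.
        return x + y_hi >= z_lo and x + y_lo <= z_hi
    while x_lo <= x_hi and not feasible(x_lo):
        x_lo = math.nextafter(x_lo, math.inf)    # shave one ULP upward
    while x_hi >= x_lo and not feasible(x_hi):
        x_hi = math.nextafter(x_hi, -math.inf)   # shave one ULP downward
    return (x_lo, x_hi) if x_lo <= x_hi else None

# One shaving step suffices here; on wide intervals this loop is hopeless.
print(tighten_x(math.nextafter(0.25, 0.0), 0.5, 0.0, 0.0, 0.25, 0.5))
```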