
    Combining Forward and Backward Abstract Interpretation of Horn Clauses

    Alternation of forward and backward analyses is a standard technique in abstract interpretation of programs, which is particularly useful when we wish to prove unreachability of undesired program states. The current state-of-the-art technique for combining forward (bottom-up, in logic programming terms) and backward (top-down) abstract interpretation of Horn clauses is query-answer transformation. It transforms a system of Horn clauses so that a standard forward analysis can propagate constraints both forward and backward from a goal. Query-answer transformation is effective, but it has issues that we wish to address. To that end, we introduce a new backward collecting semantics, which is suitable for alternating forward and backward abstract interpretation of Horn clauses. We show how the alternation can be used to prove unreachability of the goal and how every subsequent run of an analysis yields a refined model of the system. Experimentally, we observe that combining forward and backward analyses is important for analysing systems that encode questions about reachability in C programs. In particular, the combination that follows our new semantics improves the precision of our own abstract interpreter, including when compared to a forward analysis of a query-answer-transformed system. Comment: Francesco Ranzato. 24th International Static Analysis Symposium (SAS), Aug 2017, New York City, United States. Springer, Static Analysis.
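
    To make the query-answer transformation concrete, here is a minimal, illustrative sketch on a small reachability program; the program, the _q/_a naming and the goal are invented for illustration and are not taken from the paper.

        % Original Horn clauses (illustrative), with goal path(a, Z):
        %   path(X, Y) :- edge(X, Y).
        %   path(X, Z) :- edge(X, Y), path(Y, Z).
        %
        % Query-answer transformation: for each clause H :- B1, ..., Bn,
        % emit an "answer" clause for H and one "query" clause per Bi.

        % Answer clauses: H has an answer only if H was queried and all
        % body atoms have answers.
        path_a(X, Y) :- path_q(X, Y), edge_a(X, Y).
        path_a(X, Z) :- path_q(X, Z), edge_a(X, Y), path_a(Y, Z).
        edge_a(X, Y) :- edge_q(X, Y), edge(X, Y).

        % Query clauses: Bi is queried if H was queried and B1, ..., B(i-1)
        % already have answers.
        edge_q(X, Y) :- path_q(X, Y).
        edge_q(X, _Y) :- path_q(X, _Z).
        path_q(Y, Z) :- path_q(X, Z), edge_a(X, Y).

        % The goal seeds the query predicates.
        path_q(a, _).

    A standard forward (bottom-up) analysis of the transformed clauses then propagates the goal's constraints top-down through the _q predicates and the program's constraints bottom-up through the _a predicates.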

    Probabilistic Programming Concepts

    A multitude of different probabilistic programming languages exists today, all extending a traditional programming language with primitives to support modeling of complex, structured probability distributions. Each of these languages employs its own probabilistic primitives and comes with a particular syntax, semantics and inference procedure. This makes it hard to understand the underlying programming concepts and to appreciate the differences between the languages. To obtain a better understanding of probabilistic programming, we identify a number of core programming concepts underlying the primitives used by various probabilistic languages, discuss the execution mechanisms that they require, and use these to position state-of-the-art probabilistic languages and their implementations. While doing so, we focus on probabilistic extensions of logic programming languages such as Prolog, which have been under development for more than 20 years.
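
    As a concrete illustration of such primitives, the following small program is written in the style of ProbLog, one of the probabilistic Prolog extensions; the alarm/burglary example itself is a textbook-style illustration, not drawn from the paper.

        % Probabilistic facts: each holds independently with the given probability.
        0.1::burglary.
        0.2::earthquake.

        % Probabilistic rules: the alarm rings with probability 0.9 given a
        % burglary, and with probability 0.3 given an earthquake.
        0.9::alarm :- burglary.
        0.3::alarm :- earthquake.

        % Ordinary (deterministic) logic program clause.
        calls(john) :- alarm.

        % Inference tasks: marginal probability of a query atom,
        % possibly conditioned on evidence.
        query(calls(john)).
        evidence(earthquake, false).

    Languages differ in exactly these ingredients: which probabilistic primitives are allowed and which execution mechanism evaluates the queries.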

    Integrating bottom-up and top-down reasoning in COLAB

    The knowledge compilation laboratory COLAB integrates declarative knowledge representation formalisms, providing source-to-source and source-to-code compilers for various knowledge types. Its architecture separates taxonomical and assertional knowledge. The assertional component consists of a constraint system and a rule system, which supports bottom-up and top-down reasoning with Horn clauses. Two approaches to forward reasoning have been implemented. The first, set-oriented approach uses a fixpoint computation; it allows top-down verification of selected premises. Goal-directed bottom-up reasoning is achieved by a magic-set transformation of the rules with respect to a goal. The second, tuple-oriented approach reasons forward to derive the consequences of an explicitly given set of facts; this is achieved by a transformation of the rules into top-down executable Horn clauses. The paper gives an overview of the various forward reasoning approaches, their compilation into an abstract machine, and their integration into the COLAB shell.
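
    The following minimal Prolog sketch conveys the flavour of the second, tuple-oriented approach: forward derivation of consequences from an explicitly given set of facts, driven by the top-down engine. It is an invented simplification, not COLAB's actual rule compilation; the if/2 rule representation and derive/1 entry point are assumptions made for illustration.

        :- dynamic derived/1.

        % Forward rules represented as if(Body, Head); an invented example rule.
        if((p(X), q(X)), r(X)).

        % derive(+Facts): assert the given facts and saturate under the rules.
        derive(Facts) :-
            retractall(derived(_)),
            forall(member(F, Facts), assertz(derived(F))),
            saturate.

        % Repeatedly find a rule instance whose body holds but whose head is
        % not yet derived, and add the head; stop at the fixpoint.
        saturate :-
            if(Body, Head),
            holds(Body),
            \+ derived(Head),
            assertz(derived(Head)),
            !, saturate.
        saturate.

        holds((A, B)) :- !, holds(A), holds(B).
        holds(A) :- derived(A).

    For example, the query derive([p(a), q(a)]) leaves derived(r(a)) in the database.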

    COLAB : a hybrid knowledge representation and compilation laboratory

    Knowledge bases for real-world domains such as mechanical engineering require expressive and efficient representation and processing tools. We pursue a declarative-compilative approach to knowledge engineering. While Horn logic (as implemented in PROLOG) is well-suited for representing relational clauses, other kinds of declarative knowledge call for hybrid extensions: functional dependencies and higher-order knowledge should be modeled directly. Forward (bottom-up) reasoning should be integrated with backward (top-down) reasoning. Constraint propagation should be used wherever possible instead of search-intensive resolution. Taxonomic knowledge should be classified into an intuitive subsumption hierarchy. Our LISP-based tools provide direct translators of these declarative representations into abstract machines such as an extended Warren Abstract Machine (WAM) and specialized inference engines that are interfaced to each other. More importantly, we provide source-to-source transformers between various knowledge types, both for user convenience and machine efficiency. These formalisms with their translators and transformers have been developed as part of COLAB, a compilation laboratory for studying what we call, respectively, "vertical" and "horizontal" compilation of knowledge, as well as for exploring the synergetic collaboration of the knowledge representation formalisms. A case study in the realm of mechanical engineering has been an important driving force behind the development of COLAB. It will be used as the source of examples throughout the paper when discussing the enhanced formalisms, the hybrid representation architecture, and the compilers.

    An iterative approach to precondition inference using constrained Horn clauses

    We present a method for automatic inference of conditions on the initial states of a program that guarantee that the safety assertions in the program are not violated. Constrained Horn clauses (CHCs) are used to model the program and assertions in a uniform way, and we use standard abstract interpretations to derive an over-approximation of the set of unsafe initial states. The precondition is then the constraint corresponding to the complement of that set, under-approximating the set of safe initial states. This idea of complementation is not new, but previous attempts to exploit it have suffered from a loss of precision. Here we develop an iterative specialisation algorithm to give more precise, and in some cases optimal, safety conditions. The algorithm combines existing transformations, namely constraint specialisation, partial evaluation and a trace elimination transformation. The last two of these perform polyvariant specialisation, leading to disjunctive constraints which improve precision. The algorithm is implemented and tested on a benchmark suite of programs from the literature on precondition inference and from software verification competitions. Comment: Paper presented at the 34th International Conference on Logic Programming (ICLP 2018), Oxford, UK, July 14 to July 17, 2018. 18 pages, LaTeX.
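
    As a toy illustration of the CHC encoding (invented here, not an example from the paper), consider a loop that repeatedly subtracts 2 from x and asserts x = 0 on exit; the clauses below are written schematically in constraint notation, with the distinguished atom false marking assertion failure.

        % Program: while x > 0 do x := x - 2; assert(x = 0)
        init(X).                                  % initial states: precondition to be inferred
        loop(X)  :- init(X).
        loop(X1) :- loop(X), X > 0, X1 = X - 2.
        false    :- loop(X), X =< 0, X =\= 0.     % loop exit with x /= 0 violates the assertion

        % An abstract interpretation over-approximates the initial states that
        % can reach false (here at least all X < 0 and all odd X); the
        % complement of that approximation is the inferred precondition,
        % ideally X >= 0 and X mod 2 = 0.  A convex domain such as intervals
        % loses the parity information, which is where polyvariant
        % specialisation helps recover precision.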

    Upside-down Deduction

    Over recent years, several proposals have been made to enhance database systems with automated reasoning. In this article we analyze two such enhancements based on meta-interpretation. We consider, on the one hand, the theorem prover Satchmo and, on the other hand, the Alexander and Magic Set methods. Although they achieve different goals and are based on distinct reasoning paradigms, Satchmo and the Alexander or Magic Set methods can be similarly described by upside-down meta-interpreters, i.e., meta-interpreters implementing one reasoning principle in terms of the other. Upside-down meta-interpretation gives rise to simple and efficient implementations, but has not been investigated in the past. This article is devoted to studying this technique. We show that it permits one to inherit a search strategy from an inference engine instead of implementing it, and to combine bottom-up and top-down reasoning. These properties yield an explanation for the efficiency of Satchmo and a justification for the unconventional approach to top-down reasoning of the Alexander and Magic Set methods.
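
    The flavour of such an upside-down meta-interpreter can be conveyed by a short sketch in the spirit of Satchmo (an invented simplification, not the original code): forward rules written as Body ---> Head are executed by Prolog's top-down engine, which contributes its search and backtracking to bottom-up model generation.

        :- op(1150, xfx, --->).
        :- dynamic p/1, q/1, r/1.

        % Invented example rules: range-restricted, heads may be disjunctive,
        % and 'false' marks an integrity violation.
        true ---> p(a).
        p(X) ---> ( q(X) ; r(X) ).
        r(a) ---> false.

        % Find a violated rule instance (body holds, head does not), repair it
        % by asserting one disjunct of the head, and repeat; when no violated
        % instance remains, a model of the rules has been generated.
        satisfiable :-
            ( Body ---> Head ),
            call(Body),
            \+ holds(Head), !,
            Head \== false,              % a violated integrity rule: fail and backtrack
            satisfy(Head),
            satisfiable.
        satisfiable.

        holds(( A ; B )) :- !, ( holds(A) ; holds(B) ).
        holds(false)     :- !, fail.
        holds(A)         :- clause(A, true).

        satisfy(( A ; B )) :- !, ( satisfy(A) ; satisfy(B) ).   % case split
        satisfy(A)         :- assume(A).

        assume(A) :- asserta(A).
        assume(A) :- retract(A), fail.   % withdraw the assumption on backtracking

    Prolog's depth-first search and backtracking are inherited rather than implemented, which is precisely the property the article analyzes.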

    ARC-TEC : acquisition, representation and compilation of technical knowledge

    A global description of an expert system shell for the domain of mechanical engineering is presented. The ARC-TEC project constitutes an AI approach to realizing the CIM (computer-integrated manufacturing) idea. Along with conceptual solutions, it provides a continuous sequence of software tools for the acquisition, representation and compilation of technical knowledge. The shell combines the KADS knowledge-acquisition methodology, the KL-ONE representation theory and the WAM compilation technology. For its evaluation, a prototypical expert system for production planning is developed. A central part of the system is a knowledge base formalizing the relevant aspects of common sense in mechanical engineering. Thus, ARC-TEC is less general than the CYC project but broader than specific expert systems for planning or diagnosis.

    Query Evaluation in Recursive Databases
