    Query DAGs: A Practical Paradigm for Implementing Belief-Network Inference

    We describe a new paradigm for implementing inference in belief networks, which consists of two steps: (1) compiling a belief network into an arithmetic expression called a Query DAG (Q-DAG); and (2) answering queries using a simple evaluation algorithm. Each node of a Q-DAG represents a numeric operation, a number, or a symbol for evidence. Each root node of a Q-DAG represents the answer to a network query, that is, the probability of some event of interest. It appears that Q-DAGs can be generated using any of the standard algorithms for exact inference in belief networks (we show how they can be generated using clustering and conditioning algorithms). The time and space complexity of a Q-DAG generation algorithm is no worse than the time complexity of the inference algorithm on which it is based. The complexity of a Q-DAG evaluation algorithm is linear in the size of the Q-DAG, and such inference amounts to a standard evaluation of the arithmetic expression it represents. The intended value of Q-DAGs is in reducing the software and hardware resources required to utilize belief networks in on-line, real-world applications. The proposed framework also facilitates the development of on-line inference on different software and hardware platforms due to the simplicity of the Q-DAG evaluation algorithm. Interestingly enough, Q-DAGs were found to serve other purposes: simple techniques for reducing Q-DAGs tend to subsume relatively complex optimization techniques for belief-network inference, such as network pruning and computation caching.
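
    As an informal illustration of the evaluation step, here is a small Python sketch that builds a tiny Q-DAG-like structure and evaluates it in time linear in the number of nodes. The node kinds, field names, and evidence encoding are assumptions made for this example, not the paper's data structures.

        # Minimal sketch of a Q-DAG-style structure and its linear-time
        # evaluation (node kinds and fields are illustrative, not the paper's).

        class QNode:
            def __init__(self, kind, value=None, children=()):
                self.kind = kind          # "num", "evidence", "add", or "mul"
                self.value = value        # constant, or (variable, state) for "evidence"
                self.children = children

        def evaluate(node, evidence, cache=None):
            """Evaluate a Q-DAG bottom-up; each node is computed once and cached,
            so the cost is linear in the number of nodes."""
            if cache is None:
                cache = {}
            if id(node) in cache:
                return cache[id(node)]
            if node.kind == "num":
                result = node.value
            elif node.kind == "evidence":
                var, state = node.value
                # Evidence indicator: 1 if the state is consistent with the
                # observed evidence (or the variable is unobserved), else 0.
                result = 1.0 if evidence.get(var, state) == state else 0.0
            elif node.kind == "add":
                result = sum(evaluate(c, evidence, cache) for c in node.children)
            else:  # "mul"
                result = 1.0
                for c in node.children:
                    result *= evaluate(c, evidence, cache)
            cache[id(node)] = result
            return result

        # Toy query: P(A=true) for a binary variable with prior 0.3.
        theta = QNode("num", 0.3)
        indicator = QNode("evidence", ("A", "true"))
        root = QNode("mul", children=(theta, indicator))
        print(evaluate(root, evidence={}))              # 0.3
        print(evaluate(root, evidence={"A": "false"}))  # 0.0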

    Bounded Quantifier Instantiation for Checking Inductive Invariants

    We consider the problem of checking whether a proposed invariant φ, expressed in first-order logic with quantifier alternation, is inductive, i.e., preserved by a piece of code. While the problem is undecidable, modern SMT solvers can sometimes solve it automatically. However, they employ powerful quantifier instantiation methods that may diverge, especially when φ is not preserved. A notable difficulty arises due to counterexamples of infinite size. This paper studies Bounded-Horizon instantiation, a natural method for guaranteeing the termination of SMT solvers. The method bounds the depth of terms used in the quantifier instantiation process. We show that this method is surprisingly powerful for checking quantified invariants in uninterpreted domains. Furthermore, by producing partial models it can help the user diagnose the case when φ is not inductive, especially when the underlying reason is the existence of infinite counterexamples. Our main technical result is that Bounded-Horizon is at least as powerful as instrumentation, a manual method that guarantees convergence of the solver by modifying the program so that it admits a purely universal invariant. We show that with a bound of 1 we can simulate a natural class of instrumentations, without the need to modify the code and in a fully automatic way. We also report on a prototype implementation on top of Z3, which we used to verify several examples using Bounded-Horizon with bound 1.
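
    To make the bounded instantiation concrete, here is a small hand-rolled sketch of bound-1 instantiation on top of z3py. The quantified fact, the term enumeration, and the axiom_instance helper are illustrative assumptions; they are not the paper's prototype or benchmarks.

        # Bounded-Horizon-style instantiation with bound 1, done manually:
        # instantiate the quantified axiom only with ground terms of depth <= 1,
        # then hand the quantifier-free problem to Z3.
        from z3 import DeclareSort, Function, Const, BoolSort, Solver, Not, Implies

        S = DeclareSort("S")
        f = Function("f", S, S)           # uninterpreted function
        p = Function("p", S, BoolSort())  # uninterpreted predicate
        a = Const("a", S)

        # Quantified fact we want to use: forall x. p(x) -> p(f(x)).
        def axiom_instance(t):
            return Implies(p(t), p(f(t)))

        depth0 = [a]
        depth1 = depth0 + [f(t) for t in depth0]  # terms of depth <= 1

        s = Solver()
        s.add(p(a))
        s.add(*[axiom_instance(t) for t in depth1])  # bounded instantiation
        s.add(Not(p(f(f(a)))))                       # negated goal: p(f(f(a)))

        # The instances p(a) -> p(f(a)) and p(f(a)) -> p(f(f(a))) are present,
        # so the check is unsat and the goal follows already at bound 1.
        print(s.check())  # unsat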

    On Deciding Local Theory Extensions via E-matching

    Satisfiability Modulo Theories (SMT) solvers incorporate decision procedures for theories of data types that commonly occur in software. This makes them important tools for automating verification problems. A limitation frequently encountered is that verification problems are often not fully expressible in the theories supported natively by the solvers. Many solvers allow the specification of application-specific theories as quantified axioms, but their handling is incomplete outside of narrow special cases. In this work, we show how SMT solvers can be used to obtain complete decision procedures for local theory extensions, an important class of theories that are decidable using finite instantiation of axioms. We present an algorithm that uses E-matching to generate instances incrementally during the search, significantly reducing the number of generated instances compared to eager instantiation strategies. We have implemented this algorithm in two SMT solvers and conducted an extensive experimental evaluation on benchmarks derived from verification conditions for heap-manipulating programs. We believe that our results are of interest to both the users and the developers of SMT solvers.
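
    As a rough illustration of instantiation driven by E-matching, the z3py snippet below attaches an explicit multi-trigger to a monotonicity axiom, a standard example of a local theory extension, so that the axiom is instantiated only for f-terms that already occur in the problem. The use of z3py and this particular axiom are assumptions for illustration; the paper's implementation and benchmarks differ.

        # Monotonicity of an uninterpreted function as a quantified axiom with
        # an explicit multi-trigger; E-matching generates instances only for
        # ground f-terms present in the query.
        from z3 import Ints, Function, IntSort, ForAll, MultiPattern, Implies, Solver, Not

        f = Function("f", IntSort(), IntSort())
        x, y, a, b = Ints("x y a b")

        mono = ForAll([x, y], Implies(x <= y, f(x) <= f(y)),
                      patterns=[MultiPattern(f(x), f(y))])

        s = Solver()
        s.add(mono)
        s.add(a <= b)
        s.add(Not(f(a) <= f(b)))  # negated goal: f(a) <= f(b)
        print(s.check())          # unsat; the instance with x=a, y=b suffices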

    Constraints on predicate invention

    This chapter describes an inductive learning method that derives logic programs and invents predicates when needed. The basic idea is to form the least common anti-instance (LCA) of selected seed examples. If the LCA is too general, it forms the starting point of a general-to-specific search which is guided by various constraints on argument dependencies and critical terms. A distinguishing feature of the method is its ability to introduce new predicates. Predicate invention involves three steps. First, the need for a new predicate is discovered and the arguments of the new predicate are determined using the same constraints that guide the search. In the second step, instances of the new predicate are abductively inferred. These instances form the input for the last step, in which the definition of the new predicate is induced by recursively applying the method again. We also outline how such a system could be more tightly integrated with an abductive learning system.
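
    For readers unfamiliar with the first step, the sketch below computes the least common anti-instance (also known as the least general generalization) of two atoms. The term representation and the variable-naming scheme are assumptions for illustration and do not follow the chapter's notation.

        # Least common anti-instance of two terms/atoms. Compound terms are
        # tuples ("functor", arg1, ...); constants are plain strings.
        import itertools

        def anti_instance(t1, t2, table, fresh):
            if t1 == t2:
                return t1  # identical subterms generalize to themselves
            if (isinstance(t1, tuple) and isinstance(t2, tuple)
                    and t1[0] == t2[0] and len(t1) == len(t2)):
                # Same functor and arity: generalize argument-wise.
                return (t1[0],) + tuple(anti_instance(s1, s2, table, fresh)
                                        for s1, s2 in zip(t1[1:], t2[1:]))
            # Otherwise replace the pair by a variable, reusing the same
            # variable for repeated occurrences of the same pair of subterms.
            if (t1, t2) not in table:
                table[(t1, t2)] = "X" + str(next(fresh))
            return table[(t1, t2)]

        # Seed examples: parent(ann, bob) and parent(eve, bob).
        e1 = ("parent", "ann", "bob")
        e2 = ("parent", "eve", "bob")
        print(anti_instance(e1, e2, {}, itertools.count(1)))
        # ('parent', 'X1', 'bob')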

    Second-Order Functions and Theorems in ACL2

    SOFT ('Second-Order Functions and Theorems') is a tool to mimic second-order functions and theorems in the first-order logic of ACL2. Second-order functions are mimicked by first-order functions that reference explicitly designated uninterpreted functions that mimic function variables. First-order theorems over these second-order functions mimic second-order theorems universally quantified over function variables. Instances of second-order functions and theorems are systematically generated by replacing function variables with functions. SOFT can be used to carry out program refinement inside ACL2, by constructing a sequence of increasingly stronger second-order predicates over one or more target functions: the sequence starts with a predicate that specifies requirements for the target functions, and ends with a predicate that provides executable definitions for the target functions. (In Proceedings ACL2 2015, arXiv:1509.0552)
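
    As a loose, non-ACL2 analogy of the pattern described above, the Python sketch below treats a "second-order" function as a template over a designated function variable and generates instances by plugging in concrete functions. The names and the length-preservation "theorem" are invented for illustration; SOFT itself encodes this quite differently inside ACL2's first-order logic.

        # A 'second-order' map over a designated function variable, and
        # instances obtained by replacing that variable with concrete functions.

        def map_template(fvar):
            # fvar plays the role of the designated function variable.
            def map_f(xs):
                return [fvar(x) for x in xs]
            return map_f

        def length_theorem_instance(fvar, xs):
            # Instance of a 'theorem' schema obtained by replacing the function
            # variable: mapping any fvar preserves list length.
            return len(map_template(fvar)(xs)) == len(xs)

        double = lambda n: 2 * n           # concrete function replacing the variable
        map_double = map_template(double)  # instance of the second-order function

        print(map_double([1, 2, 3]))                       # [2, 4, 6]
        print(length_theorem_instance(double, [1, 2, 3]))  # True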