
    Context-Specific Approximation in Probabilistic Inference

    There is evidence that the numbers in probabilistic inference don't really matter. This paper considers the idea that we can make a probabilistic model simpler by making fewer distinctions. Unfortunately, removing distinctions at the level of a whole variable in a Bayesian network seems too coarse; it is unlikely that a parent will make little difference for all values of the other parents. In this paper we consider an approximation scheme where distinctions can be ignored in some contexts but not in others. We elaborate on a notion of a parent context that allows a structured, context-specific decomposition of a probability distribution and the associated probabilistic inference scheme called probabilistic partial evaluation (Poole 1997). This paper presents a way to simplify a probabilistic model by ignoring distinctions whose probabilities are similar, a method to exploit the simpler model, a bound on the resulting errors, and some preliminary empirical results on simple networks.
    Comment: Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI 1998).
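    As a rough illustration of this kind of context-specific simplification (a sketch of the general idea, not the paper's actual algorithm), the following Python fragment merges entries of a hypothetical conditional probability table whose probabilities lie within a tolerance and reports the worst-case per-entry error introduced; the CPT values and the tolerance are illustrative assumptions.

```python
# Hypothetical CPT for P(alarm = true | burglary, earthquake),
# indexed by the parent values (burglary, earthquake).
cpt = {
    (True, True): 0.95,
    (True, False): 0.94,
    (False, True): 0.29,
    (False, False): 0.001,
}

def merge_similar(cpt, tol=0.05):
    """Group parent contexts whose probabilities lie within `tol` of each other
    and replace each group by its average: fewer distinctions, bounded error."""
    groups = []
    for ctx, p in sorted(cpt.items(), key=lambda kv: kv[1]):
        if groups and abs(p - groups[-1]["probs"][-1]) <= tol:
            groups[-1]["contexts"].append(ctx)
            groups[-1]["probs"].append(p)
        else:
            groups.append({"contexts": [ctx], "probs": [p]})
    simplified, max_err = {}, 0.0
    for g in groups:
        avg = sum(g["probs"]) / len(g["probs"])
        max_err = max(max_err, max(abs(p - avg) for p in g["probs"]))
        for ctx in g["contexts"]:
            simplified[ctx] = avg
    return simplified, max_err

simplified, err = merge_similar(cpt)
print(simplified)                          # the (True, *) rows collapse to 0.945
print("worst-case per-entry error:", err)  # 0.005
```

    In this toy table the distinction on the second parent is dropped only in the context where the first parent is true, which is the context-specific flavour described in the abstract.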

    Demand Analysis with Partial Predicates

    In order to alleviate the inefficiencies caused by the interaction of the logic and functional sides, integrated languages may take advantage of demand information, i.e. knowing in advance which computations are needed, and to which extent, in a particular context. This work studies demand analysis, which is closely related to backwards strictness analysis, in a semantic framework of partial predicates, which in turn are constructive realizations of ideals in a domain. This allows us to give a concise, unified presentation of demand analysis, to relate it to other analyses based on abstract interpretation or strictness logics, to give some hints for the implementation, and, more importantly, to prove the soundness of our analysis based on demand equations. There are also some innovative results. One of them is that a set-constraint-based analysis has been derived in a stepwise manner using ideas taken from the area of program transformation. The other is the possibility of using program transformation itself to perform the analysis, especially in those domains of properties where algorithms based on constraint solving are too weak.
    Comment: This is the extended version of a paper accepted for publication in a forthcoming special issue of Theory and Practice of Logic Programming on Multiparadigm and Constraint Programming (Falaschi and Maher, eds.). Appendices are missing in the printed version.
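    To make the flavour of demand analysis concrete, here is a much-simplified backwards propagation over a toy expression language in Python (a textbook-style strictness sketch under an assumed AST and demand notion, not the partial-predicate framework of the paper): it computes which variables are certainly needed whenever the overall result is needed.

```python
from dataclasses import dataclass
from typing import List

# Toy expression language: variables, literals, strict primitives, if-then-else.
@dataclass
class Var:
    name: str

@dataclass
class Lit:
    value: int

@dataclass
class Prim:            # primitive operation, strict in all of its arguments
    op: str
    args: List[object]

@dataclass
class If:
    cond: object
    then: object
    other: object

def demanded_vars(expr):
    """Backwards demand propagation: assuming the result of `expr` is demanded,
    return the set of variables that are certainly demanded as well."""
    if isinstance(expr, Var):
        return {expr.name}
    if isinstance(expr, Lit):
        return set()
    if isinstance(expr, Prim):
        return set().union(*(demanded_vars(a) for a in expr.args))
    if isinstance(expr, If):
        # the condition is always needed; a variable in the branches is certainly
        # needed only if *both* branches need it
        return demanded_vars(expr.cond) | (
            demanded_vars(expr.then) & demanded_vars(expr.other))
    raise TypeError(expr)

# f x y z = if x > 0 then y + z else y
body = If(Prim(">", [Var("x"), Lit(0)]),
          Prim("+", [Var("y"), Var("z")]),
          Var("y"))
print(demanded_vars(body))   # {'x', 'y'}: z is needed in only one branch
```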

    Predicate Logic as a Modelling Language: The IDP System

    With the technology of the time, Kowalski's seminal 1974 paper "Predicate Logic as a Programming Language" was a breakthrough for the use of logic in computer science. It introduced two fundamental ideas: on the declarative side, the use of the Horn clause fragment of classical logic, which was soon extended with negation as failure; on the procedural side, the procedural interpretation, which made it possible to write algorithms in the formalism. Since then, strong progress has been made both in the declarative understanding of the logic programming formalism and in automated reasoning technologies, particularly in SAT solving, Constraint Programming and Answer Set Programming. This has paved the way for the development of an extension of logic programming that embodies a purer view of logic as a modelling language and of its role in problem solving. In this paper, we present the IDP language and system. The language is essentially classical logic extended with one of logic programming's most important contributions to knowledge representation: the representation of complex definitions as rule sets under the well-founded semantics. The system is a knowledge base system: a system in which complex declarative information is stored in a knowledge base that can be used to solve different computational problems by applying multiple forms of inference. In this view, theories are declarative modellings, bags of information, descriptions of possible states of affairs. They are neither procedures nor descriptions of computational problems. As such, the IDP language and system preserve the fundamental idea of a declarative reading of logic programs, while they break with the fundamental idea of the procedural interpretation of logic programs.
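    To give a flavour of the "definitions as rule sets" ingredient without writing actual IDP code (the sketch below is plain Python under an assumed edge relation, not IDP syntax): for a definition that does not use negation, the well-founded model coincides with the least fixpoint of the rules, which is what the fragment computes for the usual inductive definition of reachability.

```python
# Inductive definition of reachability, read as a rule set:
#   reach(x) <- x = source.
#   reach(y) <- reach(x) and edge(x, y).
# For negation-free definitions like this one, the well-founded model is
# simply the least fixpoint of the rule operator.

edges = {("a", "b"), ("b", "c"), ("d", "e")}   # illustrative edge relation
source = "a"

def least_fixpoint_reach(edges, source):
    reach = set()
    while True:
        new = {source} | {y for (x, y) in edges if x in reach}
        if new == reach:
            return reach
        reach = new

print(least_fixpoint_reach(edges, source))   # {'a', 'b', 'c'}; 'd' and 'e' are not derivable
```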

    Transforming Proof Tableaux of Hoare Logic into Inference Sequences of Rewriting Induction

    A proof tableau of Hoare logic is an annotated program with pre- and post-conditions, which corresponds to an inference tree of Hoare logic. In this paper, we show that a proof tableau for partial correctness can be transformed into an inference sequence of rewriting induction for constrained rewriting. We also show that the resulting sequence is a valid proof for an inductive theorem corresponding to the Hoare triple if the constrained rewriting system obtained from the program is terminating. Such a valid proof, together with termination of the constrained rewriting system, implies total correctness of the program w.r.t. the Hoare triple. The transformation enables us to apply techniques for proving termination of constrained rewriting, together with proof tableaux for partial correctness, to proving total correctness of programs.
    Comment: In Proceedings WPTE 2017, arXiv:1802.0586
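    To illustrate the objects involved (an informal reconstruction under an assumed example program and rule shapes, not the paper's transformation): an annotated counting loop can be read as a constrained rewrite system whose rules fire under the loop and exit conditions, and termination of that system together with partial correctness yields total correctness. A minimal Python simulation of such a system:

```python
# Program (with pre/post-conditions, assuming x0 >= 0):
#   { x = x0, y = 0 }  while x > 0 do x := x - 1; y := y + 1  { y = x0 }
#
# Read as a constrained rewrite system (illustrative encoding):
#   loop(x, y) -> loop(x - 1, y + 1)   [ x > 0 ]
#   loop(x, y) -> ret(y)               [ x <= 0 ]
# Every rewrite sequence from loop(x0, 0) terminates, and the reachable
# normal form ret(y) satisfies the post-condition y = x0.

def step(term):
    """Apply the first rule whose constraint holds; None means normal form."""
    head, *args = term
    if head == "loop":
        x, y = args
        return ("loop", x - 1, y + 1) if x > 0 else ("ret", y)
    return None

def normal_form(term):
    while True:
        nxt = step(term)
        if nxt is None:
            return term
        term = nxt

x0 = 5
print(normal_form(("loop", x0, 0)))   # ('ret', 5), i.e. y = x0 as the Hoare triple claims
```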

    Relay: A High-Level Compiler for Deep Learning

    Frameworks for writing, compiling, and optimizing deep learning (DL) models have recently enabled progress in areas like computer vision and natural language processing. Extending these frameworks to accommodate the rapidly diversifying landscape of DL models and hardware platforms presents challenging tradeoffs between expressivity, composability, and portability. We present Relay, a new compiler framework for DL. Relay's functional, statically typed intermediate representation (IR) unifies and generalizes existing DL IRs to express state-of-the-art models. The introduction of Relay's expressive IR requires careful design of domain-specific optimizations, addressed via Relay's extension mechanisms. Using these extension mechanisms, Relay supports a unified compiler that can target a variety of hardware platforms. Our evaluation demonstrates Relay's competitive performance for a broad class of models and devices (CPUs, GPUs, and emerging accelerators). Relay's design demonstrates how a unified IR can provide expressivity, composability, and portability without compromising performance.
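    As a rough sketch of what a small functional, statically typed tensor IR looks like (purely illustrative; these dataclasses and operator names are assumptions, not Relay's actual API), the fragment below represents expressions as values with statically known tensor types and runs a minimal shape inference over a dense-plus-ReLU composition.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class TensorType:
    shape: Tuple[int, ...]
    dtype: str = "float32"

@dataclass
class Var:
    name: str
    ty: TensorType

@dataclass
class Call:
    op: str                  # e.g. "dense" or "relu" -- illustrative operator names
    args: List[object]

def infer_type(expr):
    """Minimal static shape/type inference for the two toy operators."""
    if isinstance(expr, Var):
        return expr.ty
    if isinstance(expr, Call):
        arg_tys = [infer_type(a) for a in expr.args]
        if expr.op == "relu":
            return arg_tys[0]
        if expr.op == "dense":          # (n, k) x (m, k) -> (n, m)
            (n, k), (m, k2) = arg_tys[0].shape, arg_tys[1].shape
            assert k == k2, "contraction dimensions must agree"
            return TensorType((n, m), arg_tys[0].dtype)
    raise TypeError(expr)

x = Var("x", TensorType((1, 784)))
w = Var("w", TensorType((128, 784)))
y = Call("relu", [Call("dense", [x, w])])
print(infer_type(y))   # TensorType(shape=(1, 128), dtype='float32')
```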

    Learning implicitly in reasoning in PAC-Semantics

    We consider the problem of answering queries about formulas of propositional logic based on background knowledge partially represented explicitly as other formulas, and partially represented as partially obscured examples independently drawn from a fixed probability distribution, where the queries are answered with respect to a weaker semantics than usual -- PAC-Semantics, introduced by Valiant (2000) -- that is defined using the distribution of examples. We describe a fairly general, efficient reduction to limited versions of the decision problem for a proof system (e.g., bounded-space treelike resolution, bounded-degree polynomial calculus, etc.) from corresponding versions of the reasoning problem where some of the background knowledge is not explicitly given as formulas, but is only learnable from the examples. Crucially, we do not generate an explicit representation of the knowledge extracted from the examples, and so the "learning" of the background knowledge is done only implicitly. As a consequence, this approach can utilize formulas as background knowledge that are not perfectly valid over the distribution -- essentially the analogue of agnostic learning here.
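    A toy Python sketch of the "witnessed evaluation" intuition behind PAC-Semantics (a simplification under assumed data and threshold, not the paper's proof-system reduction): a clause query is accepted when, on a large enough fraction of partially masked examples, some literal is already visibly satisfied.

```python
import random

def witnessed(clause, example):
    """A clause (a list of (variable, wanted_value) literals) is *witnessed* true
    on a partially obscured example if some literal is visibly satisfied."""
    return any(example.get(var) == wanted for var, wanted in clause)

def pac_accept(clause, examples, epsilon=0.1):
    """Accept the query if the clause is witnessed on at least a (1 - epsilon)
    fraction of the examples -- a crude stand-in for (1 - epsilon)-validity."""
    hits = sum(witnessed(clause, ex) for ex in examples)
    return hits >= (1 - epsilon) * len(examples)

# Illustrative data: x1 is almost always true, x2 is often masked (absent).
random.seed(0)
examples = []
for _ in range(1000):
    ex = {"x1": random.random() < 0.97}
    if random.random() < 0.5:            # x2 observed only half of the time
        ex["x2"] = random.choice([True, False])
    examples.append(ex)

query = [("x1", True), ("x2", True)]     # the clause  x1 OR x2
print(pac_accept(query, examples))       # True on this data
```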

    Inferring Acceptance and Rejection in Dialogue by Default Rules of Inference

    This paper discusses the processes by which conversants in a dialogue can infer whether their assertions and proposals have been accepted or rejected by their conversational partners. It expands on previous work by showing that logical consistency is a necessary indicator of acceptance, but that it is not sufficient, and that logical inconsistency is sufficient as an indicator of rejection, but it is not necessary. I show how conversants can use information structure and prosody as well as logical reasoning in distinguishing between acceptances and logically consistent rejections, and relate this work to previous work on implicature and default reasoning by introducing three new classes of rejection: implicature rejections, epistemic rejections and deliberation rejections. I show how these rejections are inferred as a result of default inferences, which, by other analyses, would have been blocked by the context. In order to account for these facts, I propose a model of the common ground that allows these default inferences to go through, and show how the model, originally proposed to account for the various forms of acceptance, can also model all types of rejection.
    Comment: 37 pages, uses fullpage, lingmacros, name

    Markov Logic Networks for Natural Language Question Answering

    Our goal is to answer elementary-level science questions using knowledge extracted automatically from science textbooks, expressed in a subset of first-order logic. Given the incomplete and noisy nature of these automatically extracted rules, Markov Logic Networks (MLNs) seem a natural model to use, but the exact way of leveraging MLNs is by no means obvious. We investigate three ways of applying MLNs to our task. In the first, we simply use the extracted science rules directly as MLN clauses. Unlike typical MLN applications, our domain has long and complex rules, leading to an unmanageable number of groundings. We exploit the structure present in hard constraints to improve tractability, but the formulation remains ineffective. In the second approach, we instead interpret science rules as describing prototypical entities, thus mapping rules directly to grounded MLN assertions, whose constants are then clustered using existing entity resolution methods. This drastically simplifies the network, but still suffers from brittleness. Finally, our third approach, called Praline, uses MLNs to align the lexical elements as well as define and control how inference should be performed in this task. Our experiments, demonstrating a 15% accuracy boost and a 10x reduction in runtime, suggest that the flexibility and different inference semantics of Praline are a better fit for the natural language question answering task.
    Comment: 7 pages, 1 figure, StarAI workshop at UAI'1
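    The grounding blow-up mentioned above is easy to see concretely: the number of ground instances of a single first-order rule grows as |C|^v in the number of distinct variables v, as the short sketch below shows (the rule, predicate names, and constant count are illustrative assumptions, not the extracted rules of the paper).

```python
from itertools import product

constants = ["c%d" % i for i in range(50)]   # say 50 entities mentioned in the text

# One extracted science rule with four distinct variables, schematically:
#   Animal(x) & Eats(x, y) & PartOf(z, y) & Provides(z, w) => Gets(x, w)
variables = ["x", "y", "z", "w"]

groundings = product(constants, repeat=len(variables))
print(next(groundings))                      # ('c0', 'c0', 'c0', 'c0'), one of many
print(len(constants) ** len(variables))      # 6,250,000 ground clauses for ONE rule
```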

    Reasoning with Higher-Order Abstract Syntax in a Logical Framework

    Logical frameworks based on intuitionistic or linear logics with higher-type quantification have been successfully used to give high-level, modular, and formal specifications of many important judgments in the area of programming languages and inference systems. Given such specifications, it is natural to consider proving properties about the specified systems in the framework: for example, given the specification of evaluation for a functional programming language, prove that the language is deterministic or that evaluation preserves types. One challenge in developing a framework for such reasoning is that higher-order abstract syntax (HOAS), an elegant and declarative treatment of object-level abstraction and substitution, is difficult to treat in proofs involving induction. In this paper, we present a meta-logic that can be used to reason about judgments coded using HOAS; this meta-logic is an extension of a simple intuitionistic logic that admits higher-order quantification over simply typed lambda-terms (key ingredients for HOAS) as well as induction and a notion of definition. We explore the difficulties of formal meta-theoretic analysis of HOAS encodings by considering encodings of intuitionistic and linear logics, and formally derive the admissibility of cut for important subsets of these logics. We then propose an approach to avoid the apparent tradeoff between the benefits of higher-order abstract syntax and the ability to analyze the resulting encodings. We illustrate this approach through examples involving the simple functional and imperative programming languages PCF and PCF:=. We formally derive such properties as unicity of typing, subject reduction, determinacy of evaluation, and the equivalence of transition semantics and natural semantics presentations of evaluation.
    Comment: 56 pages, 21 tables; revised in light of reviewer comments; to appear in ACM Transactions on Computational Logic
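    Although the paper works inside a logical framework, the HOAS idea itself can be illustrated in any host language with functions (the sketch below is such an illustration in Python, with an assumed tiny term language, not the paper's meta-logic): object-level binders are represented by host-level functions, so object-level substitution is just host-level application.

```python
from dataclasses import dataclass
from typing import Callable

# Higher-order abstract syntax for the untyped lambda calculus: the binder in
# Lam is a *host-language* function, so capture-avoiding substitution comes for
# free from Python's own function application.
@dataclass
class Lam:
    body: Callable[[object], object]   # maps the bound variable to the body

@dataclass
class App:
    fun: object
    arg: object

def whnf(term):
    """Reduce to weak head normal form, using host application as substitution."""
    if isinstance(term, App):
        fun = whnf(term.fun)
        if isinstance(fun, Lam):
            return whnf(fun.body(term.arg))   # object-level beta = host-level call
        return App(fun, term.arg)
    return term

const = Lam(lambda x: Lam(lambda y: x))           # \x. \y. x
print(whnf(App(App(const, "first"), "second")))   # 'first'
```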

    The Geometry of Types (Long Version)

    We show that time complexity analysis of higher-order functional programs can be effectively reduced to an arguably simpler (although computationally equivalent) verification problem, namely checking first-order inequalities for validity. This is done by giving an efficient inference algorithm for linear dependent types which, given a PCF term, outputs both a linear dependent type and a cost expression for the term, together with a set of proof obligations. In fact, the output type judgement is derivable iff all proof obligations are valid. This, coupled with the already known relative completeness of linear dependent types, ensures that no information is lost, i.e., that there are no false positives or negatives. Moreover, the procedure reflects the difficulty of the original problem: simple PCF terms give rise to sets of proof obligations which are easy to solve. The latter can then be put in a format suitable for automatic or semi-automatic verification by external solvers. Ongoing experimental evaluation has produced encouraging results, which are briefly presented in the paper.
    Comment: 27 pages, 1 figure
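    As a hedged illustration of the last step (the inequality below is a hypothetical proof obligation, not one produced by the actual inference algorithm, and the snippet assumes the z3-solver package is installed): a first-order inequality is valid exactly when its negation is unsatisfiable, which an external SMT solver can check directly.

```python
from z3 import Int, ForAll, Implies, Not, Solver, unsat

n = Int("n")
# Hypothetical proof obligation: for every n >= 0, the inferred cost bound
# 3*n + 2 dominates the cost expression 2*n + 1.
obligation = ForAll([n], Implies(n >= 0, 2 * n + 1 <= 3 * n + 2))

s = Solver()
s.add(Not(obligation))          # the obligation is valid iff its negation is unsat
print("valid" if s.check() == unsat else "not valid")   # prints: valid
```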