
    Interpretations of Presburger Arithmetic in Itself

    Presburger arithmetic (PrA) is the true theory of the natural numbers with addition. We study interpretations of PrA in itself. We prove that all one-dimensional self-interpretations are definably isomorphic to the identity self-interpretation. To prove this, we show that all linear orders interpretable in (N,+) are scattered orders of finite Hausdorff rank, and that the ranks are bounded in terms of the dimension of the respective interpretations. From our result about self-interpretations of PrA it follows that PrA is not one-dimensionally interpretable in any of its finite subtheories. We note that the latter was conjectured by A. Visser.
    Comment: Published in proceedings of LFCS 201
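    To fix the notion at play, here is a rough sketch, in our own notation (the formula names δ, ε, α are not the paper's), of what a one-dimensional interpretation of PrA in (N,+) consists of:

```latex
% Hedged sketch: a one-dimensional interpretation of PrA in (N, +) is given by
% Presburger formulas (the names delta, epsilon, alpha are ours, not the paper's)
%   \delta(x)        -- domain of the interpretation,
%   \varepsilon(x,y) -- equivalence relation interpreting equality,
%   \alpha(x,y,z)    -- graph of the interpreted addition,
% such that the induced quotient structure is again a model of PrA:
\[
  \bigl(\{x \in \mathbb{N} : \delta(x)\} / \varepsilon,\ \alpha\bigr) \models \mathrm{PrA}.
\]
% The identity self-interpretation takes
\[
  \delta(x) := (x = x), \qquad \varepsilon(x,y) := (x = y), \qquad \alpha(x,y,z) := (x + y = z),
\]
% and an m-dimensional interpretation replaces single variables by m-tuples.
```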

    Response Characterization for Auditing Cell Dynamics in Long Short-term Memory Networks

    In this paper, we introduce a novel method to interpret recurrent neural networks (RNNs), particularly long short-term memory networks (LSTMs), at the cellular level. We propose a systematic pipeline for interpreting individual hidden-state dynamics within the network using response characterization methods. The ranked contribution of individual cells to the network's output is computed by analyzing a set of interpretable metrics of their decoupled step and sinusoidal responses. As a result, our method is able to uniquely identify neurons with insightful dynamics, quantify relationships between dynamical properties and test accuracy through ablation analysis, and interpret the impact of network capacity on a network's dynamical distribution. Finally, we demonstrate the generalizability and scalability of our method by evaluating it on a series of different benchmark sequential datasets.
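    As a rough illustration of the kind of cell-level probing described above (not the authors' code; the network, probe signals, and metrics below are illustrative stand-ins), one can feed decoupled step and sinusoidal inputs to a trained LSTM and summarize each hidden unit's response:

```python
# Minimal sketch of per-cell response characterization for an LSTM.
# Assumes a (trained) single-layer torch.nn.LSTM; the metrics are illustrative.
import math
import torch

def cell_responses(lstm: torch.nn.LSTM, seq_len: int = 100):
    t = torch.arange(seq_len, dtype=torch.float32)
    # Decoupled probe signals: a unit step and a unit-amplitude sinusoid.
    step = (t >= seq_len // 2).float().view(seq_len, 1, 1).repeat(1, 1, lstm.input_size)
    sine = torch.sin(2 * math.pi * t / 20).view(seq_len, 1, 1).repeat(1, 1, lstm.input_size)

    metrics = {}
    with torch.no_grad():
        for name, probe in (("step", step), ("sine", sine)):
            out, _ = lstm(probe)          # out: (seq_len, 1, hidden_size)
            h = out.squeeze(1)            # per-cell hidden-state trajectories
            # Illustrative per-cell summaries: settled value and response amplitude.
            metrics[name] = {
                "final_value": h[-1],
                "amplitude": h.max(dim=0).values - h.min(dim=0).values,
            }
    return metrics

# Rank cells by their step-response amplitude; the top-ranked cells are natural
# candidates for the ablation analysis mentioned in the abstract.
lstm = torch.nn.LSTM(input_size=1, hidden_size=32)
ranking = cell_responses(lstm)["step"]["amplitude"].argsort(descending=True)
```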

    Interpreting Embedding Models of Knowledge Bases: A Pedagogical Approach

    Knowledge bases are employed in a variety of applications, from natural language processing to semantic web search; alas, in practice their usefulness is hurt by their incompleteness. Embedding models attain state-of-the-art accuracy in knowledge base completion, but their predictions are notoriously hard to interpret. In this paper, we adapt "pedagogical approaches" (from the literature on neural networks) to interpret embedding models by extracting weighted Horn rules from them. We show how pedagogical approaches have to be adapted to take into account the large-scale relational aspects of knowledge bases, and we show their strengths and weaknesses experimentally.
    Comment: presented at the 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018), Stockholm, Sweden
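    A minimal sketch of the "pedagogical" idea (assumptions marked; the TransE-style scorer, threshold, and toy data are illustrative, not the paper's setup) is to treat the trained embedding model as an oracle, materialize the triples it predicts, and mine simple weighted Horn rules from those predictions:

```python
# Sketch: extract weighted rules body(x,y) => head(x,y) from an embedding oracle.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
entities, relations, dim = ["a", "b", "c", "d"], ["r1", "r2"], 8
E = {e: rng.normal(size=dim) for e in entities}   # toy entity embeddings
R = {r: rng.normal(size=dim) for r in relations}  # toy relation embeddings

def predicted(h, r, t, threshold=4.0):
    """Oracle: does the embedding model predict (h, r, t)? (TransE-style score)"""
    return np.linalg.norm(E[h] + R[r] - E[t]) < threshold

# Materialize the knowledge base as the model predicts it.
facts = {(h, r, t)
         for h, t in product(entities, entities) if h != t
         for r in relations if predicted(h, r, t)}

# Weight each candidate rule body(x,y) => head(x,y) by its confidence on the predictions.
for body, head in product(relations, relations):
    if body == head:
        continue
    support = [(h, t) for (h, r, t) in facts if r == body]
    if support:
        conf = sum((h, head, t) in facts for h, t in support) / len(support)
        print(f"{body}(x,y) => {head}(x,y)  weight={conf:.2f}")
```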

    Finding Streams in Knowledge Graphs to Support Fact Checking

    The volume and velocity of information generated online prevent current journalistic practices from fact-checking claims at the same rate. Computational approaches for fact checking may be the key to helping mitigate the risks of massive misinformation spread. Such approaches can be designed not only to be scalable and effective at assessing the veracity of dubious claims, but also to boost a human fact checker's productivity by surfacing relevant facts and patterns to aid their analysis. To this end, we present a novel, unsupervised, network-flow-based approach to determine the truthfulness of a statement of fact expressed as a (subject, predicate, object) triple. We view a knowledge graph of background information about real-world entities as a flow network, and knowledge as a fluid, abstract commodity. We show that computational fact checking of such a triple then amounts to finding a "knowledge stream" that emanates from the subject node and flows toward the object node through paths connecting them. Evaluation on a range of real-world and hand-crafted datasets of facts related to entertainment, business, sports, geography and more reveals that this network-flow model can be very effective at discerning true statements from false ones, outperforming existing algorithms on many test cases. Moreover, the model is expressive in its ability to automatically discover several useful path patterns and surface relevant facts that may help a human fact checker corroborate or refute a claim.
    Comment: Extended version of the paper in proceedings of ICDM 201
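    The flow view lends itself to a very small sketch (not the paper's Knowledge Stream algorithm; the toy graph and unit capacities are illustrative): treat background knowledge as a capacitated graph and use the maximum flow from subject to object as a crude truthfulness signal.

```python
# Sketch: fact checking as flow from the subject node to the object node.
import networkx as nx

G = nx.DiGraph()
# Toy background knowledge: (subject, object, predicate label).
edges = [
    ("Barack_Obama", "Hawaii", "born_in"),
    ("Hawaii", "United_States", "state_of"),
    ("Barack_Obama", "United_States", "president_of"),
]
for s, o, p in edges:
    G.add_edge(s, o, capacity=1.0, predicate=p)

def knowledge_flow(graph, subj, obj):
    """Max-flow from subject to object as a crude truthfulness signal."""
    if subj not in graph or obj not in graph:
        return 0.0
    value, _ = nx.maximum_flow(graph, subj, obj)
    return value

# Claim to check: (Barack_Obama, citizen_of, United_States).
print(knowledge_flow(G, "Barack_Obama", "United_States"))  # 2.0 on this toy graph
```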

    The Small-Is-Very-Small Principle

    The central result of this paper is the small-is-very-small principle for restricted sequential theories. The principle says, roughly, that whenever the given theory shows that a property has a small witness, i.e. a witness in every definable cut, then it shows that the property has a very small witness, i.e. a witness below a given standard number. We draw various consequences from the central result. For example (in rough formulations): (i) Every restricted, recursively enumerable sequential theory has a finitely axiomatized extension that is conservative w.r.t. formulas of complexity ≤ n. (ii) Every sequential model has, for any n, an extension that is elementary for formulas of complexity ≤ n, in which the intersection of all definable cuts is the natural numbers. (iii) We have reflection for Σ^0_2-sentences with sufficiently small witness in any consistent restricted theory U. (iv) Suppose U is recursively enumerable and sequential, and suppose further that every recursively enumerable and sequential V that locally interprets U also globally interprets U. Then U is mutually globally interpretable with a finitely axiomatized sequential theory. The paper contains some careful groundwork developing partial satisfaction predicates in sequential theories for the complexity measure "depth of quantifier alternations".
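    One informal way to render the central principle (notation simplified; see the paper for the precise statement for restricted sequential theories U and U-definable cuts J):

```latex
\[
\Bigl(\text{for every } U\text{-definable cut } J:\;
      U \vdash \exists x\, \bigl(x \in J \wedge \varphi(x)\bigr)\Bigr)
\;\Longrightarrow\;
\exists n \in \mathbb{N}:\; U \vdash \exists x \le \underline{n}\;\, \varphi(x).
\]
```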

    On the alleged simplicity of impure proof

    Roughly, a proof of a theorem is "pure" if it draws only on what is "close" or "intrinsic" to that theorem. Mathematicians employ a variety of terms to identify pure proofs, saying that a pure proof is one that avoids what is "extrinsic," "extraneous," "distant," "remote," "alien," or "foreign" to the problem or theorem under investigation. In the background of these attributions is the view that there is a distance measure (or a variety of such measures) between mathematical statements and proofs. Mathematicians have paid little attention to specifying such distance measures precisely, because in practice certain methods of proof have seemed self-evidently impure by design: think, for instance, of analytic geometry and analytic number theory. By contrast, mathematicians have paid considerable attention to whether such impurities are a good thing or to be avoided, and some have claimed that they are valuable because impure proofs are generally simpler than pure proofs. This article is an investigation of this claim, formulated more precisely by proof-theoretic means. After assembling evidence from proof theory that may be thought to support this claim, we will argue that on the contrary this evidence does not support the claim.