Query Evaluation in Deductive Databases
It is desirable to answer queries posed to deductive databases by computing fixpoints, because such computations are directly amenable to set-oriented fact processing. However, the classical fixpoint procedures based on bottom-up processing, the naive and semi-naive methods, are rather primitive and often inefficient. In this article, we rely on bottom-up meta-interpretation to formalize a new fixpoint procedure that performs a different kind of reasoning: we specify a top-down query answering method, which we call the Backward Fixpoint Procedure. Then, we reconsider query evaluation methods for recursive databases. First, we show that the methods based on rewriting, on the one hand, and the methods based on resolution, on the other, both implement the Backward Fixpoint Procedure. Second, we interpret the rewritings of the Alexander and Magic Set methods as specializations of the Backward Fixpoint Procedure. Finally, we argue that such a rewriting is also needed in a database context for efficiently implementing the resolution-based methods. Thus, the methods based on rewriting and the methods based on resolution implement the same top-down evaluation of the original database rules by means of auxiliary rules processed bottom-up.
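To make the naive/semi-naive contrast concrete, here is a minimal Python sketch of both bottom-up fixpoint computations for the standard transitive-closure program path(X,Z) :- path(X,Y), edge(Y,Z). The relation names, function names, and toy facts are illustrative assumptions, not taken from the article:

    # A minimal sketch of naive vs. semi-naive bottom-up fixpoint evaluation
    # for one recursive Datalog rule (transitive closure). Illustrative only.

    def naive_tc(edges):
        """Naive fixpoint: re-derive everything from scratch each round."""
        path = set(edges)
        while True:
            derived = {(x, z) for (x, y) in path for (y2, z) in edges if y == y2}
            new_path = path | derived
            if new_path == path:      # fixpoint reached: no new facts
                return path
            path = new_path

    def semi_naive_tc(edges):
        """Semi-naive fixpoint: join only the *delta* (facts new in the
        previous round) against the base relation, avoiding rework."""
        path = set(edges)
        delta = set(edges)
        while delta:
            derived = {(x, z) for (x, y) in delta for (y2, z) in edges if y == y2}
            delta = derived - path    # keep only genuinely new facts
            path |= delta
        return path

    edges = {("a", "b"), ("b", "c"), ("c", "d")}
    assert naive_tc(edges) == semi_naive_tc(edges)

Both loops compute the same closure; the semi-naive variant does strictly less join work per round, which is the inefficiency of the naive method that the abstract alludes to.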
Learning-assisted Theorem Proving with Millions of Lemmas
Large formal mathematical libraries consist of millions of atomic inference steps that give rise to a corresponding number of proved statements (lemmas). Analogously to informal mathematical practice, only a tiny fraction of such statements is named and re-used in later proofs by formal mathematicians. In this work, we suggest and implement criteria estimating the usefulness of HOL Light lemmas for proving further theorems. We use these criteria to mine the large inference graph of the lemmas in the HOL Light and Flyspeck libraries, adding up to millions of the best lemmas to the pool of statements that can be re-used in later proofs. We show that, in combination with learning-based relevance filtering, such methods significantly strengthen automated theorem proving of new conjectures over large formal mathematical libraries such as Flyspeck.
Comment: journal version of arXiv:1310.2797 (which was submitted to the LPAR conference)
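The paper's concrete usefulness criteria are not reproduced here, but the flavor of mining an inference graph for valuable lemmas can be sketched in Python. The scoring formula (widely re-used, cheap to derive) and the toy graph below are hypothetical illustrations, not the paper's definitions:

    # Hypothetical lemma-usefulness scoring over an inference graph:
    # lemmas used by many downstream proofs, but with small dependency
    # sets of their own, score highly. Formula and toy data are assumed.
    from collections import defaultdict

    def usefulness_scores(uses):
        """`uses` maps each statement to the set of lemmas its proof uses.
        Score = (# distinct proofs using the lemma)
                / (1 + size of the lemma's own dependency set)."""
        used_by = defaultdict(set)
        for stmt, deps in uses.items():
            for lemma in deps:
                used_by[lemma].add(stmt)
        return {lemma: len(stmts) / (1 + len(uses.get(lemma, ())))
                for lemma, stmts in used_by.items()}

    # Toy graph: three theorems use lem_a; lem_a's proof uses lem_b.
    uses = {"thm1": {"lem_a"}, "thm2": {"lem_a"},
            "thm3": {"lem_a"}, "lem_a": {"lem_b"}}
    scores = usefulness_scores(uses)
    print(sorted(scores.items(), key=lambda kv: -kv[1]))  # lem_a ranks first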
A large annotated corpus for learning natural language inference
Understanding entailment and contradiction is fundamental to understanding natural language, and inference about entailment and contradiction is a valuable testing ground for the development of semantic representations. However, machine learning research in this area has been dramatically limited by the lack of large-scale resources. To address this, we introduce the Stanford Natural Language Inference corpus, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning. At 570K pairs, it is two orders of magnitude larger than all other resources of its type. This increase in scale allows lexicalized classifiers to outperform some sophisticated existing entailment models, and it allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.
Comment: To appear at EMNLP 2015. The data will be posted shortly before the conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli
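As a usage sketch, labeled sentence pairs of this kind are typically distributed as JSON Lines; the field names below (sentence1, sentence2, gold_label) and the no-consensus marker "-" are assumptions about the release format, not guaranteed by the abstract:

    # A minimal sketch of reading labeled premise/hypothesis pairs,
    # assuming a JSON Lines layout with the field names shown here.
    import json

    def load_pairs(path):
        pairs = []
        with open(path, encoding="utf-8") as f:
            for line in f:
                ex = json.loads(line)
                if ex["gold_label"] != "-":   # skip pairs with no consensus label
                    pairs.append((ex["sentence1"], ex["sentence2"], ex["gold_label"]))
        return pairs

    # pairs = load_pairs("snli_1.0_train.jsonl")
    # each item: (premise, hypothesis, "entailment" | "contradiction" | "neutral")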
Connectionist Inference Models
The performance of symbolic inference tasks has long been a challenge to connectionists. In this paper, we present an extended survey of this area. Existing connectionist inference systems are reviewed, with particular reference to how they perform variable binding and rule-based reasoning, and whether they involve distributed or localist representations. The benefits and disadvantages of different representations and systems are outlined, and conclusions are drawn regarding the capabilities of connectionist inference systems when compared with symbolic inference systems or when used for cognitive modeling.
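One classic variable-binding scheme from this literature, the tensor-product representation, can be sketched in a few lines. This is a generic illustration of the binding problem the survey refers to, not a reconstruction of any particular surveyed system; the vectors are random stand-ins for learned or designed codes:

    # Tensor-product variable binding: a role (variable) vector is bound
    # to a filler (value) vector by their outer product, and the filler
    # is recovered by contracting the binding with the role vector.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 64
    role_X = rng.standard_normal(d)
    role_X /= np.linalg.norm(role_X)          # unit-norm role for exact recovery
    filler_john = rng.standard_normal(d)

    binding = np.outer(role_X, filler_john)   # bind: X := john
    recovered = role_X @ binding              # unbind: query role X

    print(np.allclose(recovered, filler_john))  # True (up to float error)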
Coordinating visualizations of polysemous action: Values added for grounding proportion
We contribute to research on visualization as an epistemic learning tool by inquiring into the didactical potential of having students visualize one phenomenon in accord with two different partial meanings of the same concept. Twenty-two Grade 4-6 students participated in a design study that investigated the emergence of proportional-equivalence notions from mediated perceptuomotor schemas. Working as individuals or in pairs in tutorial clinical interviews, students solved non-symbolic interaction problems that utilized remote-sensing technology. Next, they used symbolic artifacts interpolated into the problem space as semiotic means to objectify, in the mathematical register, a variety of both additive and multiplicative solution strategies. Finally, they reflected on tensions between these competing visualizations of the space. Micro-ethnographic analyses of episodes from three paradigmatic case studies suggest that students reconciled semiotic conflicts by generating heuristic logico-mathematical inferences that integrated competing meanings into cohesive conceptual networks. These inferences hinged on revisualizing additive elements multiplicatively. Implications are drawn for rethinking didactical design for proportions.
Compositional Vector Space Models for Knowledge Base Completion
Knowledge base (KB) completion adds new facts to a KB by making inferences from existing facts, for example by inferring nationality(X,Y) with high likelihood from bornIn(X,Y). Most previous methods infer simple one-hop relational synonyms like this, or use as evidence a multi-hop relational path treated as an atomic feature, like bornIn(X,Z) -> containedIn(Z,Y). This paper presents an approach that reasons about conjunctions of multi-hop relations non-atomically, composing the implications of a path using a recursive neural network (RNN) that takes as inputs vector embeddings of the binary relations in the path. Not only does this allow us to generalize to paths unseen at training time, but also, with a single high-capacity RNN, to predict new relation types not seen when the compositional model was trained (zero-shot learning). We assemble a new dataset of over 52M relational triples, and show that our method improves over a traditional classifier by 11%, and over a method leveraging pre-trained embeddings by 7%.
Comment: The 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference of the Asian Federation of Natural Language Processing, 2015
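The path-composition idea can be sketched as follows: a shared RNN cell folds the embeddings of the relations along a path into a single path vector, which is then scored against a candidate target relation. The dimensions, tanh cell, and dot-product scoring below are illustrative assumptions, not the paper's exact parameterization:

    # A minimal sketch of composing a multi-hop relation path with a
    # recursive neural network, in the spirit described above.
    import numpy as np

    rng = np.random.default_rng(1)
    d = 50
    W = rng.standard_normal((d, 2 * d)) * 0.1   # shared composition weights
    b = np.zeros(d)

    def compose_path(relation_vecs):
        """Fold relation embeddings left-to-right: h <- tanh(W [h; r] + b)."""
        h = relation_vecs[0]
        for r in relation_vecs[1:]:
            h = np.tanh(W @ np.concatenate([h, r]) + b)
        return h

    # Toy relation embeddings (learned in practice, random here).
    emb = {name: rng.standard_normal(d)
           for name in ["bornIn", "containedIn", "nationality"]}

    path_vec = compose_path([emb["bornIn"], emb["containedIn"]])
    score = path_vec @ emb["nationality"]   # higher => path implies relation
    print(score)

Because the composition weights W are shared across paths, an unseen path is just a new sequence fed through the same cell, which is what enables the generalization and zero-shot behavior the abstract describes.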