6 research outputs found

    Unsupervised Learning of Relational Entailment Graphs from Text

    Recognizing textual entailment and paraphrasing is critical to many core natural language processing applications, including question answering and semantic parsing. The surface form of a sentence that answers a question such as “Does Facebook own Instagram?” frequently does not directly correspond to the form of the question, but is rather a paraphrase or an expression such as “Facebook bought Instagram” that entails the answer. Relational entailments (e.g., buys entails owns) are crucial for bridging the gap between queries and text resources. In this thesis, we describe different unsupervised approaches to constructing relational entailment graphs, with typed relations (e.g., company buys company) as nodes and entailment as directed edges. The entailment graphs provide an explainable resource for downstream tasks such as question answering; however, existing methods suffer from the noise and sparsity inherent to the data. We extract predicate-argument structures from large multiple-source news corpora using a fast Combinatory Categorial Grammar parser. We compute entailment scores between relations based on the Distributional Inclusion Hypothesis, which states that a word (relation) p entails another word (relation) q if and only if, in any context where p can be used, q can be used in its place. The entailment scores are used to build local entailment graphs. We then build global entailment graphs by exploiting the dependencies between entailment rules. Previous work has used transitivity constraints, but these are intractable on large graphs. We instead propose a scalable method that learns globally consistent similarity scores based on new soft constraints that consider both the structure across typed entailment graphs and the structure inside each graph. We show that our method significantly improves the entailment graphs. Additionally, we show the duality of entailment graph induction with the task of link prediction. Link prediction infers missing relations between entities in an incomplete knowledge graph and discovers new facts. We present a new method in which link prediction on the knowledge graph of assertions extracted from raw text is used to improve entailment graphs learned from the same text; the entailment graphs are in turn used to improve link prediction. Finally, we define the contextual link prediction task, which uses both the structure of the knowledge graph of assertions and their textual contexts. We fine-tune pre-trained language models with an unsupervised contextual link prediction objective, augment the existing assertions with novel predictions of our model, and use them to build higher-quality entailment graphs. Similarly, we show that the entailment graphs improve the contextual link prediction task.
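
    The local scoring step can be made concrete with a small sketch. The following is a minimal illustration assuming a Weeds-precision-style inclusion score; the relation names, argument-pair counts, and the 0.7 threshold are invented for illustration and are not values from the thesis.

    # Local entailment scoring under the Distributional Inclusion Hypothesis
    # (DIH). All relations, counts, and the threshold below are hypothetical.

    # For each typed relation, how often it was observed with each argument
    # pair (the predicate-argument structures extracted from text).
    features = {
        "company.buys.company": {
            ("Facebook", "Instagram"): 4,
            ("Google", "YouTube"): 3,
        },
        "company.owns.company": {
            ("Facebook", "Instagram"): 6,
            ("Google", "YouTube"): 5,
            ("Microsoft", "GitHub"): 2,
            ("Apple", "Beats"): 4,
        },
    }

    def dih_score(p, q):
        """Weighted fraction of p's argument contexts that also occur with q.
        Under the DIH, a high score suggests that p entails q."""
        fp, fq = features[p], features[q]
        shared = sum(w for ctx, w in fp.items() if ctx in fq)
        total = sum(fp.values())
        return shared / total if total else 0.0

    # Local entailment graph: typed relations as nodes, a directed edge
    # p -> q whenever the inclusion score clears the threshold.
    relations = list(features)
    edges = [(p, q, dih_score(p, q))
             for p in relations for q in relations
             if p != q and dih_score(p, q) >= 0.7]

    print(edges)  # [('company.buys.company', 'company.owns.company', 1.0)]

    Here buys entails owns but not the reverse, because every context of buys also occurs with owns while owns has contexts of its own; the global step described above would then adjust such local scores for consistency across and within the typed graphs.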

    Temporal and Aspectual Entailment

    Inferences regarding "Jane's arrival in London" from predications such as "Jane is going to London" or "Jane has gone to London" depend on the tense and aspect of the predications. Tense determines the temporal location of the predication in the past, present, or future of the time of utterance. The aspectual auxiliaries, on the other hand, specify the internal constituency of the event, i.e., whether the event of "going to London" is completed and whether its consequences hold at that time. While tense and aspect are among the most important factors in determining natural language inference, there has been very little work showing whether modern NLP models capture these semantic concepts. In this paper we propose a novel entailment dataset and analyse the ability of a range of recently proposed NLP models to perform inference on temporal predications. We show that the models encode a substantial amount of morphosyntactic information relating to tense and aspect, but fail to model inferences that require reasoning with these semantic properties.
    Comment: accepted at IWCS 2019
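
    As a concrete illustration of this kind of probe, the sketch below feeds premise-hypothesis pairs that vary only in tense and aspect to an off-the-shelf NLI classifier; the model choice (roberta-large-mnli) and the example pairs are assumptions for illustration, not the paper's actual dataset or models.

    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    # Any NLI classifier works here; roberta-large-mnli predicts labels in
    # the order (contradiction, neutral, entailment).
    tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
    model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
    model.eval()

    def entailment_probs(premise, hypothesis):
        """Return [contradiction, neutral, entailment] probabilities."""
        inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        return logits.softmax(dim=-1).squeeze().tolist()

    # Perfect aspect: the consequent state should hold at utterance time.
    print(entailment_probs("Jane has gone to London.", "Jane is in London."))
    # Prospective/progressive aspect: the event is not yet completed, so the
    # same hypothesis should no longer be entailed.
    print(entailment_probs("Jane is going to London.", "Jane is in London."))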