322 research outputs found

    A large annotated corpus for learning natural language inference

    Understanding entailment and contradiction is fundamental to understanding natural language, and inference about entailment and contradiction is a valuable testing ground for the development of semantic representations. However, machine learning research in this area has been dramatically limited by the lack of large-scale resources. To address this, we introduce the Stanford Natural Language Inference corpus, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning. At 570K pairs, it is two orders of magnitude larger than all other resources of its type. This increase in scale allows lexicalized classifiers to outperform some sophisticated existing entailment models, and it allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.
    Comment: To appear at EMNLP 2015. The data will be posted shortly before the conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli
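
    The "lexicalized classifiers" mentioned in the abstract are feature-based linear models over word pairs drawn from the two sentences. A minimal sketch of that idea in Python with scikit-learn follows; the toy pairs and the cross-unigram feature template are illustrative assumptions, not the paper's exact system.

        # Sketch of a lexicalized entailment classifier: cross-unigram
        # indicator features fed to a linear model. Toy data stands in for
        # SNLI rows; an assumed illustration, not the authors' code.
        from sklearn.feature_extraction import DictVectorizer
        from sklearn.linear_model import LogisticRegression

        def cross_unigram_features(premise, hypothesis):
            """Indicator features over (premise word, hypothesis word) pairs."""
            return {f"x={p}|{h}": 1.0
                    for p in premise.lower().split()
                    for h in hypothesis.lower().split()}

        pairs = [  # (premise, hypothesis, gold label) -- toy stand-ins
            ("a man is playing a guitar", "a man is making music", "entailment"),
            ("a man is playing a guitar", "a man is sleeping", "contradiction"),
            ("a man is playing a guitar", "a man is outdoors", "neutral"),
        ]

        vec = DictVectorizer()
        X = vec.fit_transform(cross_unigram_features(p, h) for p, h, _ in pairs)
        clf = LogisticRegression(max_iter=1000).fit(X, [lab for _, _, lab in pairs])
        test = cross_unigram_features("a man is playing a guitar",
                                      "a man is making music")
        print(clf.predict(vec.transform([test])))  # expect: ['entailment']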

    A Natural Proof System for Natural Language


    A Survey of Paraphrasing and Textual Entailment Methods

    Paraphrasing methods recognize, generate, or extract phrases, sentences, or longer natural language expressions that convey almost the same information. Textual entailment methods, on the other hand, recognize, generate, or extract pairs of natural language expressions, such that a human who reads (and trusts) the first element of a pair would most likely infer that the other element is also true. Paraphrasing can be seen as bidirectional textual entailment and methods from the two areas are often similar. Both kinds of methods are useful, at least in principle, in a wide range of natural language processing applications, including question answering, summarization, text generation, and machine translation. We summarize key ideas from the two areas by considering in turn recognition, generation, and extraction methods, also pointing to prominent articles and resources.
    Comment: Technical Report, Natural Language Processing Group, Department of Informatics, Athens University of Economics and Business, Greece, 201
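
    The survey's observation that paraphrasing is bidirectional textual entailment has a direct operational form: two expressions are paraphrases when each entails the other. A minimal sketch follows; the entails predicate is a hypothetical stand-in for any RTE system, and the token-overlap rule inside it is only a crude placeholder.

        # Paraphrase as bidirectional entailment, per the survey's framing.
        def entails(premise, hypothesis, threshold=0.8):
            """Placeholder RTE decision: hypothesis-token coverage."""
            p = set(premise.lower().split())
            h = set(hypothesis.lower().split())
            return len(p & h) / max(len(h), 1) >= threshold

        def is_paraphrase(a, b):
            # Paraphrase == entailment holds in both directions.
            return entails(a, b) and entails(b, a)

        print(is_paraphrase("the cat sat on the mat",
                            "on the mat the cat sat"))  # True
        print(is_paraphrase("the cat sat on the mat",
                            "the cat sat"))             # False (one direction only)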

    Analysis of Identifying Linguistic Phenomena for Recognizing Inference in Text

    Recognizing Textual Entailment (RTE) is a task in which two text fragments are processed by a system to determine whether the meaning of the hypothesis is entailed by the other text or not. Although a considerable number of studies have been made on recognizing textual entailment, little is known about the power of linguistic phenomena for recognizing inference in text. The objective of this paper is to provide a comprehensive analysis of identifying linguistic phenomena for recognizing inference in text (RITE). We focus on the RITE-VAL System Validation subtask and propose a model based on an analysis of identified linguistic phenomena, using the development dataset of the NTCIR-11 RITE-VAL subtask. The experimental results suggest that well-identified linguistic phenomenon categories can enhance the accuracy of a textual entailment system. (IEEE international conference, 13-15 August 2014, San Francisco, California, US)
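
    The abstract does not spell out the model, so the following sketch only illustrates the general shape of phenomenon-aware entailment validation: detect a linguistic phenomenon category for a text pair, then apply a category-specific check. The categories and rules below are assumptions for illustration, not the paper's inventory.

        # Phenomenon-aware validation sketch (categories and rules are toy ones).
        def detect_phenomenon(text, hypothesis):
            """Assign the pair a (toy) linguistic phenomenon category."""
            if hypothesis.lower() in text.lower():
                return "clause_deletion"
            if set(text.lower().split()) == set(hypothesis.lower().split()):
                return "scrambling"
            return "other"

        CHECKS = {
            "clause_deletion": lambda t, h: True,   # dropping material preserves truth
            "scrambling":      lambda t, h: True,   # reordering preserves truth
            "other":           lambda t, h: False,  # unknown phenomenon: abstain
        }

        def validate(text, hypothesis):
            return CHECKS[detect_phenomenon(text, hypothesis)](text, hypothesis)

        print(validate("John bought a car and drove home", "drove home"))  # True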

    Benchmarking for syntax-based sentential inference

    We propose a methodology for investigating how well NLP systems handle meaning-preserving syntactic variations. We start by presenting a method for the semi-automated creation of a benchmark where entailment is mediated solely by meaning-preserving syntactic variations. We then use this benchmark to compare a semantic role labeller and two grammar-based RTE systems. We argue that the proposed methodology (i) supports a modular evaluation of the ability of NLP systems to handle the syntax/semantics interface and (ii) permits focused error mining and error analysis.
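
    A concrete example of the kind of pair such a benchmark contains, where the entailment is carried entirely by a syntactic alternation: the sketch below generates active/passive pairs from a template. The template is an assumed illustration; the paper's semi-automated creation method is richer.

        # Toy generator for entailment pairs mediated purely by syntax
        # (here: passivization). An assumed illustration of the benchmark idea.
        def active_passive_pair(subj, verb_past, obj):
            premise = f"{subj} {verb_past} {obj}"
            hypothesis = f"{obj} was {verb_past} by {subj}"
            return premise, hypothesis

        p, h = active_passive_pair("the committee", "approved", "the proposal")
        print(p)  # the committee approved the proposal
        print(h)  # the proposal was approved by the committee
        # A system that handles the syntax/semantics interface should label
        # this pair "entailment" without any lexical inference.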

    Linguistic redundancy in Twitter

    In the last few years, the research community's interest in micro-blogs and social media services such as Twitter has grown exponentially. Yet, so far little attention has been paid to a key characteristic of micro-blogs: the high level of information redundancy. The aim of this paper is to approach this problem systematically by providing an operational definition of redundancy. We cast redundancy in the framework of Textual Entailment Recognition. We also provide quantitative evidence of the pervasiveness of redundancy in Twitter and describe a dataset of redundancy-annotated tweets. Finally, we present a general-purpose system for identifying redundant tweets. An extensive quantitative evaluation shows that our system successfully addresses the redundancy detection task, improving over baseline systems with statistical significance.
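
    Casting redundancy as textual entailment suggests a simple de-duplication loop: an incoming tweet is redundant when some already-kept tweet entails it. A minimal sketch follows; the entails placeholder (a Jaccard-overlap rule) is an assumption standing in for the paper's entailment-based system.

        # Stream de-duplication via entailment: drop a tweet when a previously
        # kept tweet already entails it. The overlap rule is a toy stand-in.
        def entails(t1, t2, threshold=0.7):
            a = set(t1.lower().split())
            b = set(t2.lower().split())
            return len(a & b) / max(len(a | b), 1) >= threshold

        def filter_redundant(tweets):
            kept = []
            for tweet in tweets:
                if not any(entails(prev, tweet) for prev in kept):
                    kept.append(tweet)
            return kept

        stream = [
            "earthquake hits the city center this morning",
            "this morning an earthquake hits the city center",
            "local team wins the championship",
        ]
        print(filter_redundant(stream))  # near-duplicate second tweet dropped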