
    A Survey of Paraphrasing and Textual Entailment Methods

    Paraphrasing methods recognize, generate, or extract phrases, sentences, or longer natural language expressions that convey almost the same information. Textual entailment methods, on the other hand, recognize, generate, or extract pairs of natural language expressions such that a human who reads (and trusts) the first element of a pair would most likely infer that the other element is also true. Paraphrasing can be seen as bidirectional textual entailment, and methods from the two areas are often similar. Both kinds of methods are useful, at least in principle, in a wide range of natural language processing applications, including question answering, summarization, text generation, and machine translation. We summarize key ideas from the two areas by considering in turn recognition, generation, and extraction methods, also pointing to prominent articles and resources. (Technical Report, Natural Language Processing Group, Department of Informatics, Athens University of Economics and Business, Greece, 201)
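
    To make the bidirectional-entailment view concrete, here is a minimal sketch using an off-the-shelf NLI model (the roberta-large-mnli checkpoint via Hugging Face transformers is assumed available); it is an illustration, not the survey's own method:

        import torch
        from transformers import AutoModelForSequenceClassification, AutoTokenizer

        # Off-the-shelf NLI model; this checkpoint's labels are
        # CONTRADICTION / NEUTRAL / ENTAILMENT.
        tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
        model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

        def entails(premise: str, hypothesis: str) -> bool:
            inputs = tok(premise, hypothesis, return_tensors="pt")
            with torch.no_grad():
                logits = model(**inputs).logits
            return model.config.id2label[int(logits.argmax())] == "ENTAILMENT"

        def is_paraphrase(a: str, b: str) -> bool:
            # Paraphrase = textual entailment in both directions.
            return entails(a, b) and entails(b, a)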

    Extraction of Natural-Language Dates and Comparison of Dates in Hypothesis and Text to Identify Negative Textual Entailment

    We present a system to determine entailment: whether one sentence (the text) implies a second sentence (the hypothesis). While some systems use temporal information to decide entailment, no study has measured the effectiveness of temporal features alone in entailment resolution. The system identifies natural-language dates and precludes entailment solely by comparing dates in the hypothesis and text. Evaluation of date detection on three 800-pair corpora from the Recognising Textual Entailment (RTE) Challenges yielded precision of 98.0% (RTE3devmt, 338/345), 96.6% (RTE3test, 198/205), and 99.7% (RTE1test, 336/337), and recall of 97.7% (RTE3devmt, 338/346), 99.0% (RTE3test, 198/200), and 99.1% (RTE1devmt, 336/339). For sentence pairs with years, the proposed method improved entailment accuracy from 33/72 to 42/72 (RTE3devmt) and from 42/63 to 44/63 (RTE3test), which corresponds to an overall improvement of 1.1% (RTE3devmt) and 0.1% (RTE3test). Our analysis suggests that matching temporal information with an event would further increase entailment accuracy.
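
    The core date-comparison rule can be sketched as follows; this is a simplified illustration (a bare year matcher standing in for the paper's full natural-language date extractor), not the authors' implementation:

        import re

        YEAR = re.compile(r"\b(1[1-9]\d{2}|20\d{2})\b")  # crude four-digit-year matcher

        def years(sentence: str) -> set:
            return set(YEAR.findall(sentence))

        def entailment_precluded(text: str, hypothesis: str) -> bool:
            """Illustrative rule: preclude entailment when both sentences
            mention years and the two sets of years share no member."""
            t, h = years(text), years(hypothesis)
            return bool(t) and bool(h) and t.isdisjoint(h)

        # entailment_precluded("The treaty was signed in 1987.",
        #                      "The treaty was signed in 1992.")  -> True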

    Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning

    Parameter-efficient fine-tuning (PEFT) of pre-trained language models (PLMs) has emerged as a highly successful approach: it trains only a small number of parameters without sacrificing performance, and it has become the de facto learning paradigm as PLMs grow in size. However, existing PEFT methods are not memory-efficient, because they still require caching most of the intermediate activations for the gradient calculation, akin to full fine-tuning. One effective way to reduce activation memory is to apply a reversible model, so that intermediate activations need not be cached and can instead be recomputed. Nevertheless, modifying a PLM into its reversible variant with PEFT is not straightforward, since a reversible model has a distinct architecture from the currently released PLMs. In this paper, we first investigate a key factor in the success of existing PEFT methods and find that it is essential to preserve the PLM's starting point when initializing a PEFT method. With this finding, we propose memory-efficient fine-tuning (MEFT), which inserts adapters into a PLM in a way that preserves the PLM's starting point and makes the model reversible without additional pre-training. We evaluate MEFT on the GLUE benchmark and five question-answering tasks with various backbones: BERT, RoBERTa, BART, and OPT. MEFT significantly reduces activation memory, by up to 84% compared with full fine-tuning, with a negligible number of trainable parameters. Moreover, MEFT achieves the same score on GLUE and a comparable score on the question-answering tasks as full fine-tuning. (Code at https://github.com/BaohaoLiao/meft)
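
    The reversibility idea that enables recomputation instead of caching can be sketched with a toy RevNet-style coupling (using PyTorch); MEFT's actual adapter placement differs, so treat this as an illustration of the principle only:

        import torch.nn as nn

        class ReversibleCouple(nn.Module):
            """Toy reversible coupling: the outputs determine the inputs
            exactly, so intermediate activations can be recomputed during
            the backward pass instead of being cached."""

            def __init__(self, f: nn.Module, g: nn.Module):
                super().__init__()
                self.f, self.g = f, g

            def forward(self, x1, x2):
                y1 = x1 + self.f(x2)
                y2 = x2 + self.g(y1)
                return y1, y2

            def inverse(self, y1, y2):
                # Recover the inputs from the outputs (no cached activations).
                x2 = y2 - self.g(y1)
                x1 = y1 - self.f(x2)
                return x1, x2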

    Recognizing Textual Entailment Using Description Logic and Semantic Relatedness

    Textual entailment (TE) is a relation that holds between two pieces of text where someone reading the first piece can conclude that the second is most likely true. Accurate approaches to textual entailment can benefit various natural language processing (NLP) applications such as question answering, information extraction, summarization, and even machine translation. For this reason, research on textual entailment has attracted a significant amount of attention in recent years. A robust logic-based meaning representation of text is very hard to build, so the majority of textual entailment approaches rely on syntactic methods or shallow semantic alternatives. In addition, approaches that do use a logic-based meaning representation require a large knowledge base of axioms and inference rules that is rarely available. The goal of this thesis is to design an efficient description-logic-based approach to recognizing textual entailment that uses semantic relatedness information as an alternative to a large knowledge base of axioms and inference rules. We propose a description logic and semantic relatedness approach to textual entailment in which the types of semantic relatedness axioms employed in aligning the description logic representations are used as indicators of textual entailment. In our approach, the text and the hypothesis are first represented in description logic. The representations are enriched with additional semantic knowledge acquired by using the web as a corpus. The hypothesis is then merged into the text representation by learning semantic relatedness axioms on demand, and a reasoner is used to reason over the aligned representation. Finally, the types of axioms employed by the reasoner are used to decide whether the text entails the hypothesis. To validate our approach, we implemented an RTE system named AORTE and evaluated its performance on the fourth Recognizing Textual Entailment (RTE) challenge. Our approach achieved an accuracy of 68.8% on the two-way task and 61.6% on the three-way task, which ranked 2nd among the participating runs in that challenge. These results show that our description-logic-based approach can effectively be used to recognize textual entailment.
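
    The idea of using the type of relatedness axiom as an entailment signal can be illustrated with a crude WordNet stand-in (assumes NLTK with the WordNet corpus downloaded); AORTE's actual axioms are learned from the web and operate over description logic representations, not bare words:

        from nltk.corpus import wordnet as wn  # requires: nltk.download("wordnet")

        def alignment_relation(h_word: str, t_word: str):
            """Report which WordNet relation (if any) links a hypothesis
            term to a text term; the relation type then serves as an
            entailment indicator. Direct hypernyms only, for brevity."""
            for h in wn.synsets(h_word):
                for t in wn.synsets(t_word):
                    if h == t:
                        return "synonym"
                    if h in t.hypernyms():
                        return "hypernym"  # hypothesis term generalizes text term
                    if t in h.hypernyms():
                        return "hyponym"   # hypothesis term is more specific
            return None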

    Automatic Identification of Paraphrases

    Automatic paraphrase discovery is an important task in natural language processing. Many systems use paraphrases to improve performance, e.g., systems for question answering, information retrieval, or document summarization. In this thesis, we first explain the key concepts of the area, such as paraphrases and paraphrase patterns. We then survey approaches to acquiring paraphrases from various resources. Subsequently, we propose an unsupervised method for discovering paraphrases in large plain-text corpora, based on the contexts and keywords occurring between pairs of named entities (NEs). Finally, we describe evaluation methods used in the paraphrase discovery area, evaluate our system, and compare it with similar systems.
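
    The first step of such an approach might look like the following toy sketch, which groups the text found between entity pairs; a capitalized-word pattern stands in for a real NE tagger, and the clustering of contexts into paraphrases is omitted:

        import re
        from collections import defaultdict

        NE = re.compile(r"[A-Z][a-z]+(?: [A-Z][a-z]+)*")  # crude stand-in for an NE tagger

        def contexts_by_pair(sentences):
            """Collect the words found between each named-entity pair;
            distinct contexts recurring for the same pairs are
            paraphrase candidates."""
            table = defaultdict(list)
            for s in sentences:
                ents = NE.findall(s)
                if len(ents) < 2:
                    continue
                e1, e2 = ents[0], ents[1]
                between = s.split(e1, 1)[1].split(e2, 1)[0].strip(" ,")
                if between:
                    table[(e1, e2)].append(between)
            return table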

    Word meaning in context: a probabilistic model and its application to question answering

    The need for assessing similarity in meaning is central to most language technology applications. Distributional methods are robust, unsupervised methods that achieve high performance on this task. These methods measure the similarity of word types solely on the basis of patterns of word occurrence in large corpora, following the intuition that similar words occur in similar contexts. As most Natural Language Processing (NLP) applications deal with disambiguated words, i.e., words occurring in context, rather than word types, the question of adapting distributional methods to compute sense-specific or context-sensitive similarities has gained increasing attention in recent work. This thesis focuses on the development and application of distributional methods for context-sensitive similarity. The contribution is twofold: the main part of the thesis proposes and tests a new framework for computing similarity in context, while the second part investigates the application of distributional paraphrasing to the task of question answering.
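
    For intuition, a much simpler vector-composition baseline (not the thesis's probabilistic model) shows what "similarity in context" means operationally, assuming pre-trained word vectors are available as NumPy arrays:

        import numpy as np

        def cosine(u, v):
            return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

        def contextualized(word_vec, context_vecs):
            """Naive contextualization: bias a word's type-level vector
            toward the centroid of its sentence context."""
            return (word_vec + np.mean(context_vecs, axis=0)) / 2.0

        # Similarity of two words *in their respective contexts*:
        #   cosine(contextualized(v_a, ctx_a), contextualized(v_b, ctx_b))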

    The Detection of Contradictory Claims in Biomedical Abstracts

    Research claims in the biomedical domain are not always consistent and may even be contradictory. This thesis explores contradictions between research claims in order to determine whether it is possible to automate the detection of such phenomena. Such a solution would help decision-makers, including researchers, to alleviate the effects of contradictory claims on their decisions. The study develops two methodologies for constructing corpora of contradictions. The first uses systematic reviews to construct a manually annotated corpus of contradictions. The second constructs a corpus of contradictions without relying on human annotation; it is proposed to overcome the limitations of the manual annotation approach. Moreover, the thesis proposes a pipeline to detect contradictions in abstracts. The pipeline takes a question and a list of research abstracts that may contain answers to it. Its output is a list of sentences extracted from the abstracts that answer the question, where each sentence is annotated with an assertion value with respect to the question. Claims with opposing assertion values are considered potentially contradictory. The research demonstrates that automating the detection of contradictory claims in research abstracts is feasible.
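
    The pipeline's final step can be pictured with a short sketch; the 'yes'/'no' assertion labels are hypothetical placeholders for whatever assertion values the upstream annotator produces:

        from itertools import combinations

        def potential_contradictions(claims):
            """claims: (sentence, assertion) pairs answering the same
            question, e.g. assertion in {'yes', 'no'}. Pairs of claims
            with opposing assertion values are flagged as potentially
            contradictory."""
            return [(a, b)
                    for (a, sa), (b, sb) in combinations(claims, 2)
                    if sa != sb]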