
    Defining Textual Entailment

    Textual entailment is a relationship that obtains between fragments of text when one fragment in some sense implies the other. The automation of textual entailment recognition supports a wide variety of text-based tasks, including information retrieval, information extraction, question answering, text summarization, and machine translation. Much ingenuity has been devoted to developing algorithms for identifying textual entailments, but relatively little to saying what textual entailment actually is. This article is a review of the logical and philosophical issues involved in providing an adequate definition of textual entailment. We show that many natural definitions of textual entailment are refuted by counterexamples, including the most widely cited definition of Dagan et al. We then articulate and defend the following revised definition: T textually entails H =df typically, a human reading T would be justified in inferring the proposition expressed by H from the proposition expressed by T. We also show that textual entailment is context-sensitive, nontransitive, and nonmonotonic.
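
    The article's claims can be made concrete with a small sketch. The following Python example is purely illustrative: the Pair structure and the sample sentences are stand-ins of my own for the standard text/hypothesis pair format, and the second pair illustrates the nonmonotonicity claim, where strengthening T blocks an inference that previously went through.

```python
# Illustrative sketch of the text/hypothesis pair format and of
# nonmonotonicity: adding information to T can destroy an entailment.
# The sample sentences are my own, not from the article.
from dataclasses import dataclass

@dataclass
class Pair:
    text: str        # T: the entailing fragment
    hypothesis: str  # H: the (possibly) entailed fragment
    entails: bool    # gold judgment by a typical human reader

p1 = Pair(
    text="A bird is sitting on the fence.",
    hypothesis="An animal is on the fence.",
    entails=True,
)

# Strengthening T with extra content blocks the inference (nonmonotonicity):
p2 = Pair(
    text="A toy bird is sitting on the fence.",
    hypothesis="An animal is on the fence.",
    entails=False,
)

for p in (p1, p2):
    print(f"T={p.text!r} H={p.hypothesis!r} -> entails={p.entails}")
```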

    Continuous Prompt Tuning Based Textual Entailment Model for E-commerce Entity Typing

    The explosion of e-commerce has created a need for processing and analyzing product titles, such as entity typing in product titles. However, the rapid pace of e-commerce activity leads to a constant stream of new entities that general entity typing methods handle poorly. In addition, product titles in e-commerce have language styles very different from text in the general domain. To handle new entities in product titles and address the distinctive language style of the e-commerce domain, we propose a textual entailment model with continuous-prompt-tuned hypotheses and fusion embeddings for e-commerce entity typing. First, we reformulate the entity typing task as a textual entailment problem in order to handle new entities that are not present during training. Second, we design a model that automatically generates textual entailment hypotheses using continuous prompt tuning, which produces better hypotheses without manual template design. Third, we fuse BERT embeddings and CharacterBERT embeddings and feed them to a two-layer MLP classifier to cope with the difference between the language style of e-commerce product titles and that of the general domain. To analyze the effect of each contribution, we compare the performance of the entity typing and textual entailment models and conduct ablation studies on continuous prompt tuning and fusion embeddings. We also evaluate the impact of different prompt template initializations on continuous prompt tuning. Our proposed model improves the average F1 score by around 2% over the baseline BERT entity typing model.
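
    As a rough sketch of the two main ideas, assuming nothing about the authors' actual implementation: the snippet below recasts entity typing as entailment by pairing a product title with a per-type hypothesis, and fuses two encoders' [CLS] vectors through a two-layer MLP. All names and dimensions are placeholders, and the discrete template shown stands in for the learned continuous prompt vectors described in the abstract.

```python
# Sketch under my own assumptions: entailment-style entity typing with a
# fusion of two [CLS] embeddings classified by a two-layer MLP.
import torch
import torch.nn as nn

class FusionEntailmentTyper(nn.Module):
    def __init__(self, dim_a: int = 768, dim_b: int = 768, hidden: int = 512):
        super().__init__()
        # Two-layer MLP over the concatenated fusion embedding -> entail/not.
        self.mlp = nn.Sequential(
            nn.Linear(dim_a + dim_b, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, cls_bert: torch.Tensor, cls_charbert: torch.Tensor):
        fused = torch.cat([cls_bert, cls_charbert], dim=-1)
        return self.mlp(fused)  # logits: [not-entailed, entailed]

def hypothesis(entity: str, type_name: str) -> str:
    # Fixed-template stand-in; the paper instead learns the hypothesis
    # through continuous prompt tuning rather than a manual string.
    return f"{entity} is a {type_name}."

model = FusionEntailmentTyper()
# Stand-ins for [CLS] embeddings of (title, hypothesis) from two encoders.
logits = model(torch.randn(1, 768), torch.randn(1, 768))
print(hypothesis("AirPods Pro", "electronics product"), logits.shape)
```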

    A Hybrid Siamese Neural Network for Natural Language Inference in Cyber-Physical Systems

    Cyber-Physical Systems (CPS), multi-dimensional complex systems that connect the physical world with the cyber world, have a strong demand for processing large amounts of heterogeneous data. These tasks include Natural Language Inference (NLI) over text from different sources. However, current research on natural language processing in CPS has not explored this field. This study therefore proposes a Siamese network structure that combines stacked residual bidirectional Long Short-Term Memory with an attention mechanism and a Capsule Network for the NLI module in CPS, used to infer the relationship between text/language data from different sources. The model implements NLI tasks as the basic semantic understanding module in CPS and is evaluated in detail on three main NLI benchmarks. Comparative experiments show that the proposed method achieves competitive performance, generalizes reasonably well, and balances performance against the number of trained parameters.
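
    A simplified sketch of such a Siamese structure, under my own assumptions about sizes and omitting the capsule component for brevity: both inputs share one encoder (stacked residual BiLSTMs with attention pooling), and the two sentence vectors are combined with standard matching features for a three-way NLI decision.

```python
# Simplified Siamese NLI sketch: shared stacked residual BiLSTM encoder
# with attention pooling; capsule component omitted; sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEncoder(nn.Module):
    def __init__(self, dim: int = 300, layers: int = 2):
        super().__init__()
        # Each BiLSTM's bidirectional output has the same width as its
        # input, so a residual connection can be added directly.
        self.lstms = nn.ModuleList(
            nn.LSTM(dim, dim // 2, bidirectional=True, batch_first=True)
            for _ in range(layers)
        )
        self.att = nn.Linear(dim, 1)  # additive attention pooling

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for lstm in self.lstms:
            out, _ = lstm(x)
            x = x + out  # residual connection across each BiLSTM layer
        w = F.softmax(self.att(x), dim=1)  # (batch, time, 1)
        return (w * x).sum(dim=1)          # attention-pooled sentence vector

class SiameseNLI(nn.Module):
    def __init__(self, dim: int = 300):
        super().__init__()
        self.enc = SharedEncoder(dim)      # weights shared by both sides
        self.cls = nn.Linear(dim * 4, 3)   # entail / neutral / contradict

    def forward(self, premise, hypothesis):
        p, h = self.enc(premise), self.enc(hypothesis)
        # Standard matching features: concat, difference, product.
        feats = torch.cat([p, h, torch.abs(p - h), p * h], dim=-1)
        return self.cls(feats)

model = SiameseNLI()
logits = model(torch.randn(2, 12, 300), torch.randn(2, 9, 300))
print(logits.shape)  # (2, 3)
```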

    A French Version of the FraCaS Resource

    This paper presents a French version of the FraCaS test suite. This test suite, originally written in English, contains problems illustrating semantic inference in natural language. We describe the linguistic choices we had to make when translating the FraCaS test suite into French and discuss some of the issues raised by the translation. We also report an experiment we ran in order to test both the translation and the logical semantics underlying the problems of the test suite. This provides a way of checking formal semanticists' hypotheses against the actual semantic capacities of speakers (in the present case, French speakers), and allows us to compare our results with those of similar experiments conducted for other languages.
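
    For readers unfamiliar with the resource, a FraCaS-style problem pairs one or more premises with a hypothesis (the declarative form of a question) and a gold answer in {yes, no, unknown}. The minimal sketch below shows one way such a problem can be represented; the sample text is illustrative, not quoted from the suite.

```python
# Minimal, illustrative representation of a FraCaS-style inference problem;
# the sample sentences are my own, not taken from the test suite.
from dataclasses import dataclass

@dataclass
class FraCaSProblem:
    premises: list[str]
    hypothesis: str
    answer: str  # "yes" (entailed), "no" (contradicted), or "unknown"

problem = FraCaSProblem(
    premises=["Some Italian tenors are great."],
    hypothesis="There are great tenors.",
    answer="yes",
)
print(problem)
```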

    Cross-Lingual Textual Entailment and Applications

    Textual Entailment (TE) has been proposed as a generic framework for modeling language variability. The great potential of integrating (monolingual) TE recognition components into NLP architectures has been reported in several areas, such as question answering, information retrieval, information extraction and document summarization. Mainly due to the absence of cross-lingual TE (CLTE) recognition components, similar improvements have not yet been achieved in any corresponding cross-lingual application. In this thesis, we propose and investigate Cross-Lingual Textual Entailment (CLTE) as a semantic relation between two text portions in different languages. We present different practical solutions to approach this problem by i) bringing CLTE back to the monolingual scenario, translating the two texts into the same language; and ii) integrating machine translation and TE algorithms and techniques. We argue that CLTE can be a core technology for several cross-lingual NLP applications and tasks. Experiments on different datasets and two interesting cross-lingual NLP applications, namely content synchronization and machine translation evaluation, confirm the effectiveness of our approaches, leading to successful results. As a complement to the research on the algorithmic side, we successfully explored the creation of cross-lingual textual entailment corpora by means of crowdsourcing, as a cheap and replicable data collection methodology that minimizes the manual work done by expert annotators.
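
    Strategy i) above, the pivot approach, is easy to sketch: translate T into the language of H, then reuse any monolingual TE component unchanged. In the toy example below, the identity "translator" and the lexical-overlap "TE engine" are stand-ins of my own, not techniques from the thesis.

```python
# Pivot sketch for CLTE: reduce to the monolingual case by translating T
# into the language of H. The translator and TE engine are toy stand-ins.
def translate(text: str, src: str, tgt: str) -> str:
    # Identity stand-in; plug in any real MT system here.
    return text

def entails_monolingual(text: str, hypothesis: str) -> bool:
    # Toy baseline: most hypothesis tokens are covered by the text.
    t, h = set(text.lower().split()), set(hypothesis.lower().split())
    return len(h & t) / max(len(h), 1) >= 0.7

def clte(t: str, t_lang: str, h: str, h_lang: str) -> bool:
    # Bring T into the hypothesis language, then reuse monolingual TE.
    return entails_monolingual(translate(t, src=t_lang, tgt=h_lang), h)

print(clte("a cat sleeps on the mat", "en", "a cat sleeps", "en"))
```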

    Re-BERT OQA: An Open-Domain Question-Answering System

    In this thesis, we tackle the open-domain question-answering task, whose goal is to answer a question using a knowledge source (either structured, like DBpedia, or unstructured, such as Wikipedia) as the only resource. Specifically, our goal is to propose an open-domain question-answering system capable of answering factoid questions using Wikipedia as its knowledge source. In general, these systems are divided into two modules. The first, responsible for the information retrieval step, enables the system to find relevant documents in its knowledge source. The second, the answer extraction module, extracts answer candidates from the previously selected documents and then determines the final answer among the candidates. In recent years, progress in machine reading comprehension has driven the development of improved answer extraction modules, resulting in the best open-domain question-answering systems to date.
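
    The two-module pipeline described above can be sketched schematically as a retriever followed by a reader. The word-overlap retriever and first-sentence "reader" below are generic toy stand-ins, not the thesis model; a real reader would score candidate spans with a machine reading comprehension model and aggregate scores across documents.

```python
# Schematic retriever-reader pipeline; both components are toy stand-ins.
from collections import Counter

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    # Toy retriever: rank documents by word overlap with the question.
    q = Counter(question.lower().split())
    ranked = sorted(corpus, key=lambda d: -sum(q[w] for w in d.lower().split()))
    return ranked[:k]

def extract_answer(question: str, documents: list[str]) -> str:
    # Toy reader: return the first sentence of the top-ranked document.
    # A real reader scores candidate spans with an MRC model instead.
    return documents[0].split(".")[0]

corpus = [
    "Paris is the capital of France. It is on the Seine.",
    "Berlin is the capital of Germany.",
]
docs = retrieve("What is the capital of France?", corpus)
print(extract_answer("What is the capital of France?", docs))
```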