64 research outputs found

    A Survey of Paraphrasing and Textual Entailment Methods

    Paraphrasing methods recognize, generate, or extract phrases, sentences, or longer natural language expressions that convey almost the same information. Textual entailment methods, on the other hand, recognize, generate, or extract pairs of natural language expressions, such that a human who reads (and trusts) the first element of a pair would most likely infer that the other element is also true. Paraphrasing can be seen as bidirectional textual entailment, and methods from the two areas are often similar. Both kinds of methods are useful, at least in principle, in a wide range of natural language processing applications, including question answering, summarization, text generation, and machine translation. We summarize key ideas from the two areas by considering in turn recognition, generation, and extraction methods, also pointing to prominent articles and resources. Comment: Technical Report, Natural Language Processing Group, Department of Informatics, Athens University of Economics and Business, Greece, 201
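    The relation highlighted above, that paraphrase can be viewed as bidirectional textual entailment, can be made concrete with a minimal sketch. The entails function below is a hypothetical stand-in for any of the recognition methods the survey covers (a toy word-overlap heuristic here), not a method from the paper itself.

```python
def entails(text: str, hypothesis: str) -> bool:
    """Hypothetical stand-in for a textual-entailment recognizer: returns True
    if a reader of `text` would most likely infer `hypothesis`. A real system
    might use lexical overlap, logical inference, or a trained classifier."""
    # Toy heuristic for illustration only: word-overlap containment.
    t, h = set(text.lower().split()), set(hypothesis.lower().split())
    return len(h & t) / max(len(h), 1) > 0.8


def is_paraphrase(a: str, b: str) -> bool:
    """Paraphrase seen as bidirectional entailment: a entails b AND b entails a."""
    return entails(a, b) and entails(b, a)


if __name__ == "__main__":
    print(is_paraphrase("the cat sat on the mat", "on the mat the cat sat"))  # True
    print(entails("the cat sat on the mat", "the cat sat"))                   # True (one direction only)
```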

    Recognizing Textual Entailment Using Description Logic And Semantic Relatedness

    Textual entailment (TE) is a relation that holds between two pieces of text where a reader of the first piece can conclude that the second is most likely true. Accurate approaches for textual entailment can benefit various natural language processing (NLP) applications such as question answering, information extraction, summarization, and even machine translation. For this reason, research on textual entailment has attracted a significant amount of attention in recent years. A robust logic-based meaning representation of text is very hard to build, so the majority of textual entailment approaches rely on syntactic methods or shallow semantic alternatives. In addition, approaches that do use a logic-based meaning representation require a large knowledge base of axioms and inference rules that is rarely available. The goal of this thesis is to design an efficient description-logic-based approach for recognizing textual entailment that uses semantic relatedness information as an alternative to a large knowledge base of axioms and inference rules. We propose a description logic and semantic relatedness approach to textual entailment, where the types of semantic relatedness axioms employed in aligning the description logic representations are used as indicators of textual entailment. In our approach, the text and the hypothesis are first represented in description logic. The representations are enriched with additional semantic knowledge acquired by using the web as a corpus. The hypothesis is then merged into the text representation by learning semantic relatedness axioms on demand, and a reasoner is used to reason over the aligned representation. Finally, the types of axioms employed by the reasoner are used to decide whether the text entails the hypothesis. To validate our approach we implemented an RTE system named AORTE and evaluated its performance on the fourth Recognizing Textual Entailment (RTE-4) challenge. Our approach achieved an accuracy of 68.8% on the two-way task and 61.6% on the three-way task, which ranked it 2nd among the participating runs in the challenge. These results show that our description-logic-based approach can effectively be used to recognize textual entailment.
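    A rough sketch of the pipeline described in this abstract: represent text and hypothesis in description logic, align them via semantic relatedness axioms learned on demand, and classify entailment from the kinds of axioms the reasoner needed. All names, the toy DL representation, and the axiom taxonomy below are illustrative placeholders, not the actual AORTE implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto


class AxiomType(Enum):
    """Kinds of relatedness axioms that might align hypothesis terms with text
    terms (a placeholder taxonomy, not AORTE's own)."""
    SYNONYMY = auto()
    HYPERNYMY = auto()
    UNRELATED = auto()


@dataclass(frozen=True)
class DLRepresentation:
    """Greatly simplified stand-in for a description-logic representation:
    just a set of atomic assertions derived from the sentence."""
    assertions: frozenset


def to_description_logic(sentence: str) -> DLRepresentation:
    # Placeholder: a real system would parse the sentence and map it to DL.
    return DLRepresentation(frozenset(sentence.lower().split()))


def learn_relatedness_axioms(text_dl, hyp_dl):
    # Placeholder for learning axioms on demand (e.g. from web counts):
    # here, terms shared by text and hypothesis are treated as synonyms.
    shared = text_dl.assertions & hyp_dl.assertions
    return [(term, AxiomType.SYNONYMY) for term in shared]


def entailment_from_axiom_types(axioms, hyp_dl) -> bool:
    # Decide entailment from the kinds of axioms needed to align the hypothesis:
    # here, every hypothesis term must be aligned by a non-UNRELATED axiom.
    aligned = {term for term, kind in axioms if kind is not AxiomType.UNRELATED}
    return aligned >= set(hyp_dl.assertions)


def recognize_entailment(text: str, hypothesis: str) -> bool:
    text_dl, hyp_dl = to_description_logic(text), to_description_logic(hypothesis)
    axioms = learn_relatedness_axioms(text_dl, hyp_dl)   # alignment step
    return entailment_from_axiom_types(axioms, hyp_dl)   # classify by axiom type


print(recognize_entailment("a man plays a guitar", "a man plays a guitar"))  # True
```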

    A Survey of GPT-3 Family Large Language Models Including ChatGPT and GPT-4

    Large language models (LLMs) are a special class of pretrained language models obtained by scaling model size, pretraining corpus, and computation. Because of their large size and pretraining on large volumes of text data, LLMs exhibit special abilities that allow them to achieve remarkable performance on many natural language processing tasks without any task-specific training. The era of LLMs started with OpenAI's GPT-3 model, and the popularity of LLMs has increased rapidly since the introduction of models like ChatGPT and GPT-4. We refer to GPT-3 and its successor OpenAI models, including ChatGPT and GPT-4, as GPT-3 family large language models (GLLMs). With the ever-rising popularity of GLLMs, especially in the research community, there is a strong need for a comprehensive survey that summarizes recent research progress in multiple dimensions and can guide the research community with insightful future research directions. We start the survey with foundation concepts like transformers, transfer learning, self-supervised learning, pretrained language models, and large language models. We then present a brief overview of GLLMs and discuss their performance in various downstream tasks, specific domains, and multiple languages. We also discuss the data labelling and data augmentation abilities of GLLMs, their robustness, and their effectiveness as evaluators, and finally conclude with multiple insightful future research directions. To summarize, this comprehensive survey will serve as a good resource for both academia and industry to stay updated with the latest research on GPT-3 family large language models. Comment: Preprint under review, 58 pages
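    The zero-shot behaviour referred to above (solving a task with no task-specific training, only a natural-language instruction plus the input) can be sketched as follows. The call_llm function is a hypothetical stand-in for whatever GLLM client or API is used; no actual provider call is shown.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to a GPT-3-family model
    (e.g. via a provider SDK or HTTP API); returns the model's text output."""
    raise NotImplementedError("plug in your LLM client here")


def zero_shot_sentiment(review: str) -> str:
    """Zero-shot classification: no fine-tuning, only an instruction in
    natural language followed by the input to be labelled."""
    prompt = (
        "Classify the sentiment of the following review as Positive or Negative.\n"
        f"Review: {review}\n"
        "Sentiment:"
    )
    return call_llm(prompt).strip()
```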

    Combined distributional and logical semantics

    Understanding natural language sentences requires interpreting words and combining the meanings of words into the meanings of sentences. Despite much work on lexical and compositional semantics individually, existing approaches are unlikely to offer a complete solution. This thesis introduces a new approach, which combines the benefits of distributional lexical semantics and logical compositional semantics. Linguistic theories of compositional semantics have shown how logical forms can be built for sentences, and how to represent semantic operators such as negatives, quantifiers and modals. However, computational implementations of such theories have shown poor performance on applications, mainly due to a reliance on incomplete hand-built ontologies for the meanings of content words. Conversely, distributional semantics has been shown to be effective in learning the representations of content words based on collocations in large unlabelled corpora, but there are major outstanding challenges in representing function words and building representations for sentences. I introduce a new model which captures the main advantages of logical and distributional approaches. The proposal closely follows formal semantics, except for changing the definitions of content words. In traditional formal semantics, each word would express a different symbol. Instead, I allow multiple words to express the same symbol, corresponding to underlying concepts. For example, both the verb write and the noun author can be made to express the same relation. These symbols can be learnt by clustering based on distributional statistics; for example, write and author will share many similar arguments. Crucially, the clustering means that the representations are symbolic, so they can easily be incorporated into standard logical approaches. The simple model proves insufficient, and I develop several extensions. I develop an unsupervised probabilistic model of ambiguity, and show how this model can be built into compositional derivations to produce a distribution over logical forms. The flat clustering approach does not model relations between concepts, for example that buying implies owning. Instead, I show how to build graph structures over the clusters, which allows such inferences. I also explore whether the abstract concepts can be generalized cross-lingually, for example mapping the French verb écrire to the same cluster as the English verb write. The systems developed show good performance on question answering and entailment tasks, and are capable of both sophisticated multi-sentence inferences involving quantifiers and subtle reasoning about lexical semantics. These results show that distributional and formal logical semantics are not mutually exclusive, and that a combined model can be built that captures the advantages of each.
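    The clustering idea described above, under which the verb write and the noun author come to express the same symbol because they share many arguments, can be illustrated with a toy sketch. The argument data, similarity measure, and threshold are invented for illustration and are not the thesis's actual model.

```python
from itertools import combinations

# Toy "distributional" data: predicate -> set of observed (subject, object) argument pairs.
# In practice such statistics would come from parsing a large corpus.
arguments = {
    "write":  {("orwell", "novel"), ("austen", "novel"), ("dickens", "book")},
    "author": {("orwell", "novel"), ("austen", "novel"), ("tolkien", "book")},
    "eat":    {("cat", "fish"), ("dog", "bone")},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

# Greedy single-link clustering: merge predicates whose argument sets overlap enough,
# so that e.g. the verb "write" and the noun "author" express the same symbol.
THRESHOLD = 0.3
cluster_of = {p: {p} for p in arguments}
for p, q in combinations(arguments, 2):
    if jaccard(arguments[p], arguments[q]) >= THRESHOLD:
        merged = cluster_of[p] | cluster_of[q]
        for r in merged:
            cluster_of[r] = merged

print(sorted(map(sorted, {frozenset(c) for c in cluster_of.values()})))
# [['author', 'write'], ['eat']]
```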

    Knowledge Reasoning with Graph Neural Networks

    Knowledge reasoning is the process of drawing conclusions from existing facts and rules, which requires a range of capabilities including, but not limited to, understanding concepts, applying logic, and calibrating or validating architecture based on existing knowledge. With the explosive growth of communication techniques and mobile devices, much of collective human knowledge resides on the Internet today, in unstructured and semi-structured forms such as text, tables, images, and videos. It is overwhelmingly difficult for humans to navigate this gigantic body of Internet knowledge without the help of intelligent systems such as search engines and question answering systems. To serve various information needs, in this thesis we develop methods to perform knowledge reasoning over both structured and unstructured data. This thesis attempts to answer the following research questions on the topic of knowledge reasoning: (1) How to perform multi-hop reasoning over knowledge graphs? How should we leverage graph neural networks to learn graph-aware representations efficiently? And how to systematically handle the noise in human questions? (2) How to combine deep learning and symbolic reasoning in a consistent probabilistic framework? How to make the inference efficient and scalable for large-scale knowledge graphs? Can we strike a balance between the representational power and the simplicity of the model? (3) What is the reasoning pattern of graph neural networks for knowledge-aware QA tasks? Can those elaborately designed GNN modules really perform a complex reasoning process? Are they under- or over-complicated? Can we design a much simpler yet effective model that achieves comparable performance? (4) How to build an open-domain question answering system that can reason over multiple retrieved documents? How to efficiently rank and filter the retrieved documents to reduce the noise for the downstream answer prediction module? How to propagate and assemble information among multiple retrieved documents? (5) How to answer questions that require numerical reasoning over textual passages? How to enable pre-trained language models to perform numerical reasoning? We explored the research questions above and discovered that graph neural networks can be leveraged as a powerful tool for various knowledge reasoning tasks over both structured and unstructured knowledge sources. On structured, graph-based knowledge sources, we build graph neural networks on top of the graph structure to capture topology information for downstream reasoning tasks. On unstructured, text-based knowledge sources, we first identify graph-structured information such as entity co-occurrence and entity-number binding, and then employ graph neural networks to reason over the constructed graphs, working together with pre-trained language models to handle the unstructured part of the knowledge source.
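    A bare-bones sketch of the core operation that the GNN-based reasoners described above build on: one round of message passing over a small knowledge graph, producing graph-aware node representations. The graph, dimensions, and aggregation scheme below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny knowledge graph: entity ids and directed edges (head -> tail),
# e.g. located_in / flows_through relations.
entities = ["paris", "france", "europe", "seine"]
idx = {e: i for i, e in enumerate(entities)}
edges = [("paris", "france"), ("france", "europe"), ("seine", "paris")]

d = 8                                     # embedding dimension
h = rng.normal(size=(len(entities), d))   # initial node features
W = rng.normal(size=(d, d))               # "learnable" weight matrix (fixed here)

# One GNN layer: each node aggregates (mean) messages from its in-neighbours,
# then applies a linear map followed by a ReLU non-linearity.
agg = np.zeros_like(h)
deg = np.zeros(len(entities))
for head, tail in edges:
    agg[idx[tail]] += h[idx[head]]
    deg[idx[tail]] += 1
deg = np.maximum(deg, 1)                           # avoid division by zero for isolated nodes
h_next = np.maximum(0, (agg / deg[:, None]) @ W)   # updated, graph-aware representations

print(h_next.shape)  # (4, 8)
```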

    Getting Past the Language Gap: Innovations in Machine Translation

    In this chapter, we review state-of-the-art machine translation systems and discuss innovative methods for machine translation, highlighting the most promising techniques and applications. Machine translation (MT) has benefited from a revitalization in the last 10 years or so, after a period of relatively slow activity. In 2005 the field received a jumpstart when a powerful, complete experimental package for building MT systems from scratch became freely available as a result of the unified efforts of the MOSES international consortium. Around the same time, hierarchical methods were introduced by Chinese researchers, which allowed syntactic information to be used in translation modeling. Furthermore, advances in the related field of computational linguistics, making off-the-shelf taggers and parsers readily available, helped give MT an additional boost. Yet there is still more progress to be made. For example, MT will be enhanced greatly when both syntax and semantics are on board: this still presents a major challenge, though many advanced research groups are currently pursuing ways to meet it head-on. The next generation of MT will consist of a collection of hybrid systems. This also augurs well for the mobile environment, as we look forward to more advanced speech recognition and speech synthesis technologies that enable speech-to-speech machine translation on hand-held devices. We review all of these developments and point out, in the final section, some of the most promising research avenues for the future of MT.

    Pseudo-contractions as Gentle Repairs

    Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed over the past decades with numerous ideas in common but with little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
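    The repair problem described above can be sketched naively as follows: retract a smallest set of sentences so that the unwanted consequence is no longer entailed. Propositional forward chaining stands in for the description-logic reasoning the paper addresses, and the deletion step is where a pseudo-contraction or gentle repair would instead weaken the sentences; everything here is illustrative.

```python
from itertools import combinations

# Toy Horn knowledge base: facts are strings, rules are (body, head) pairs.
facts = {"penguin(tweety)"}
rules = [({"penguin(tweety)"}, "bird(tweety)"),
         ({"bird(tweety)"}, "flies(tweety)")]

def entails(facts, rules, query):
    """Forward chaining over Horn rules (stands in for a DL reasoner)."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= known and head not in known:
                known.add(head)
                changed = True
    return query in known

def naive_repair(facts, rules, unwanted):
    """Find a smallest set of sentences to retract so that `unwanted` is no
    longer entailed. A gentle repair / pseudo-contraction would weaken these
    sentences instead of deleting them outright (placeholder behaviour here)."""
    kb = list(facts) + list(rules)
    for k in range(len(kb) + 1):
        for removed in combinations(kb, k):
            kept = [s for s in kb if s not in removed]
            new_facts = {s for s in kept if isinstance(s, str)}
            new_rules = [s for s in kept if not isinstance(s, str)]
            if not entails(new_facts, new_rules, unwanted):
                return removed
    return ()

print(naive_repair(facts, rules, "flies(tweety)"))
# one minimal retraction, e.g. ('penguin(tweety)',)
```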

    Metafictional anaphora: A comparison of different accounts
