
    Generating ellipsis using discourse structures

    This article describes an effort to generate elliptical sentences, using Dependency Trees connected by Discourse Relations as input. We contend that the process of syntactic aggregation should be performed in the Surface Realization stage of the language generation process, and that Dependency Trees with Rhetorical Relations are excellent input for a generation system that has to generate ellipsis. We also propose a taxonomy of the most common Dutch cue words, grouped according to the kind of discourse relations they signal.

    Abstract syntax as interlingua: Scaling up the grammatical framework from controlled languages to robust pipelines

    Abstract syntax is an interlingual representation used in compilers. Grammatical Framework (GF) applies the abstract syntax idea to natural languages. The development of GF started in 1998, first as a tool for controlled language implementations, where it has gained an established position in both academic and commercial projects. GF provides grammar resources for over 40 languages, enabling accurate generation and translation, as well as grammar engineering tools and components for mobile and Web applications. On the research side, the focus in the last ten years has been on scaling up GF to wide-coverage language processing. The concept of abstract syntax offers a unified view on many other approaches: Universal Dependencies, WordNets, FrameNets, Construction Grammars, and Abstract Meaning Representations. This makes it possible for GF to utilize data from these other approaches and to build robust pipelines. In return, GF can contribute to data-driven approaches by methods to transfer resources from one language to others, to augment data by rule-based generation, to check the consistency of hand-annotated corpora, and to pipe analyses into high-precision semantic back ends. This article gives an overview of the use of abstract syntax as interlingua through both established and emerging NLP applications involving GF.
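
    The core idea, a shared abstract syntax tree with one linearization per language, can be illustrated with a small sketch. The sketch below is written in plain Python rather than GF's own grammar notation, and the toy grammar and lexicon are invented for illustration; real GF resource grammars are far richer.

        # Conceptual sketch of "abstract syntax as interlingua": one abstract tree,
        # one linearization function per language. Toy data, not GF's actual API.
        from dataclasses import dataclass

        @dataclass
        class Pred:          # abstract syntax node: subject + intransitive verb
            subj: str        # abstract lexical entry, e.g. "cat_N"
            verb: str        # e.g. "sleep_V"

        # One concrete "grammar" (here just a lexicon) per language.
        ENGLISH = {"cat_N": "the cat", "sleep_V": "sleeps"}
        DUTCH   = {"cat_N": "de kat",  "sleep_V": "slaapt"}

        def linearize(tree: Pred, lexicon: dict) -> str:
            # Word order and agreement rules would live here in a real concrete grammar.
            return f"{lexicon[tree.subj]} {lexicon[tree.verb]}"

        tree = Pred(subj="cat_N", verb="sleep_V")
        print(linearize(tree, ENGLISH))  # -> the cat sleeps
        print(linearize(tree, DUTCH))    # -> de kat slaapt

    Translation in this view amounts to parsing into the shared abstract tree with one concrete grammar and linearizing it with another.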

    A formalism for machine translation in MTT, including syntactic restructurings

    This paper presents a new formalisation of the transfer-based translation model of the Meaning-Text Theory. Our modelling is based on polarized correspondence grammars and observes a strict separation between the monolingual models, a minimal bilingual lexicon, and universal restructuring rules, directly associated with syntactic lexical functions.

    Translation, syntactic restructurings and correspondence grammars

    This paper presents a new formalisation of the transfer-based translation model of the Meaning-Text Theory. Our modelling uses polarized correspondence grammars and makes a strict separation between the monolingual models, a minimal bilingual lexicon, and universal restructuring rules, directly associated with syntactic lexical functions.

    ANTELOPE - An industrial platform for linguistic processing

    The Antelope linguistic platform, inspired by Meaning-Text Theory, targets the syntactic and semantic analysis of texts and can handle large corpora. Antelope integrates several pre-existing (parsing) components as well as broad-coverage linguistic data originating from various sources. Efforts towards the integration of all components nonetheless make for a homogeneous platform. Our direct contribution deals with components for semantic analysis and the formalization of a unified text analysis model. This paper introduces the platform and compares it with state-of-the-art projects. It offers the NLP community feedback from a software company, underlining the architectural measures that should be taken to ensure that such complex software remains maintainable.

    Multi-Objective Learning for Multi-Modal Natural Language Generation

    One of the important goals of Artificial Intelligence (AI) is to mimic the human ability to leverage the knowledge or skill acquired on previously learned tasks to quickly learn a new task. For example, humans can reapply the learned skill of balancing a bicycle when learning to ride a motorbike. In a similar context, the field of Natural Language Processing (NLP) has several tasks, including machine translation, textual summarization, image/video captioning, sentiment analysis, dialog systems, natural language inference, and question answering. While these different NLP tasks are often trained separately, leveraging the knowledge or skill from related tasks, via joint training or by training one task after another in a sequential fashion, can have potential advantages. To this end, this dissertation explores various NLP tasks (especially multi-modal text generation and pairwise classification tasks, covering both natural language generation (NLG) and natural language understanding (NLU)) that leverage information from related auxiliary tasks in an effective way via novel multi-objective learning strategies. These strategies can be broadly classified into three paradigms: multi-task learning, multi-reward reinforcement learning, and continual learning. In multi-task learning, we focus on finding which related auxiliary tasks can benefit the multi-modal video caption generation task and the textual summarization task, and we explore effective ways of sharing parameters across these related tasks via joint training. In multi-reward reinforcement learning, we teach various skills to multi-modal text generation models in the form of rewards; for example, we teach the entailment skill to the video captioning model with entailment rewards. Further, we propose novel and effective ways of inducing multiple skills by dynamically choosing the auxiliary tasks (in multi-task learning) or rewards (in reinforcement learning) during training, in an automatic way, using multi-armed bandit based approaches. Finally, in continual learning, we explore sharing of information across tasks in a sequential way, where the model continually evolves during sequential training without losing performance on previously learned tasks. This kind of sharing allows later tasks to benefit from previously trained tasks, and vice versa in some cases. For this, we propose a novel method that continually changes the model architecture to accommodate new tasks while retaining performance on old tasks, and we empirically evaluate it on three natural language inference tasks.
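
    The dynamic selection of auxiliary tasks or rewards mentioned above can be sketched with a standard multi-armed bandit loop. The sketch below is illustrative only, not the dissertation's implementation: the task names, the train_step stand-in, and the choice of UCB1 with validation gain as the bandit reward are assumptions.

        import math
        import random

        class UCB1:
            """Standard UCB1 bandit over a fixed set of arms (auxiliary tasks)."""
            def __init__(self, arms):
                self.arms = list(arms)
                self.counts = {a: 0 for a in self.arms}
                self.values = {a: 0.0 for a in self.arms}
                self.total = 0

            def select(self):
                # Play every arm once, then pick by the UCB1 upper confidence bound.
                for a in self.arms:
                    if self.counts[a] == 0:
                        return a
                return max(self.arms, key=lambda a: self.values[a]
                           + math.sqrt(2 * math.log(self.total) / self.counts[a]))

            def update(self, arm, reward):
                self.total += 1
                self.counts[arm] += 1
                # Running mean of the rewards observed for this arm.
                self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

        def train_step(task):
            # Hypothetical stand-in for one mini-batch update on the chosen task;
            # here it simply returns a fake validation gain.
            return random.random()

        bandit = UCB1(["captioning", "entailment_reward", "video_prediction"])
        for _ in range(100):
            task = bandit.select()        # dynamically pick the auxiliary task/reward
            gain = train_step(task)
            bandit.update(task, gain)

    The bandit thus shifts training effort toward whichever auxiliary task has recently produced the largest gains, which is the intuition behind the automatic task/reward selection described in the abstract.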

    Automated Translation with Interlingual Word Representations

    In this dissertation we investigate the use of translation systems that employ a transfer phase with interlingual representations of words. In this way, we approach the problem of lexical ambiguity in machine translation as two separate tasks: word sense disambiguation and lexical selection. First, the words in the source language are disambiguated on the basis of their meaning, resulting in interlingual word representations. Next, a lexical selection module selects the most suitable word in the target language. We give a detailed description of the development and evaluation of translation systems for Dutch-English, which provides a background for the experiments in the second and third parts of this dissertation. We then describe a method for determining the meaning of words. It is comparable to the classic Lesk algorithm, as it builds on the idea that words shared between a word's context and a sense definition carry information about that word's meaning. Instead of word overlap, however, we use word and sense vectors to compute the similarity between the definition of a sense and the context of a word. We additionally apply our method to locating and interpreting puns. Finally, we present a model for lexical choice that selects lemmas given abstract representations of words. We do this by converting the grammatical trees into hidden Markov trees, so that the optimal combination of lemmas and their context can be computed.
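
    The embedding-based variant of Lesk described above can be sketched as follows. The toy vectors, the sense inventory, and the simple averaging scheme are invented for illustration; the thesis works with real word and sense embeddings and dictionary definitions.

        import numpy as np

        def avg_vector(words, emb):
            # Average the embeddings of the known words (a crude bag-of-words vector).
            vecs = [emb[w] for w in words if w in emb]
            return np.mean(vecs, axis=0) if vecs else np.zeros(3)

        def cosine(a, b):
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            return float(a @ b / denom) if denom else 0.0

        # Toy 3-dimensional "embeddings" (hypothetical values).
        emb = {
            "river": np.array([0.9, 0.1, 0.0]),
            "water": np.array([0.8, 0.2, 0.1]),
            "money": np.array([0.0, 0.2, 0.9]),
            "loan":  np.array([0.1, 0.1, 0.9]),
        }

        # Each sense is represented by the words of its (toy) dictionary definition.
        senses = {
            "bank/riverside":   ["river", "water"],
            "bank/institution": ["money", "loan"],
        }

        def disambiguate(context_words):
            # Pick the sense whose definition vector is closest to the context vector.
            ctx = avg_vector(context_words, emb)
            return max(senses, key=lambda s: cosine(ctx, avg_vector(senses[s], emb)))

        print(disambiguate(["swam", "in", "the", "water"]))   # -> bank/riverside
        print(disambiguate(["took", "out", "a", "loan"]))     # -> bank/institution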
