
    Modeling contextual information in neural machine translation

    Machine translation has reached impressive translation quality for many language pairs. The improvements of the past few years are largely due to the introduction of neural networks to the field, resulting in the modern sequence-to-sequence neural machine translation (NMT) models. NMT is at the core of many large-scale industrial tools for automatic translation, such as Google Translate, Microsoft Translator and Amazon Translate. Current NMT models work at the sentence level, meaning they translate individual sentences. However, for most practical use cases, a user wants to translate a document. In these cases, an MT tool splits the document into individual sentences and translates them independently, so any dependencies between the sentences are ignored. This is likely to result in an incoherent document translation, mainly because of inconsistent translation of ambiguous source words or wrong translation of anaphoric pronouns. For example, it is undesirable to translate “bank” as a financial institution in one sentence and later as a river bank. Furthermore, the translation of, e.g., the English third-person pronoun “it” into German depends on the grammatical gender of the German translation of its English antecedent. NMT has impressive modeling capabilities, but it cannot model such discourse-level phenomena without access to contextual information.
In this work, we study discourse-level phenomena in context-aware NMT. To facilitate these studies, we propose several models capable of incorporating contextual information into standard sentence-level NMT models. We focus on several discourse phenomena, namely coreference (anaphora) resolution, coherence and cohesion. We discuss how well these phenomena can be modeled by context-aware NMT, how we can improve upon the current state of the art, and the optimal granularity at which these phenomena should be modeled. We further investigate domain as a factor in context-aware NMT. Finally, we investigate existing challenge sets for anaphora resolution evaluation and provide a robust alternative. We make the following contributions:
i) We study the importance of coreference (anaphora) resolution and coherence for context-aware NMT by making use of oracle information specific to these phenomena.
ii) We propose a method for improving performance on anaphora resolution based on curriculum learning, inspired by the way humans organize learning.
iii) We investigate the use of contextual information for better handling of domain information, in particular when modeling multiple domains at once and when applied to zero-resource domains.
iv) We present several context-aware models that enable us to examine the specific phenomena of interest mentioned above.
v) We study the optimal way of modeling local and global context and present a model theoretically capable of using very large document context.
vi) We study the robustness of challenge sets for the evaluation of anaphora resolution in MT by means of adversarial attacks and provide a template test set that robustly evaluates specific steps of an idealized coreference resolution pipeline for MT.
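The abstract does not spell out the proposed architectures, but a common, minimal way to make a sentence-level NMT system context-aware is to concatenate a few previous source sentences to the current one with a separator token. The sketch below builds such context-augmented training pairs; the separator token and the data layout are illustrative assumptions, not the thesis's models.

```python
# Minimal sketch (not the thesis's models): a common baseline for context-aware
# NMT is to prepend the previous source sentence(s) to the current one with a
# special separator token, so a standard sentence-level model sees local context.
# The separator token name and the data layout are illustrative assumptions.

from typing import List, Tuple

SEP = "<SEP>"  # assumed context separator token, added to the model vocabulary

def build_context_examples(
    src_doc: List[str],      # source document, one sentence per entry
    tgt_doc: List[str],      # target document, aligned sentence by sentence
    context_size: int = 1,   # how many previous source sentences to include
) -> List[Tuple[str, str]]:
    """Turn an aligned document into (context-augmented source, target) pairs."""
    examples = []
    for i, (src, tgt) in enumerate(zip(src_doc, tgt_doc)):
        context = src_doc[max(0, i - context_size):i]
        augmented_src = f" {SEP} ".join(context + [src]) if context else src
        examples.append((augmented_src, tgt))
    return examples

# Example: the pronoun "It" in the second sentence can now be disambiguated
# using the antecedent "The bank" from the first sentence.
src = ["The bank raised its rates.", "It cited inflation concerns."]
tgt = ["Die Bank erhöhte ihre Zinsen.", "Sie verwies auf Inflationssorgen."]
print(build_context_examples(src, tgt))
```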

    Learning and time : on using memory and curricula for language understanding

    The goal of this thesis is to present some of the small steps taken on the path towards solving natural language understanding and learning long-term dependencies, in order to develop artificial intelligence algorithms that can reason with language. This thesis is written as a thesis by articles and contains five articles. Each article proposes a new model or algorithm and demonstrates its effectiveness on problems that involve long-term dependencies or require natural language understanding. Although some of the models are tested on a particular task (such as neural machine translation), the proposed methods are generally applicable to other domains and tasks (and have been used as such in the literature). In the introduction of the thesis, we present some of the fundamental concepts behind training sequence models with neural networks. We first give a brief introduction to neural networks and then describe in more detail some of the approaches and algorithms used throughout the thesis.
In our first article, we propose a novel method to utilize the abundant monolingual data available for training neural machine translation models.
We accomplish this by first training a long short-term memory (LSTM) language model on a large monolingual corpus and then fusing the outputs or the hidden states of the LSTM language model with the decoder of the neural machine translation model. Our neural machine translation model is trained end to end with an attention mechanism. We show that the proposed approach can significantly improve the performance of neural machine translation models on low-resource translation tasks and that it improves the data efficiency of end-to-end neural machine translation systems. We report improvements on Turkish-English (Tr-En), German-English (De-En), Chinese-English (Zh-En) and Czech-English (Cz-En) translation tasks.
In our second article, we propose an approach to address the problem of rare words in natural language processing tasks. Our approach augments the attention-based encoder-decoder architecture by replacing the final softmax layer with our proposed pointer-softmax layer, which creates pointers into the source sentence as the decoder translates. With the pointer-softmax, the model learns to switch, in a probabilistic manner, between copying a word from the source and predicting a word from a shortlist vocabulary. The proposed approach is end-to-end trainable with a single maximum-likelihood objective, and we report significant improvements on machine translation and summarization tasks.
In our "Plan, Attend, Generate: Planning for Sequence-to-Sequence Models" paper, we propose two new approaches to learning alignments in sequence-to-sequence models. When the input and output sequences are very long, learning the alignments can be difficult; in particular, when the decoder is a large network, it can learn to ignore the alignments and attend mostly to the last token of the input sequence. We propose a new approach, inspired by hierarchical reinforcement learning, that extends the model with an explicit planning mechanism. The proposed alignment mechanism plans and computes the alignments for the next k tokens in the decoder. Our model also learns a commitment plan to decide when to recompute the alignment matrix. The proposed approach can learn high-level temporal abstractions, and we show that it qualitatively learns better alignments. We also achieve significant improvements over our baseline despite using smaller models and less training.
In "Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes," we propose a new approach for augmenting neural networks with an explicit memory mechanism. As opposed to conventional RNNs, the memory is not only represented in the activations of the network but also in an external memory that is accessed through a neural network controller. Our model, the D-NTM, uses a simpler memory addressing mechanism than the NTM by associating a key-value pair with each memory cell. We find that models augmented with such an external memory can learn tasks that involve long-term dependencies more efficiently and generalize better. We achieve improvements on many tasks, including episodic question answering on bAbI, entailment reasoning, permuted MNIST and synthetic tasks.
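To make the key-value addressing idea concrete, here is a much-simplified sketch of content-based reads over a key-value memory, in the spirit of the D-NTM described above. It is not the published model: the real D-NTM also learns per-cell address vectors, supports discrete (hard) addressing, and couples the memory to a recurrent controller. Shapes and names are illustrative assumptions.

```python
# Simplified content-based addressing over a key-value memory (D-NTM-flavoured,
# not the published architecture): the controller emits a query, the query is
# matched against per-cell keys, and the read is a softmax-weighted blend of
# the per-cell values.

import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def read_memory(query: np.ndarray, keys: np.ndarray, values: np.ndarray) -> np.ndarray:
    """Soft read: attend over memory cells by key similarity, return a blend of values.

    query:  (d_key,)            content query emitted by the controller
    keys:   (n_cells, d_key)    addressing part of each memory cell
    values: (n_cells, d_value)  content part of each memory cell
    """
    scores = keys @ query          # similarity of the query to every key
    weights = softmax(scores)      # soft addressing distribution over cells
    return weights @ values        # (d_value,) weighted read vector

# Tiny usage example with random memory contents.
rng = np.random.default_rng(0)
keys, values = rng.normal(size=(8, 16)), rng.normal(size=(8, 32))
query = rng.normal(size=16)
print(read_memory(query, keys, values).shape)   # (32,)
```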
In our "Noisy Activation Functions" paper, we propose a novel activation function that makes activations stochastic by injecting a particular form of noise into them. Our motivation is to address the optimization problems caused by the saturating activation functions commonly used in recurrent neural networks. Our approach makes it possible to use piecewise-linear activation functions in gated recurrent neural networks. We show improvements on a wide range of tasks, using the noisy activations as a drop-in replacement and without any extensive hyperparameter search. We also show that annealing the noise of the activation function can have a profound, continuation-like effect on the optimization of the network.
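As a toy illustration of the idea behind noisy saturating activations: when a unit sits in the saturated part of a hard-tanh, inject noise whose magnitude grows with how far the pre-activation is into saturation, so the unit is no longer exactly flat there. The published method uses a more careful parameterization (learned noise scale, half-normal noise biased back toward the linear regime); the simplified form below only conveys the general idea, and all names are assumptions.

```python
# Toy noisy hard-tanh: deterministic inside [-1, 1], stochastic in the
# saturated regions, with noise scaled by the depth of saturation. This is a
# simplification of the published "Noisy Activation Functions" approach.

import numpy as np

def noisy_hard_tanh(x: np.ndarray, noise_std: float = 0.1, train: bool = True) -> np.ndarray:
    linear_part = np.clip(x, -1.0, 1.0)            # ordinary hard-tanh output
    saturation = np.abs(x) - np.abs(linear_part)   # 0 inside [-1, 1], grows outside
    if not train:
        return linear_part                          # deterministic at test time
    noise = np.random.randn(*x.shape) * noise_std * saturation
    return linear_part + noise

x = np.linspace(-3, 3, 7)
print(noisy_hard_tanh(x))               # stochastic outside [-1, 1], exact inside
print(noisy_hard_tanh(x, train=False))  # plain hard-tanh
```

Annealing `noise_std` toward zero over training would correspond to the continuation-like effect on optimization mentioned in the abstract.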

    IndicTrans2: Towards High-Quality and Accessible Machine Translation Models for all 22 Scheduled Indian Languages

    India has a rich linguistic landscape, with languages from 4 major language families spoken by over a billion people. The 22 languages listed in the Constitution of India (referred to as scheduled languages) are the focus of this work. Given this linguistic diversity, high-quality and accessible Machine Translation (MT) systems are essential in a country like India. Prior to this work, there was (i) no parallel training data spanning all 22 languages, (ii) no robust benchmark covering all these languages and containing content relevant to India, and (iii) no existing translation model supporting all 22 scheduled languages of India. In this work, we aim to address this gap by focusing on the missing pieces required for enabling wide, easy, and open access to good machine translation systems for all 22 scheduled Indian languages. We identify four key areas of improvement: curating and creating larger training datasets, creating diverse and high-quality benchmarks, training multilingual models, and releasing models with open access. Our first contribution is the release of the Bharat Parallel Corpus Collection (BPCC), the largest publicly available parallel corpus for Indic languages. BPCC contains a total of 230M bitext pairs, of which 126M are newly added, including 644K manually translated sentence pairs created as part of this work. Our second contribution is the release of the first n-way parallel benchmark covering all 22 Indian languages, featuring diverse domains, Indian-origin content, and source-original test sets. Next, we present IndicTrans2, the first model to support all 22 languages, surpassing existing models on multiple existing and newly created benchmarks. Lastly, to promote accessibility and collaboration, we release our models and associated data with permissive licenses at https://github.com/ai4bharat/IndicTrans2.
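Since the models are released openly, a natural follow-up is how one might load them. The sketch below is a hedged guess at doing so with the Hugging Face transformers library: the checkpoint identifier, the language-tag format, and the need for trust_remote_code are assumptions on my part; the repository linked above documents the actual checkpoints and the preprocessing/postprocessing toolkit expected around the model.

```python
# Hedged sketch only: loading a released IndicTrans2-style checkpoint with the
# Hugging Face transformers library. The checkpoint name and the language-tag
# convention below are assumptions, not taken from the paper; consult the
# linked repository for the actual usage.

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "ai4bharat/indictrans2-en-indic-1B"   # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, trust_remote_code=True)

# Many multilingual MT models expect source/target language tags in the input;
# the exact tag format here is an assumption for illustration.
text = "eng_Latn hin_Deva This work covers all 22 scheduled Indian languages."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```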