1,734 research outputs found
An Operation Sequence Model for Explainable Neural Machine Translation
We propose to achieve explainable neural machine translation (NMT) by changing the output representation so that it explains itself. We present a novel approach to NMT which generates the target sentence by monotonically walking through the source sentence. Word reordering is modeled by operations which allow setting markers in the target sentence and moving a target-side write head between those markers. In contrast to many modern neural models, our system emits explicit word alignment information, which is often crucial to practical machine translation as it improves explainability. Our technique can outperform a plain text system in terms of BLEU score under the recent Transformer architecture on Japanese-English and Portuguese-English, and is within 0.5 BLEU on Spanish-English.
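The abstract does not spell out the concrete operation inventory, but the idea of building the target sentence with a movable write head and explicit markers can be illustrated with a small interpreter. The Python sketch below is illustrative only: the operation names (EMIT, SET_MARKER, JUMP_FWD, JUMP_BWD) and the marker handling are assumptions made for this example, not the paper's exact formalism.

```python
def execute(operations):
    """Build a target sentence by applying operations at a movable write head."""
    buffer = []   # target-side buffer holding tokens and marker symbols
    head = 0      # current write-head position (index into buffer)

    for op, arg in operations:
        if op == "EMIT":                  # write a target token at the head
            buffer.insert(head, arg)
            head += 1
        elif op == "SET_MARKER":          # drop a marker the head can return to
            buffer.insert(head, "<m>")
            head += 1
        elif op == "JUMP_FWD":            # move the head just past the next marker
            head = buffer.index("<m>", head) + 1
        elif op == "JUMP_BWD":            # move the head just past the previous marker
            prev = max(i for i in range(head) if buffer[i] == "<m>")
            head = prev + 1
        else:
            raise ValueError(f"unknown operation: {op}")

    # Strip markers to obtain the final surface string.
    return " ".join(tok for tok in buffer if tok != "<m>")

# Reordering example: emit "he has", leave a marker, emit "the book",
# then jump back to the marker and insert "read".
ops = [("EMIT", "he"), ("EMIT", "has"), ("SET_MARKER", None),
       ("EMIT", "the"), ("EMIT", "book"), ("JUMP_BWD", None),
       ("EMIT", "read")]
print(execute(ops))   # -> "he has read the book"
```

Because the operations are produced while the source is traversed monotonically, the operation sequence itself records which source position each emitted target word came from, which is the explicit alignment information the abstract refers to.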
Source sentence simplification for statistical machine translation
Long sentences with complex syntax and long-distance dependencies pose difficulties for machine translation systems. Short sentences, on the other hand, are usually easier to translate. We study the potential of addressing this mismatch using text simplification: given a simplified version of the full input sentence, can we use it in addition to the full input to improve translation? We show that the spaces of original and simplified translations can be effectively combined using translation lattices, and compare two decoding approaches to process both inputs at different levels of integration. We demonstrate, on source-annotated portions of WMT test sets and on top of strong baseline systems combining hierarchical and neural translation for two language pairs, that source simplification can help to improve translation quality. This work was supported by the EPSRC grant Improving Target Language Fluency in Statistical Machine Translation, grant number EP/L027623/1.
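As a rough intuition for how the two translation spaces can be combined, the sketch below unions two scored hypothesis sets, one decoded from the original sentence and one from its simplified version. This is a deliberate simplification: the paper combines full translation lattices inside the decoder, whereas this example only merges n-best lists, and the penalty parameter is an assumption made for illustration.

```python
def combine_spaces(nbest_original, nbest_simplified, penalty=0.5):
    """Union of two scored hypothesis sets (higher score = better)."""
    combined = dict(nbest_original)
    for hyp, score in nbest_simplified:
        # Hypotheses produced only from the simplified input pay a small penalty,
        # so the full input remains the primary source of translations.
        adjusted = score - penalty
        if adjusted > combined.get(hyp, float("-inf")):
            combined[hyp] = adjusted
    return max(combined.items(), key=lambda kv: kv[1])

nbest_orig = [("the contract which was signed yesterday is valid", -4.2)]
nbest_simp = [("the contract signed yesterday is valid", -3.1)]
print(combine_spaces(nbest_orig, nbest_simp))
# -> ('the contract signed yesterday is valid', -3.6)
```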
The Roles of Language Models and Hierarchical Models in Neural Sequence-to-Sequence Prediction
With the advent of deep learning, research in many areas of machine learning is converging towards the same set of methods and models. For example, long short-term memory networks are not only popular for various tasks in natural language processing (NLP) such as speech recognition, machine translation, handwriting recognition, and syntactic parsing, but they are also applicable to seemingly unrelated fields such as robot control, time series prediction, and bioinformatics. Recent advances in contextual word embeddings like BERT boast state-of-the-art results on 11 NLP tasks with the same model. Before deep learning, a speech recognizer and a syntactic parser used to have little in common, as systems were much more tailored towards the task at hand.
At the core of this development is the tendency to view each task as yet another data mapping problem, neglecting the particular characteristics and (soft) requirements that tasks often have in practice. This often goes along with a sharp break between deep learning methods and previous research in the specific area. This work can be understood as an antithesis to that paradigm. We show how traditional symbolic statistical machine translation models can still improve neural machine translation (NMT) while reducing the risk of common pathologies of NMT such as hallucinations and neologisms. Other external symbolic models such as spell checkers and morphology databases help neural grammatical error correction. We also focus on language models, which often do not play a role in vanilla end-to-end approaches, and apply them in different ways to word reordering, grammatical error correction, low-resource NMT, and document-level NMT. Finally, we demonstrate the benefit of hierarchical models in sequence-to-sequence prediction. Hand-engineered covering grammars are effective in preventing catastrophic errors in neural text normalization systems. Our operation sequence model for interpretable NMT represents translation as a series of actions that modify the translation state, and can also be seen as a derivation in a formal grammar. EPSRC grant EP/L027623/1; EPSRC Tier-2 capital grant EP/P020259/
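One of the simplest ways an external language model can be brought back into an end-to-end system, in the spirit described above, is log-linear rescoring of the NMT n-best list. The sketch below is a hedged illustration: the interpolation weight and the toy repetition-penalizing "language model" are stand-ins invented for this example, not the integration schemes developed in the thesis.

```python
def rescore(nbest, lm_score, lm_weight=0.2):
    """Re-rank (hypothesis, nmt_logprob) pairs with an external LM score."""
    scored = [(hyp, nmt_lp + lm_weight * lm_score(hyp)) for hyp, nmt_lp in nbest]
    return sorted(scored, key=lambda kv: kv[1], reverse=True)

def toy_lm(hyp):
    # Stand-in for a real n-gram or neural LM log-probability:
    # penalize immediate word repetitions, a typical NMT pathology.
    words = hyp.split()
    repeats = sum(1 for a, b in zip(words, words[1:]) if a == b)
    return -5.0 * repeats

nbest = [("the the cat sat on the mat", -3.0),
         ("the cat sat on the mat", -3.2)]
print(rescore(nbest, toy_lm)[0][0])   # -> "the cat sat on the mat"
```

The point of the example is that the external model can overrule a degenerate hypothesis the end-to-end model happens to prefer, which is the kind of pathology reduction the abstract describes.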
Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNNs). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases. Comment: EMNLP 2014
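The architecture described above can be sketched in a few lines: one RNN compresses the source into a fixed-length vector, and a second RNN unfolds that vector into the target sequence. The GRU cells, layer sizes, and PyTorch framework below are choices made for this sketch (the paper introduces its own gated hidden unit); joint training on the conditional log-likelihood follows the abstract.

```python
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb=64, hidden=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt_in):
        # Encode: the final hidden state is the fixed-length summary vector c.
        _, c = self.encoder(self.src_emb(src))
        # Decode: unfold c into target-side states, then project to the vocabulary.
        dec_states, _ = self.decoder(self.tgt_emb(tgt_in), c)
        return self.out(dec_states)        # logits over the target vocabulary

# Joint training maximizes log p(target | source) via cross-entropy.
model = EncoderDecoder(src_vocab=1000, tgt_vocab=1000)
src = torch.randint(0, 1000, (8, 12))      # batch of source token ids
tgt_in = torch.randint(0, 1000, (8, 10))   # shifted target token ids
tgt_out = torch.randint(0, 1000, (8, 10))  # gold next-token ids
logits = model(src, tgt_in)
loss = nn.functional.cross_entropy(logits.reshape(-1, 1000), tgt_out.reshape(-1))
loss.backward()
```

In the paper's SMT setting, the trained model is not used as a standalone translator; its phrase-pair scores are added as an extra feature to the existing log-linear model, as the abstract states.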