Overcoming the Rare Word Problem for Low-Resource Language Pairs in Neural Machine Translation
Among the six challenges of neural machine translation (NMT) identified by Koehn
and Knowles (2017), the rare-word problem is considered the most severe,
especially when translating low-resource languages. In this paper, we propose
three solutions to address rare words in neural machine translation
systems. First, we enhance the source context used to predict target words by
directly connecting the source embeddings to the output of the attention
component in NMT. Second, we propose an algorithm that learns the morphology of
unknown English words in a supervised way in order to minimize the adverse
effect of the rare-word problem. Finally, we exploit synonym relations from
WordNet to overcome the out-of-vocabulary (OOV) problem in NMT. We evaluate our
approaches on two low-resource language pairs: English-Vietnamese and
Japanese-Vietnamese. In our experiments, we achieve significant
improvements of up to roughly +1.0 BLEU points on both language pairs.
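The third idea above, replacing OOV source words with in-vocabulary synonyms before translation, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the tiny synonym table is a hypothetical stand-in for WordNet lookups.

```python
# Illustrative synonym table standing in for WordNet (assumption, not the
# paper's actual resource).
SYNONYMS = {
    "automobile": ["car", "vehicle"],
    "physician": ["doctor"],
    "purchase": ["buy", "acquire"],
}

def replace_oov(tokens, vocab, synonyms=SYNONYMS):
    """Map each OOV token to its first synonym found in the NMT vocabulary;
    tokens with no in-vocabulary synonym are left unchanged."""
    out = []
    for tok in tokens:
        if tok in vocab:
            out.append(tok)
        else:
            out.append(next((s for s in synonyms.get(tok, []) if s in vocab), tok))
    return out

vocab = {"i", "want", "to", "buy", "a", "car"}
print(replace_oov(["i", "want", "to", "purchase", "a", "automobile"], vocab))
# -> ['i', 'want', 'to', 'buy', 'a', 'car']
```

In a real system the vocabulary would be the NMT model's source vocabulary and the synonym sets would come from WordNet synsets.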
Benefits of data augmentation for NMT-based text normalization of user-generated content
One of the most persistent characteristics of written user-generated content (UGC) is the use of non-standard words. This characteristic contributes to the increased difficulty of automatically processing and analyzing UGC. Text normalization is the task of transforming lexical variants into their canonical forms and is often used as a pre-processing step for conventional NLP tasks in order to overcome the performance drop that NLP systems experience when applied to UGC. In this work, we follow a Neural Machine Translation approach to text normalization. To train such an encoder-decoder model, large parallel training corpora of sentence pairs are required. However, obtaining large data sets of UGC and its normalized version is not trivial, especially for languages other than English. In this paper, we explore how to overcome this data bottleneck for Dutch, a low-resource language. We start off with a publicly available tiny parallel Dutch data set comprising three UGC genres and compare two different approaches. The first is to manually normalize and add training data, a costly and time-consuming task. The second approach is a set of data augmentation techniques that increase the data size by converting existing resources into synthesized non-standard forms. Our results reveal that a combination of both approaches leads to the best results.
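The data-augmentation approach, synthesizing non-standard forms from canonical text to create extra parallel pairs, can be sketched as below. The specific corruption rules (vowel dropping, expressive lengthening) are assumptions chosen for illustration, not the paper's exact technique set.

```python
import random

def drop_vowels(word):
    """Drop internal vowels, a common UGC-style shortening ('goed' -> 'gd')."""
    if len(word) <= 3:
        return word
    return word[0] + "".join(c for c in word[1:-1] if c not in "aeiou") + word[-1]

def repeat_last(word):
    """Expressive lengthening of the final character ('leuk' -> 'leukkk')."""
    return word + word[-1] * 2

def synthesize_pair(sentence, rng):
    """Return a (noisy, canonical) training pair by corrupting random words."""
    rules = [drop_vowels, repeat_last]
    noisy = [rng.choice(rules)(w) if rng.random() < 0.5 else w
             for w in sentence.split()]
    return " ".join(noisy), sentence

rng = random.Random(0)
noisy, clean = synthesize_pair("dat is echt heel goed gedaan", rng)
print(noisy, "->", clean)
```

Each synthesized pair is trained in the noisy-to-canonical direction, mirroring the normalization task the encoder-decoder model must learn.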
Article Retracted
Mimicking Word Embeddings using Subword RNNs
Word embeddings improve generalization over lexical features by placing each
word in a lower-dimensional space, using distributional information obtained
from unlabeled data. However, the effectiveness of word embeddings for
downstream NLP tasks is limited by out-of-vocabulary (OOV) words, for which
embeddings do not exist. In this paper, we present MIMICK, an approach to
generating OOV word embeddings compositionally, by learning a function from
spellings to distributional embeddings. Unlike prior work, MIMICK does not
require re-training on the original word embedding corpus; instead, learning is
performed at the type level. Intrinsic and extrinsic evaluations demonstrate
the power of this simple approach. On 23 languages, MIMICK improves performance
over a word-based baseline for tagging part-of-speech and morphosyntactic
attributes. It is competitive with (and complementary to) a supervised
character-based model in low-resource settings. (Comment: EMNLP 2017)
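MIMICK itself trains a character RNN to regress from a word's spelling to its pre-trained embedding. As a minimal, dependency-free stand-in for the same compositional idea, building an OOV vector from subword units, the sketch below averages deterministic hash-based vectors of character trigrams (a fastText-style substitute, not MIMICK's actual model); the dimensionality and hashing scheme are illustrative assumptions.

```python
import hashlib

DIM = 8  # illustrative embedding size

def ngram_vector(ngram, dim=DIM):
    """Deterministic pseudo-embedding for one character n-gram, derived from
    its MD5 digest and scaled into [-1, 1)."""
    digest = hashlib.md5(ngram.encode()).digest()
    return [(b - 128) / 128.0 for b in digest[:dim]]

def oov_embedding(word, n=3, dim=DIM):
    """Average the vectors of all character n-grams of the boundary-padded
    word, so similar spellings yield similar vectors."""
    padded = f"<{word}>"
    ngrams = [padded[i:i + n] for i in range(len(padded) - n + 1)] or [padded]
    vec = [0.0] * dim
    for g in ngrams:
        for i, v in enumerate(ngram_vector(g, dim)):
            vec[i] += v
    return [v / len(ngrams) for v in vec]

print(oov_embedding("unhappiness")[:3])
```

Like MIMICK, this operates at the type level: the vector is computed once per spelling, with no re-training on the original embedding corpus.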