Unsupervised Neural Machine Translation with SMT as Posterior Regularization
Without a real bilingual corpus available, unsupervised Neural Machine Translation (NMT) typically requires pseudo-parallel data generated with the back-translation method for model training. However, due to weak supervision, the pseudo data inevitably contain noise and errors that are accumulated and reinforced in the subsequent training process, leading to poor translation performance. To address this issue, we introduce phrase-based Statistical Machine Translation (SMT) models, which are robust to noisy data, as posterior regularizations to guide the training of unsupervised NMT models in the iterative back-translation process. Our method starts from SMT models built with pre-trained language models and word-level translation tables inferred from cross-lingual embeddings. SMT and NMT models are then optimized jointly and boost each other incrementally in a unified EM framework. In this way, (1) the negative effect caused by errors in the iterative back-translation process can be alleviated in a timely manner by the SMT models filtering noise from their phrase tables; meanwhile, (2) NMT can compensate for the deficiency in fluency inherent to SMT. Experiments conducted on en-fr and en-de translation tasks show that our method outperforms the strong baseline and achieves new state-of-the-art unsupervised machine translation performance.
Comment: To be presented at AAAI 2019; 9 pages, 4 figures
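The filtering role the abstract assigns to SMT can be illustrated with a toy sketch of one back-translation round. All functions and tables below are illustrative stand-ins (a dictionary lookup in place of a real NMT model, a word set in place of a real phrase table), not the authors' implementation:

```python
# Minimal sketch: iterative back-translation with an SMT-style filter
# acting as a posterior regularizer. Toy stand-ins throughout.

def nmt_translate(sentence, table):
    # Toy word-by-word "NMT" stand-in: look up each token in a table.
    return [table.get(tok, tok) for tok in sentence.split()]

def smt_filter(pairs, phrase_table):
    # Keep only pseudo-parallel pairs whose target words are licensed by
    # the (toy) SMT phrase table -- the noise-filtering role of SMT.
    return [(s, t) for s, t in pairs if all(w in phrase_table for w in t)]

def back_translation_round(mono_src, table, phrase_table):
    # Generate pseudo-parallel data from monolingual text, then filter it
    # before it is fed back into training.
    pseudo = [(s, nmt_translate(s, table)) for s in mono_src]
    return smt_filter(pseudo, phrase_table)

# Toy run: one pair survives filtering, one is dropped as noisy.
table = {"chat": "cat", "chien": "dog"}
phrase_table = {"cat", "sat"}  # "dog" is not licensed -> filtered out
clean = back_translation_round(["chat", "chien"], table, phrase_table)
```

In the full method this round would alternate with re-estimating both models, EM-style; the sketch shows only the generate-then-filter step.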
Resolving Out-of-Vocabulary Words with Bilingual Embeddings in Machine Translation
Out-of-vocabulary words account for a large proportion of errors in machine translation systems, especially when the system is used on a different domain than the one where it was trained. In order to alleviate the problem, we propose to use a log-bilinear softmax-based model for vocabulary expansion, such that given an out-of-vocabulary source word, the model generates a probabilistic list of possible translations in the target language. Our model uses only word embeddings trained on large unlabelled monolingual corpora and is trained on a fairly small word-to-word bilingual dictionary. We feed this probabilistic list into a standard phrase-based statistical machine translation system and obtain consistent improvements in translation quality on the English-Spanish language pair. In particular, we obtain an improvement of 3.9 BLEU points when testing on an out-of-domain test set.
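The scoring step of such a bilinear model can be sketched in a few lines: the source word's embedding is compared against every target-vocabulary embedding through a learned interaction matrix, and a softmax turns the scores into the probabilistic translation list. The dimensions, the random matrix `W`, and the three-word toy vocabulary below are illustrative, not trained values:

```python
import numpy as np

# Hedged sketch of a bilinear softmax scorer for OOV translation.
rng = np.random.default_rng(0)
d = 4
W = rng.standard_normal((d, d))           # bilinear interaction matrix
src_vec = rng.standard_normal(d)          # embedding of the OOV source word
tgt_vocab = ["gato", "perro", "casa"]
tgt_vecs = rng.standard_normal((3, d))    # target-word embeddings

scores = tgt_vecs @ W @ src_vec           # bilinear scores e_t^T W e_s
probs = np.exp(scores - scores.max())
probs /= probs.sum()                      # softmax over the target vocabulary

# Probabilistic list of candidate translations, best first.
ranked = sorted(zip(tgt_vocab, probs), key=lambda p: -p[1])
```

Training would fit `W` (and any bias terms) on the small seed dictionary; only the embeddings require large monolingual corpora.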
Distributional semantics and machine learning for statistical machine translation
In this work, we explore the use of distributional semantics and machine learning to improve statistical machine translation. For that purpose, we propose the use of a logistic-regression-based machine learning model for dynamic phrase translation probability modeling. We prove that the proposed model can be seen as a generalization of the standard translation probabilities used in statistical machine translation, and use it to incorporate context and distributional semantic information through lexical, word cluster and word embedding features. Apart from that, we explore the use of word embeddings for phrase translation probability scoring as an alternative approach to incorporating distributional semantic knowledge into statistical machine translation. Our experiments show the effectiveness of the proposed models, achieving promising results over a strong baseline. At the same time, our work makes important contributions in relation to bilingual word embedding mappings and word embedding based phrase similarity measures, which go beyond machine translation and have an intrinsic value in the field of distributional semantics.
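One common way to obtain the bilingual word embedding mappings this abstract mentions is to learn a linear map from a small seed dictionary via orthogonal Procrustes; whether the thesis uses exactly this variant is an assumption, and the data below are random toy vectors constructed so the recovery is checkable:

```python
import numpy as np

# Sketch: learn an orthogonal mapping W so that source seed embeddings X,
# mapped as X @ W, land on their target counterparts Z.
rng = np.random.default_rng(1)
d = 5
X = rng.standard_normal((20, d))      # source-language seed embeddings
# Build the "target" side from a known rotation so recovery is checkable.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
Z = X @ Q                             # target-language seed embeddings

# Procrustes solution: W = U V^T from the SVD of X^T Z.
U, _, Vt = np.linalg.svd(X.T @ Z)
W = U @ Vt                            # learned orthogonal mapping

mapped = X @ W                        # source embeddings in target space
err = np.linalg.norm(mapped - Z)      # ~0 on this constructed example
```

Once `W` is learned, any source embedding can be mapped into the target space and compared to target embeddings with cosine similarity, which is the basis of embedding-based phrase similarity measures.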
BattRAE: Bidimensional Attention-Based Recursive Autoencoders for Learning Bilingual Phrase Embeddings
In this paper, we propose a bidimensional attention-based recursive
autoencoder (BattRAE) to integrate clues and source-target interactions at
multiple levels of granularity into bilingual phrase representations. We employ
recursive autoencoders to generate tree structures of phrases with embeddings
at different levels of granularity (e.g., words, sub-phrases and phrases). Over
these embeddings on the source and target side, we introduce a bidimensional
attention network to learn their interactions encoded in a bidimensional
attention matrix, from which we extract two soft attention weight distributions
simultaneously. These weight distributions enable BattRAE to generate
compositive phrase representations via convolution. Based on the learned phrase
representations, we further use a bilinear neural model, trained via a
max-margin method, to measure bilingual semantic similarity. To evaluate the
effectiveness of BattRAE, we incorporate this semantic similarity as an
additional feature into a state-of-the-art SMT system. Extensive experiments on
NIST Chinese-English test sets show that our model achieves a substantial
improvement of up to 1.63 BLEU points on average over the baseline.
Comment: 7 pages, accepted by AAAI 201
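The central step the abstract describes (a bidimensional attention matrix from which two soft weight distributions are extracted, one per side) can be sketched with toy numpy tensors. Shapes, the random data, and the max-pooling used to reduce the matrix to per-side scores are illustrative choices, not the paper's exact architecture:

```python
import numpy as np

# Toy sketch of the bidimensional attention step in a BattRAE-like model.
def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(2)
d = 4
src_nodes = rng.standard_normal((3, d))  # word/sub-phrase/phrase embeddings
tgt_nodes = rng.standard_normal((5, d))  # same, for the target side

A = src_nodes @ tgt_nodes.T              # bidimensional attention matrix
src_attn = softmax(A.max(axis=1))        # one soft weight per source node
tgt_attn = softmax(A.max(axis=0))        # one soft weight per target node

src_repr = src_attn @ src_nodes          # attention-weighted phrase vector
tgt_repr = tgt_attn @ tgt_nodes
```

In the full model the two phrase vectors would then feed a bilinear similarity scorer trained with a max-margin objective, and that similarity becomes an extra SMT feature.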
The TALP–UPC Spanish–English WMT biomedical task: bilingual embeddings and char-based neural language model rescoring in a phrase-based system
This paper describes the TALP–UPC system in the Spanish–English WMT 2016 biomedical shared task. Our system is a standard phrase-based system enhanced with vocabulary expansion using bilingual word embeddings and a character-based neural language model with rescoring. The former focuses on resolving out-of-vocabulary words, while the latter enhances the fluency of the system. The two modules progressively improve the final translation as measured by a combination of several lexical metrics.
Postprint (published version)
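Rescoring of the kind this abstract describes typically re-ranks the decoder's n-best list by combining its model score with the language model's score. The interpolation weight, the toy n-best list, and the fake length-based "LM" below are illustrative stand-ins for the system's character-based neural language model:

```python
# Minimal sketch of n-best rescoring with an auxiliary LM score.
def rescore(nbest, lm_score, weight=0.5):
    # nbest: list of (hypothesis, decoder_score) pairs; higher is better.
    return sorted(
        ((hyp, score + weight * lm_score(hyp)) for hyp, score in nbest),
        key=lambda p: -p[1],
    )

# Toy LM that penalizes length (stand-in for a char-based neural LM).
fake_lm = lambda hyp: -0.1 * len(hyp)
nbest = [("the the cell growth", -2.0), ("the cell growth", -2.1)]
best = rescore(nbest, fake_lm)[0][0]   # the LM demotes the disfluent hypothesis
```

In practice the interpolation weight would be tuned on a development set alongside the other feature weights.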