BattRAE: Bidimensional Attention-Based Recursive Autoencoders for Learning Bilingual Phrase Embeddings
In this paper, we propose a bidimensional attention-based recursive
autoencoder (BattRAE) to integrate clues and source-target interactions at
multiple levels of granularity into bilingual phrase representations. We employ
recursive autoencoders to generate tree structures of phrases with embeddings
at different levels of granularity (e.g., words, sub-phrases and phrases). Over
these embeddings on the source and target side, we introduce a bidimensional
attention network to learn their interactions encoded in a bidimensional
attention matrix, from which we extract two soft attention weight distributions
simultaneously. These weight distributions enable BattRAE to generate
compositive phrase representations via convolution. Based on the learned phrase
representations, we further use a bilinear neural model, trained via a
max-margin method, to measure bilingual semantic similarity. To evaluate the
effectiveness of BattRAE, we incorporate this semantic similarity as an
additional feature into a state-of-the-art SMT system. Extensive experiments on
NIST Chinese-English test sets show that our model achieves a substantial
improvement of up to 1.63 BLEU points on average over the baseline.
Comment: 7 pages, accepted by AAAI 2017
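To make the architecture concrete, the following is a minimal numpy sketch of the bidimensional attention and bilinear scoring steps as the abstract describes them; the pooling choice, dimensions, and all function names are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def bidimensional_attention(S, T, W):
    """S: (m, d) source-side node embeddings (words, sub-phrases, phrase);
    T: (n, d) target-side node embeddings; W: (d, d) attention parameters.
    Returns one compositive phrase vector per side."""
    A = np.tanh(S @ W @ T.T)          # (m, n) bidimensional attention matrix
    a_src = softmax(A.mean(axis=1))   # soft attention weights over source nodes
    a_tgt = softmax(A.mean(axis=0))   # soft attention weights over target nodes
    # The "convolution" amounts to a weighted sum of node embeddings per side.
    return a_src @ S, a_tgt @ T

def bilinear_score(s, t, M):
    """Bilinear semantic similarity s^T M t between the two phrase vectors."""
    return float(s @ M @ t)

def max_margin_loss(pos_score, neg_score, margin=1.0):
    """Hinge loss pushing a true phrase pair above a corrupted one."""
    return max(0.0, margin + neg_score - pos_score)

rng = np.random.default_rng(0)
d = 8
S, T = rng.normal(size=(5, d)), rng.normal(size=(4, d))
W, M = rng.normal(size=(d, d)), rng.normal(size=(d, d))
s, t = bidimensional_attention(S, T, W)
print(bilinear_score(s, t, M))
```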
Word Representation Models for Morphologically Rich Languages in Neural Machine Translation
Dealing with the complex word forms in morphologically rich languages is an
open problem in language processing, and is particularly important in
translation. In contrast to most modern neural translation systems, which
discard the identity of rare words, in this paper we propose several
architectures for learning word representations from character and morpheme
level word decompositions. We incorporate these representations in a novel
machine translation model which jointly learns word alignments and translations
via a hard attention mechanism. Evaluating on translating from several
morphologically rich languages into English, we show consistent improvements
over strong baseline methods, of between 1 and 1.5 BLEU points.
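As a rough illustration of the character-level side of such architectures, the sketch below composes a word vector by averaging character n-gram embeddings, so rare or unseen word forms still receive a representation; the class, dimensions, and n-gram scheme are assumptions for illustration, not the paper's model.

```python
import numpy as np

def char_ngrams(word, n=3):
    """Character trigrams with boundary markers: 'cat' -> ['<ca', 'cat', 'at>']."""
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

class SubwordEmbedder:
    """Compose a word vector from its sub-units (character n-grams here;
    morpheme segments would work the same way)."""

    def __init__(self, dim=64, seed=0):
        self.dim = dim
        self.rng = np.random.default_rng(seed)
        self.table = {}  # sub-unit -> vector, grown on demand

    def _vec(self, unit):
        if unit not in self.table:
            self.table[unit] = self.rng.normal(scale=0.1, size=self.dim)
        return self.table[unit]

    def embed(self, word):
        return np.mean([self._vec(u) for u in char_ngrams(word)], axis=0)

emb = SubwordEmbedder()
print(emb.embed("uncharacteristically").shape)  # a rare form still gets a vector
```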
Distributional semantics and machine learning for statistical machine translation
In this work, we explore the use of distributional semantics and machine learning to
improve statistical machine translation. For that purpose, we propose the use of a
logistic regression-based machine learning model for dynamic phrase translation
probability modeling. We prove that the proposed model can be seen as a generalization
of the standard
translation probabilities used in statistical machine translation, and use it to incorporate
context and distributional semantic information through lexical, word cluster and word
embedding features. Apart from that, we explore the use of word embeddings for phrase
translation probability scoring as an alternative approach to incorporate distributional
semantic knowledge into statistical machine translation. Our experiments show the
effectiveness of the proposed models, achieving promising results over a strong baseline.
At the same time, our work makes important contributions in relation to bilingual word
embedding mappings and word embedding based phrase similarity measures, which go
beyond machine translation and have an intrinsic value in the field of distributional
semantics.
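The logistic regression view can be sketched in a few lines, assuming a hand-rolled sparse feature extractor; the feature templates and weights below are hypothetical. The point of the construction is that with only the phrase-pair indicator feature at maximum-likelihood weights, the softmax reduces to the standard relative-frequency translation probability.

```python
import numpy as np

def features(src, tgt, context):
    """Illustrative features: a phrase-pair indicator plus source-context
    indicators (word-cluster or embedding features could be added the same way)."""
    feats = {f"pair={src}|||{tgt}": 1.0}
    for w in context:
        feats[f"ctx={w}|||{tgt}"] = 1.0
    return feats

def translation_probs(src, candidates, context, weights):
    """Softmax over the candidate translations of one source phrase."""
    scores = np.array([
        sum(weights.get(f, 0.0) * v
            for f, v in features(src, tgt, context).items())
        for tgt in candidates
    ])
    e = np.exp(scores - scores.max())
    return dict(zip(candidates, e / e.sum()))

w = {"pair=etxe|||house": 1.2, "pair=etxe|||home": 0.9, "ctx=gure|||home": 0.5}
print(translation_probs("etxe", ["house", "home"], ["gure"], w))
```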
An empirical analysis of phrase-based and neural machine translation
Two popular types of machine translation (MT) are phrase-based and neural
machine translation systems. Both of these types of systems are composed of
multiple complex models or layers. Each of these models and layers learns
different linguistic aspects of the source language. However, for some of these
models and layers, it is not clear which linguistic phenomena are learned or
how this information is learned. For phrase-based MT systems, it is often clear
what information is learned by each model, and the question is rather how this
information is learned, especially for the phrase reordering model. For neural
machine translation systems, the situation is even more complex, since in many
cases it is not exactly clear what information is learned and how it is
learned.
To shed light on what linguistic phenomena are captured by MT systems, we
analyze the behavior of important models in both phrase-based and neural MT
systems. We consider phrase reordering models from phrase-based MT systems to
investigate which words inside a phrase have the biggest impact on
defining the phrase reordering behavior. Additionally, to contribute to the
interpretability of neural MT systems we study the behavior of the attention
model, which is a key component in neural MT systems and the closest model in
functionality to phrase reordering models in phrase-based systems. The
attention model together with the encoder hidden state representations form the
main components to encode source side linguistic information in neural MT. To
this end, we also analyze the information captured in the encoder hidden state
representations of a neural MT system. We investigate the extent to which
syntactic and lexical-semantic information from the source side is captured by
hidden state representations of different neural MT architectures.
Comment: PhD thesis, University of Amsterdam, October 2020.
https://pure.uva.nl/ws/files/51388868/Thesis.pd
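One simple probe in this spirit: treat each target step's attention distribution over source positions as a soft alignment and measure how concentrated it is with entropy. The sketch below uses a synthetic attention matrix; an actual analysis would extract the matrix from a trained model.

```python
import numpy as np

def attention_entropy(row, eps=1e-12):
    """Entropy of one target step's attention over source positions.
    Low entropy = sharp, alignment-like; high entropy = diffuse attention."""
    p = row / row.sum()
    return float(-(p * np.log(p + eps)).sum())

# Synthetic 2 (target steps) x 5 (source positions) attention matrix.
attn = np.array([
    [0.90, 0.05, 0.03, 0.01, 0.01],  # behaves like a hard word alignment
    [0.25, 0.20, 0.20, 0.20, 0.15],  # spreads over the whole source
])
for i, row in enumerate(attn):
    print(f"target step {i}: entropy = {attention_entropy(row):.2f}")
```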
Dependency-based Bilingual Word Embeddings and Neural Machine Translation
Bilingual word embeddings, which represent lexicons from various languages in a
common embedding space, are critical for facilitating semantic and knowledge
transfers in a wide range of cross-lingual NLP applications. The significance of
learning bilingual word embedding representations in many Natural Language
Processing (NLP) tasks motivates us to investigate the effect of many factors,
including syntactical information, on the learning process for different languages
with varying levels of structural complexity. By analysing the components that
influence the learning process of bilingual word embeddings (BWEs), this thesis
examines some factors for learning bilingual word embeddings effectively. Our
findings in this thesis demonstrate that increasing the embedding size for language
pairs has a positive impact
on the learning process for BWEs. The effect of sentence length, however, depends
on the language: short sentences perform better than long ones in the En-ES
experiment, whereas the En-Ar and En-De experiments achieve improved model accuracy
with longer sentences. Arabic segmentation, according to the En-Ar experiments, is
essential to the learning process for BWEs and can boost model accuracy by up to 10%.
Incorporating dependency features into the learning process enhances the trained
models' performance and results in improved BWEs for all language pairs.
Finally, we investigated how dependency-based pretrained BWEs affect the neural
machine translation (NMT) model. The findings indicate that across various MT
evaluation metrics, the trained dependency-based NMT models outperform the
baseline NMT model.
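The dependency features above can be pictured with the classic dependency-context construction, pairing each word with its syntactic head and dependents instead of linear-window neighbours; the sketch below only extracts such contexts, and feeding them into a BWE trainer is left out.

```python
def dependency_contexts(tokens, heads, labels):
    """Pair each word with its head (word/label) and, inversely, each head
    with its dependent (word/label-1), as in dependency-based embeddings."""
    pairs = []
    for word, head, label in zip(tokens, heads, labels):
        if head >= 0:  # -1 marks the root, which has no head
            pairs.append((word, f"{tokens[head]}/{label}"))
            pairs.append((tokens[head], f"{word}/{label}-1"))
    return pairs

# "the brown fox jumped": heads are token indices, -1 for the root.
tokens = ["the", "brown", "fox", "jumped"]
heads = [2, 2, 3, -1]
labels = ["det", "amod", "nsubj", "root"]
print(dependency_contexts(tokens, heads, labels))
```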
ParaNMT-50M: Pushing the Limits of Paraphrastic Sentence Embeddings with Millions of Machine Translations
We describe ParaNMT-50M, a dataset of more than 50 million English-English
sentential paraphrase pairs. We generated the pairs automatically by using
neural machine translation to translate the non-English side of a large
parallel corpus, following Wieting et al. (2017). Our hope is that ParaNMT-50M
can be a valuable resource for paraphrase generation and can provide a rich
source of semantic knowledge to improve downstream natural language
understanding tasks. To show its utility, we use ParaNMT-50M to train
paraphrastic sentence embeddings that outperform all supervised systems on
every SemEval semantic textual similarity competition, in addition to showing
how it can be used for paraphrase generation.
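The construction recipe reduces to a short loop, sketched below; translate_to_english stands in for any foreign-to-English NMT system and is purely hypothetical, as is the toy corpus.

```python
def make_paraphrase_pairs(parallel_corpus, translate_to_english):
    """parallel_corpus: iterable of (foreign_sentence, english_reference) pairs.
    Back-translating the foreign side yields an English-English sentential
    paraphrase pair: the human reference versus the machine output."""
    for foreign, english_ref in parallel_corpus:
        english_mt = translate_to_english(foreign)
        # Skip degenerate pairs where the MT output copies the reference.
        if english_mt.strip().lower() != english_ref.strip().lower():
            yield english_ref, english_mt

corpus = [("Dobrý den, světe.", "Hello, world.")]
fake_mt = lambda s: "Hi, world."  # stand-in for a real NMT system
print(list(make_paraphrase_pairs(corpus, fake_mt)))
```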