Neural Machine Translation with Word Predictions
In the encoder-decoder architecture for neural machine translation (NMT), the
hidden states of the recurrent structures in the encoder and decoder carry the
crucial information about the sentence. These vectors are generated by
parameters that are updated by back-propagating translation errors through
time. We argue that propagating errors through the end-to-end recurrent
structure is not a direct way of controlling the hidden vectors. In this paper,
we propose to use word predictions as a mechanism for direct supervision. More
specifically, we require these vectors to be able to predict the vocabulary of
the target sentence. Our simple mechanism ensures better representations in the
encoder and decoder without using any extra data or annotation. It is also
helpful in reducing the target side vocabulary and improving the decoding
efficiency. Experiments on Chinese-English and German-English machine
translation tasks show BLEU improvements of 4.53 and 1.3 points, respectively.
Comment: Accepted at EMNLP 201
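The word-prediction supervision described above can be sketched as a multi-label auxiliary loss that asks a single hidden vector to predict the bag of words of the target sentence. This is a minimal numpy sketch under that reading of the abstract; the function and parameter names are illustrative, not the paper's implementation:

```python
import numpy as np

def word_prediction_loss(hidden, W, target_ids, vocab_size):
    """Auxiliary supervision: require one hidden vector to predict which
    words appear in the target sentence (one independent sigmoid per word).

    hidden     : (d,) hidden state from the encoder or decoder
    W          : (d, vocab_size) prediction matrix (illustrative name)
    target_ids : indices of words that occur in the target sentence
    """
    logits = hidden @ W                       # (vocab_size,) word scores
    probs = 1.0 / (1.0 + np.exp(-logits))     # independent sigmoids

    targets = np.zeros(vocab_size)
    targets[target_ids] = 1.0                 # multi-hot "bag of words"

    # binary cross-entropy summed over the vocabulary; this gradient
    # flows directly into the hidden vector, bypassing the long
    # back-propagation-through-time path
    eps = 1e-9
    return -np.sum(targets * np.log(probs + eps)
                   + (1.0 - targets) * np.log(1.0 - probs + eps))
```

In training, a loss like this would be added to the ordinary translation loss, so the hidden states receive a direct error signal rather than one propagated through the whole recurrence.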
Revision in Continuous Space: Unsupervised Text Style Transfer without Adversarial Learning
Typical methods for unsupervised text style transfer often rely on two key
ingredients: 1) seeking the explicit disentanglement of the content and the
attributes, and 2) troublesome adversarial learning. In this paper, we show
that neither of these components is indispensable. We propose a new framework
that utilizes the gradients to revise the sentence in a continuous space during
inference to achieve text style transfer. Our method consists of three key
components: a variational auto-encoder (VAE), some attribute predictors (one
for each attribute), and a content predictor. The VAE and the two types of
predictors enable us to perform gradient-based optimization in the continuous
space, which is mapped from sentences in a discrete space, to find the
representation of a target sentence with the desired attributes and preserved
content. Moreover, the proposed method naturally has the ability to
simultaneously manipulate multiple fine-grained attributes, such as sentence
length and the presence of specific words, when performing text style transfer
tasks. Compared with previous adversarial learning based methods, the proposed
method is more interpretable, controllable and easier to train. Extensive
experimental studies on three popular text style transfer tasks show that the
proposed method significantly outperforms five state-of-the-art methods.
Comment: Association for the Advancement of Artificial Intelligence. AAAI 202
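The revision-in-continuous-space idea above can be illustrated with a toy gradient loop: starting from a latent code, descend on an attribute predictor's error while penalizing drift from the original code (a stand-in for content preservation). This is a simplified sketch with made-up names and toy predictors, not the paper's VAE pipeline:

```python
import numpy as np

def revise_latent(z0, attr_fn, attr_grad, target=1.0, lam=0.1,
                  steps=100, lr=0.2):
    """Gradient-based revision in a continuous space.

    Minimizes  L(z) = (attr_fn(z) - target)^2 + lam * ||z - z0||^2
    so the attribute predictor moves toward `target` while the code
    stays close to the original (a crude proxy for content preservation).
    attr_fn/attr_grad are toy differentiable predictors supplied by the
    caller; all names here are illustrative.
    """
    z = z0.copy()
    for _ in range(steps):
        g = (2.0 * (attr_fn(z) - target) * attr_grad(z)   # attribute term
             + 2.0 * lam * (z - z0))                       # proximity term
        z = z - lr * g
    return z
```

With several attribute predictors, their gradient terms could simply be summed, which is one way to read the paper's claim about manipulating multiple fine-grained attributes at once.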
Crosslingual Document Embedding as Reduced-Rank Ridge Regression
There has recently been much interest in extending vector-based word
representations to multiple languages, such that words can be compared across
languages. In this paper, we shift the focus from words to documents and
introduce a method for embedding documents written in any language into a
single, language-independent vector space. For training, our approach leverages
a multilingual corpus where the same concept is covered in multiple languages
(but not necessarily via exact translations), such as Wikipedia. Our method,
Cr5 (Crosslingual reduced-rank ridge regression), starts by training a
ridge-regression-based classifier that uses language-specific bag-of-words
features in order to predict the concept that a given document is about. We
show that, when constraining the learned weight matrix to be of low rank, it
can be factored to obtain the desired mappings from language-specific
bags-of-words to language-independent embeddings. As opposed to most prior
methods, which use pretrained monolingual word vectors, postprocess them to
make them crosslingual, and finally average word vectors to obtain document
vectors, Cr5 is trained end-to-end and is thus natively crosslingual as well as
document-level. Moreover, since our algorithm uses the singular value
decomposition as its core operation, it is highly scalable. Experiments show
that our method achieves state-of-the-art performance on a crosslingual
document retrieval task. Finally, although not trained for embedding sentences
and words, it also achieves competitive performance on crosslingual sentence
and word retrieval tasks.
Comment: In The Twelfth ACM International Conference on Web Search and Data
Mining (WSDM '19)
CausaLM: Causal Model Explanation Through Counterfactual Language Models
Understanding predictions made by deep neural networks is notoriously
difficult, but also crucial to their dissemination. Like all ML-based methods,
they are only as good as their training data, and can also capture unwanted biases.
While there are tools that can help understand whether such biases exist, they
do not distinguish between correlation and causation, and might be ill-suited
for text-based models and for reasoning about high level language concepts. A
key problem of estimating the causal effect of a concept of interest on a given
model is that this estimation requires the generation of counterfactual
examples, which is challenging with existing generation technology. To bridge
that gap, we propose CausaLM, a framework for producing causal model
explanations using counterfactual language representation models. Our approach
is based on fine-tuning of deep contextualized embedding models with auxiliary
adversarial tasks derived from the causal graph of the problem. Concretely, we
show that by carefully choosing auxiliary adversarial pre-training tasks,
language representation models such as BERT can effectively learn a
counterfactual representation for a given concept of interest, and be used to
estimate its true causal effect on model performance. A byproduct of our method
is a language representation model that is unaffected by the tested concept,
which can be useful in mitigating unwanted bias ingrained in the data.
Comment: Our code and data are available at
https://amirfeder.github.io/CausaLM/. Under review for the Computational
Linguistics journal
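Once a counterfactual representation exists, the causal-effect estimate itself is conceptually simple: compare the model's predictions on the original representations against those on representations from which the concept was adversarially removed. The sketch below shows only that final comparison step, with illustrative names; CausaLM's actual estimator (TReATE) compares class probabilities in this spirit but is defined over the full classification distribution:

```python
import numpy as np

def estimated_concept_effect(model, reps, reps_counterfactual):
    """Estimate a concept's effect on a classifier as the average shift
    in predictions between original representations and counterfactual
    ones (same texts, concept removed).

    model : callable mapping (n, d) representations to (n,) scores
    """
    p_original = model(reps)
    p_counterfactual = model(reps_counterfactual)
    # mean absolute prediction shift attributable to the concept
    return float(np.mean(np.abs(p_original - p_counterfactual)))
```

A value near zero would suggest the concept has little causal influence on this model's predictions; a large value flags a concept the model genuinely relies on.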
Generating Diverse Translation by Manipulating Multi-Head Attention
The Transformer model has been widely used for machine translation tasks and
has obtained state-of-the-art results. In this paper, we report an interesting
phenomenon in its encoder-decoder multi-head attention: different attention
heads of the final decoder layer align to different word translation
candidates. We empirically verify this discovery and propose a method to
generate diverse translations by manipulating heads. Furthermore, we make use
of these diverse translations with the back-translation technique for better
data augmentation. Experimental results show that our method generates diverse
translations without a severe drop in translation quality. Experiments also
show that back-translation with these diverse translations brings significant
performance improvements on translation tasks. An auxiliary experiment on a
conversational response generation task demonstrates the effect of diversity as well.
Comment: Accepted by AAAI 202
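The head manipulation described above can be pictured with a bare-bones multi-head attention in which selected heads are zeroed out, so the remaining heads' alignments dominate and steer the decoder toward alternative word choices. This is a simplified stand-in (no learned projections, numpy only); the head-masking strategy is an assumption about one way to "manipulate" heads, not the paper's exact procedure:

```python
import numpy as np

def multi_head_attention(Q, K, V, num_heads, masked_heads=()):
    """Scaled dot-product multi-head attention with optional head masking.

    Q, K, V      : (seq, d_model) arrays, split evenly across heads
    masked_heads : indices of heads whose output is zeroed, removing
                   that head's preferred alignment from the mix
    """
    seq, d_model = Q.shape
    d_head = d_model // num_heads
    out = np.zeros_like(Q)
    for h in range(num_heads):
        sl = slice(h * d_head, (h + 1) * d_head)
        if h in masked_heads:
            continue                              # drop this head entirely
        q, k, v = Q[:, sl], K[:, sl], V[:, sl]
        scores = q @ k.T / np.sqrt(d_head)        # (seq, seq) alignments
        scores -= scores.max(axis=-1, keepdims=True)
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)
        out[:, sl] = weights @ v
    return out
```

Decoding the same source several times while masking different heads would then yield a set of diverse hypotheses, which is the ingredient the back-translation experiments build on.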