Attentional Encoder Network for Targeted Sentiment Classification
Targeted sentiment classification aims to determine the sentiment polarity expressed towards specific targets. Most previous approaches model context and target words with RNNs and attention. However, RNNs are difficult to parallelize, and truncated backpropagation through time makes it hard to capture long-term patterns. To address this issue, this paper proposes an Attentional Encoder Network (AEN) that eschews recurrence and employs attention-based encoders to model the interaction between context and target. We raise the issue of label unreliability and introduce label smoothing regularization. We also apply pre-trained BERT to this task and obtain new state-of-the-art results. Experiments and analysis demonstrate that our model is both effective and lightweight.
Comment: 7 pages
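As a rough illustration of the label smoothing regularization mentioned in this abstract, the sketch below gives a generic PyTorch formulation; the smoothing weight eps=0.1 is an assumed value, and this is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def label_smoothed_nll(logits, target, eps=0.1):
    # Label smoothing regularization: mix the one-hot target distribution
    # with a uniform prior u(k) = 1/K, giving
    #   (1 - eps) * NLL(target) + eps * H(u, p)  (up to an additive constant).
    # eps=0.1 is illustrative, not a value taken from the paper.
    log_probs = F.log_softmax(logits, dim=-1)            # (batch, K)
    nll = -log_probs.gather(-1, target.unsqueeze(-1)).squeeze(-1)
    uniform = -log_probs.mean(dim=-1)                    # cross-entropy to u
    return ((1.0 - eps) * nll + eps * uniform).mean()
```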
Selective Attention for Context-aware Neural Machine Translation
Despite the progress made in sentence-level NMT, current systems still fall short of producing fluent, high-quality translations of full documents. Recent work in context-aware NMT considers only a few previous sentences as context and may not scale to entire documents. To address this, we propose a novel and scalable top-down approach to hierarchical attention for context-aware NMT, which uses sparse attention to selectively focus on relevant sentences in the document context and then attends to key words in those sentences. We also propose single-level attention approaches based on sentence- or word-level information in the context. The document-level context representation produced by these attention modules is integrated into the encoder or decoder of the Transformer model, depending on whether we use monolingual or bilingual context. Our experiments and evaluation on English-German datasets in different document MT settings show that our selective attention approach not only significantly outperforms context-agnostic baselines but also surpasses context-aware baselines in most cases.
Comment: Accepted at NAACL-HLT 2019
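The sketch below illustrates the two-level idea described above: score sentences first, then attend to words inside the selected sentences. Top-k selection is used here only as a simple stand-in for the paper's sparse attention (e.g. sparsemax), and all tensor shapes and k=2 are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def hierarchical_context(query, sent_keys, word_keys, word_values, k=2):
    """Two-level attention over document context.

    query:       (d,)       state for the current sentence
    sent_keys:   (S, d)     one key per context sentence
    word_keys:   (S, W, d)  word keys per sentence
    word_values: (S, W, d)  word values per sentence
    """
    sent_scores = sent_keys @ query                      # (S,)
    topk = torch.topk(sent_scores, k=min(k, sent_scores.numel()))
    sent_weights = F.softmax(topk.values, dim=-1)        # renormalize over top-k
    context = torch.zeros_like(query)
    for w, idx in zip(sent_weights, topk.indices):
        word_scores = word_keys[idx] @ query             # (W,)
        word_weights = F.softmax(word_scores, dim=-1)
        context = context + w * (word_weights @ word_values[idx])
    return context
```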
Compositional Morphology for Word Representations and Language Modelling
This paper presents a scalable method for integrating compositional
morphological representations into a vector-based probabilistic language model.
Our approach is evaluated in the context of log-bilinear language models,
rendered suitably efficient for implementation inside a machine translation
decoder by factoring the vocabulary. We perform both intrinsic and extrinsic
evaluations, presenting results on a range of languages which demonstrate that
our model learns morphological representations that both perform well on word
similarity tasks and lead to substantial reductions in perplexity. When used
for translation into morphologically rich languages with large vocabularies,
our models obtain improvements of up to 1.2 BLEU points relative to a baseline
system using back-off n-gram models.
Comment: Proceedings of the 31st International Conference on Machine Learning (ICML 2014)
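A minimal sketch of the additive morphological composition this abstract describes: a word vector is built by summing the vectors of its morphemes. The padded-id interface and the assumption of a pre-computed segmentation (e.g. from an unsupervised segmenter) are illustrative choices, and the log-bilinear scoring layer around this module is omitted.

```python
import torch
import torch.nn as nn

class AdditiveMorphEmbedding(nn.Module):
    """Compose a word vector as the sum of its morpheme vectors."""

    def __init__(self, num_morphemes, dim):
        super().__init__()
        self.morph = nn.Embedding(num_morphemes, dim)

    def forward(self, morpheme_ids, mask):
        # morpheme_ids: (batch, M) padded morpheme ids
        # mask:         (batch, M) float 0/1, zeroing the padding positions
        vecs = self.morph(morpheme_ids)                  # (batch, M, dim)
        return (vecs * mask.unsqueeze(-1)).sum(dim=1)    # (batch, dim)
```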