Neural Machine Translation by Generating Multiple Linguistic Factors
Factored neural machine translation (FNMT) is founded on the idea of using
the morphological and grammatical decomposition of the words (factors) at the
output side of the neural network. This architecture addresses two well-known
problems occurring in MT, namely the size of target language vocabulary and the
number of unknown tokens produced in the translation. The FNMT system is
designed to manage a larger vocabulary and to reduce training time (for systems
with an equivalent target-language vocabulary size). Moreover, it can produce
grammatically correct words that are not part of the vocabulary. The FNMT model is
evaluated on IWSLT'15 English to French task and compared to the baseline
word-based and BPE-based NMT systems. Promising qualitative and quantitative
results (in terms of BLEU and METEOR) are reported.
Comment: 11 pages, 3 figures, SLSP conference
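The factored output idea above can be illustrated with a toy sketch: instead of one large softmax over surface words, the model predicts a lemma and a morphological factor, and a generator composes the surface form. The lemma/factor inventory and the generator mapping here are illustrative assumptions, not the paper's code.

```python
# A minimal sketch of factored output generation (hypothetical inventory).
lemmas = ["manger", "parler", "finir"]            # lemma sub-vocabulary
factors = ["V-inf", "V-pres-1sg", "V-past-part"]  # morphological factor tags

# Toy morphological generator mapping (lemma, factor) -> surface form.
GENERATE = {
    ("manger", "V-inf"): "manger",
    ("manger", "V-pres-1sg"): "mange",
    ("manger", "V-past-part"): "mangé",
}

def realize(lemma: str, factor: str) -> str:
    """Compose a surface word from a predicted lemma and factor tag."""
    return GENERATE.get((lemma, factor), "<unk>")

# Two small output layers (len(lemmas) + len(factors) units) can cover up to
# len(lemmas) * len(factors) surface forms, shrinking the output softmax.
print(realize("manger", "V-past-part"))  # mangé
```

Because the generator can realize any (lemma, factor) pair it knows, the system can emit grammatically correct forms that never appeared as whole words in the training vocabulary.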
A Multiplicative Model for Learning Distributed Text-Based Attribute Representations
In this paper we propose a general framework for learning distributed
representations of attributes: characteristics of text whose representations
can be jointly learned with word embeddings. Attributes can correspond to
document indicators (to learn sentence vectors), language indicators (to learn
distributed language representations), meta-data and side information (such as
the age, gender and industry of a blogger) or representations of authors. We
describe a third-order model where word context and attribute vectors interact
multiplicatively to predict the next word in a sequence. This leads to the
notion of conditional word similarity: how meanings of words change when
conditioned on different attributes. We perform several experimental tasks
including sentiment classification, cross-lingual document classification, and
blog authorship attribution. We also qualitatively evaluate conditional word
neighbours and attribute-conditioned text generation.
Comment: 11 pages. An earlier version was accepted to the ICML-2014 Workshop
on Knowledge-Powered Deep Learning for Text Mining
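The third-order model described above can be sketched in a few lines of numpy: context and attribute vectors are projected into a shared factor space and combined elementwise to score the next word, so the attribute gates which context features matter. All dimensions and weights below are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of a factored third-order (multiplicative) model.
import numpy as np

rng = np.random.default_rng(0)
d_word, d_attr, d_factor, vocab = 8, 4, 6, 10
W_c = rng.standard_normal((d_factor, d_word))  # context projection
W_a = rng.standard_normal((d_factor, d_attr))  # attribute projection
W_o = rng.standard_normal((vocab, d_factor))   # output (vocabulary) projection

def next_word_logits(context_vec, attr_vec):
    # Multiplicative interaction: the attribute vector gates the context
    # features, making next-word predictions (and hence word similarity)
    # attribute-conditional.
    h = (W_c @ context_vec) * (W_a @ attr_vec)
    return W_o @ h

logits = next_word_logits(rng.standard_normal(d_word),
                          rng.standard_normal(d_attr))
print(logits.shape)  # (10,)
```

Conditional word similarity falls out of this structure: comparing the gated representations of the same word under two different attribute vectors shows how its effective meaning shifts.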
Generating High-Quality Surface Realizations Using Data Augmentation and Factored Sequence Models
This work presents a new state of the art in reconstruction of surface
realizations from obfuscated text. We identify the lack of sufficient training
data as the major obstacle to training high-performing models, and solve this
issue by generating large amounts of synthetic training data. We also propose
preprocessing techniques which make the structure contained in the input
features more accessible to sequence models. Our models were ranked first on
all evaluation metrics in the English portion of the 2018 Surface Realization
shared task.
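Since surface realization input is, roughly, a sentence with its word order removed, synthetic training data can be manufactured from any plain corpus by applying the same obfuscation. The sketch below is an assumption about that general recipe, not the authors' exact pipeline.

```python
# Hypothetical sketch: build a synthetic training pair by obfuscating a
# corpus sentence (shuffling tokens) and keeping the original as the target.
import random

def make_synthetic_pair(sentence: str, seed: int = 0):
    tokens = sentence.split()
    shuffled = tokens[:]
    random.Random(seed).shuffle(shuffled)
    return " ".join(shuffled), sentence  # (obfuscated input, target output)

src, tgt = make_synthetic_pair("the cat sat on the mat")
print(tgt)  # the cat sat on the mat
```

Running this over a large monolingual corpus yields unlimited input/output pairs at no annotation cost, which is what lets the sequence models train past the limits of the official shared-task data.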
Compositional Morphology for Word Representations and Language Modelling
This paper presents a scalable method for integrating compositional
morphological representations into a vector-based probabilistic language model.
Our approach is evaluated in the context of log-bilinear language models,
rendered suitably efficient for implementation inside a machine translation
decoder by factoring the vocabulary. We perform both intrinsic and extrinsic
evaluations, presenting results on a range of languages which demonstrate that
our model learns morphological representations that both perform well on word
similarity tasks and lead to substantial reductions in perplexity. When used
for translation into morphologically rich languages with large vocabularies,
our models obtain improvements of up to 1.2 BLEU points relative to a baseline
system using back-off n-gram models.
Comment: Proceedings of the 31st International Conference on Machine Learning
(ICML)
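The compositional idea above can be sketched simply: a word's vector is built from the vectors of its morphemes, here by addition over an assumed segmentation. The embeddings and segmentations are illustrative, not learned parameters from the paper.

```python
# Minimal sketch: compose word representations from morpheme vectors.
import numpy as np

rng = np.random.default_rng(1)
dim = 5
MORPHEME_VECS = {m: rng.standard_normal(dim)
                 for m in ["un", "break", "able", "imagin"]}

def word_vector(segmentation):
    """Additively compose a word vector from its morpheme vectors."""
    return sum(MORPHEME_VECS[m] for m in segmentation)

v_unbreakable = word_vector(["un", "break", "able"])
v_unimaginable = word_vector(["un", "imagin", "able"])
print(v_unbreakable.shape)  # (5,)
```

Sharing morpheme vectors across words is what makes the approach scale to morphologically rich languages: rare inflected forms inherit most of their representation from frequent morphemes, and the factored vocabulary keeps the model fast enough to sit inside a translation decoder.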
Character-Aware Neural Language Models
We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information.
Comment: AAAI 2016
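The character-composition pipeline described above (character embeddings, a convolution over characters, max-over-time pooling, then a highway layer) can be sketched in numpy. All sizes and weights are illustrative assumptions; the resulting word embedding would then feed the LSTM language model.

```python
# Minimal sketch: char embeddings -> 1-D conv -> max pooling -> highway layer.
import numpy as np

rng = np.random.default_rng(2)
d_char, n_filters, width = 4, 6, 3
CHAR_EMB = {c: rng.standard_normal(d_char) for c in "abcdefghijklmnopqrstuvwxyz"}
F = rng.standard_normal((n_filters, width * d_char))  # convolution filters
W_t, b_t = rng.standard_normal((n_filters, n_filters)), rng.standard_normal(n_filters)
W_h, b_h = rng.standard_normal((n_filters, n_filters)), rng.standard_normal(n_filters)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def char_word_embedding(word):
    """Word embedding built purely from the word's characters."""
    X = np.stack([CHAR_EMB[c] for c in word])           # (len(word), d_char)
    windows = [X[i:i + width].ravel()                   # sliding char windows
               for i in range(len(word) - width + 1)]
    y = np.tanh(np.stack(windows) @ F.T).max(axis=0)    # max-over-time pooling
    t = sigmoid(W_t @ y + b_t)                          # highway transform gate
    return t * np.tanh(W_h @ y + b_h) + (1.0 - t) * y   # highway output

z = char_word_embedding("characters")
print(z.shape)  # (6,)
```

Because every word embedding is derived from shared character parameters rather than a word lookup table, the parameter count no longer grows with vocabulary size, which is where the reported 60% reduction comes from.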