A Correlational Encoder Decoder Architecture for Pivot Based Sequence Generation
Interlingua based Machine Translation (MT) aims to encode multiple languages
into a common linguistic representation and then decode sentences in multiple
target languages from this representation. In this work we explore this idea in
the context of neural encoder decoder architectures, albeit on a smaller scale
and without MT as the end goal. Specifically, we consider the case of three
languages or modalities X, Z and Y wherein we are interested in generating
sequences in Y starting from information available in X. However, there is no
parallel training data available between X and Y, but training data is
available between X & Z and Z & Y (as is often the case in many real-world
applications). Z thus acts as a pivot/bridge. An obvious solution, perhaps
less elegant but very effective in practice, is to train a two-stage model
which first converts from X to Z and then from Z to Y. Instead, we explore
an interlingua-inspired solution which jointly learns to do the following:
(i) encode X and Z to a common representation and (ii) decode Y from this common
representation. We evaluate our model on two tasks: (i) bridge transliteration
and (ii) bridge captioning. We report promising results in both these
applications and believe that this is a step in the right direction towards
truly interlingua-inspired encoder-decoder architectures.
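The contrast between the two-stage pipeline and the jointly trained model lends itself to a short sketch. Below is a minimal PyTorch-style illustration of the joint variant, assuming GRU encoders and decoder and a simple squared-distance alignment term; all class names, dimensions, and the unweighted loss sum are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch (assumptions noted above): a pivot-based correlational
# encoder-decoder. X and Z are encoded into one shared space; Y is
# decoded from that space, so at test time Y can be generated from X
# alone even though no X-Y parallel data was ever seen.
import torch
import torch.nn as nn

class SeqEncoder(nn.Module):
    """GRU encoder mapping a token sequence to a fixed vector."""
    def __init__(self, vocab, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)

    def forward(self, tokens):               # tokens: (batch, time)
        _, h = self.rnn(self.emb(tokens))
        return h.squeeze(0)                  # (batch, hidden)

class SeqDecoder(nn.Module):
    """GRU decoder conditioned on the shared representation."""
    def __init__(self, vocab, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, rep, targets):         # teacher-forced decoding
        out, _ = self.rnn(self.emb(targets), rep.unsqueeze(0))
        return self.out(out)                 # (batch, time, vocab)

enc_x, enc_z = SeqEncoder(vocab=5000), SeqEncoder(vocab=5000)
dec_y = SeqDecoder(vocab=5000)

def joint_loss(x, z_for_x, z, y_for_z):
    # (i) pull enc_x(x) and enc_z(z) together on X-Z pairs ...
    align = (enc_x(x) - enc_z(z_for_x)).pow(2).sum(1).mean()
    # ... and (ii) learn to decode Y from the shared space on Z-Y pairs.
    logits = dec_y(enc_z(z), y_for_z[:, :-1])
    xent = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), y_for_z[:, 1:].reshape(-1))
    return xent + align
```

At test time, sequences in Y would be decoded from enc_x(X) directly, with Z never observed.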
Character-level Transformer-based Neural Machine Translation
Neural machine translation (NMT) is nowadays commonly applied at the subword
level, using byte-pair encoding. A promising alternative approach focuses on
character-level translation, which simplifies processing pipelines in NMT
considerably. This approach, however, must consider relatively longer
sequences, rendering the training process prohibitively expensive. In this
paper, we discuss a novel Transformer-based approach, which we compare, in both
speed and quality, to the Transformer at the subword and character levels, as
well as to previously developed character-level models. We evaluate our models on
4 language pairs from WMT'15: DE-EN, CS-EN, FI-EN and RU-EN. The proposed novel
architecture can be trained on a single GPU and is 34% faster than the
character-level Transformer; still, the obtained results are at least on par
with it. In addition, our proposed model outperforms the subword-level model on
FI-EN and achieves close results on CS-EN. To stimulate further research in this
area and to close the gap with subword-level NMT, we make all our code and models
publicly available.
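The cost argument against character-level modelling can be made concrete with a back-of-the-envelope Python sketch. The example sentence, the whitespace split standing in for a BPE segmentation, and the quadratic-attention estimate are illustrative assumptions, not figures from the paper.

```python
# Rough illustration: character-level inputs are several times longer
# than subword inputs, and self-attention cost grows with the square
# of sequence length.
sentence = "Die Katze sitzt auf der Matte."

char_tokens = list(sentence)       # character-level: one symbol per character
subword_tokens = sentence.split()  # crude stand-in for a BPE segmentation

n_char, n_sub = len(char_tokens), len(subword_tokens)
ratio = n_char / n_sub
print(f"characters: {n_char} tokens, subwords (approx.): {n_sub} tokens")
print(f"length ratio ~{ratio:.1f}x -> attention cost ~{ratio ** 2:.0f}x")
```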
Bridge Correlational Neural Networks for Multilingual Multimodal Representation Learning
Recently there has been a lot of interest in learning common representations
for multiple views of data. Typically, such common representations are learned
using a parallel corpus between the two views (say, 1M images and their English
captions). In this work, we address a real-world scenario where no direct
parallel data is available between two views of interest (say, V1 and V2)
but parallel data is available between each of these views and a pivot view
(V3). We propose a model for learning a common representation for V1, V2
and V3 using only the parallel data available between V1-V3 and V2-V3. The
proposed model is generic and even works when there are n views of interest
and only one pivot view which acts as a bridge between them. There are two
specific downstream applications that we focus on: (i) transfer learning
between languages L1, L2, ..., Ln using a pivot language L and (ii)
cross-modal access between images I and a language L1 using a pivot language
L2. Our model achieves state-of-the-art performance in multilingual document
classification on the publicly available multilingual TED corpus and promising
results in multilingual multimodal retrieval on a new dataset created and
released as a part of this work.
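The bridge idea reduces to three view-specific encoders trained into one common space using only view-pivot pairs. The PyTorch sketch below is a minimal illustration; the feature dimensions are hypothetical and a simple squared-distance term stands in for the paper's correlation-based objective.

```python
# Minimal sketch (assumptions noted above): V1 and V2 are never paired,
# yet their representations align because both are pulled toward the
# pivot view V3 in the common space.
import torch
import torch.nn as nn

def make_encoder(in_dim, common_dim=128):
    return nn.Sequential(nn.Linear(in_dim, common_dim), nn.Tanh())

enc_v1 = make_encoder(in_dim=300)    # e.g. language-1 text features
enc_v2 = make_encoder(in_dim=2048)   # e.g. image features
enc_v3 = make_encoder(in_dim=300)    # pivot view, e.g. language-2 text

def bridge_loss(v1, v3_for_v1, v2, v3_for_v2):
    # Only V1-V3 and V2-V3 parallel pairs are required.
    l13 = (enc_v1(v1) - enc_v3(v3_for_v1)).pow(2).sum(1).mean()
    l23 = (enc_v2(v2) - enc_v3(v3_for_v2)).pow(2).sum(1).mean()
    return l13 + l23

# Toy batch: cross-modal retrieval would later compare enc_v1(.) and
# enc_v2(.) outputs directly, e.g. via cosine similarity.
loss = bridge_loss(torch.randn(4, 300), torch.randn(4, 300),
                   torch.randn(4, 2048), torch.randn(4, 300))
print(float(loss))
```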