CNNs found to jump around more skillfully than RNNs: Compositional generalization in seq2seq convolutional networks
Lake and Baroni (2018) introduced the SCAN dataset, which probes the ability of
seq2seq models to capture compositional generalizations, such as inferring the
meaning of "jump around" zero-shot from the component words. Recurrent networks
(RNNs) were found to fail completely on the most challenging generalization
cases. We test a convolutional network (CNN) on these tasks, reporting greatly
improved performance with respect to RNNs. Despite the large improvement,
however, the CNN has not induced systematic rules, suggesting that the
difference between compositional and non-compositional behaviour is not
clear-cut.

Comment: accepted as a short paper at ACL 201
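The kind of composition SCAN probes can be illustrated with a toy interpreter for a simplified subset of the command language. The grammar, action names, and modifier semantics below follow SCAN's conventions but are a reduced sketch, not the full dataset specification:

```python
# Toy sketch of SCAN-style compositional commands (simplified subset;
# the real SCAN grammar is richer -- this only illustrates the idea).

PRIMITIVES = {"jump": ["JUMP"], "walk": ["WALK"], "run": ["RUN"]}
TURNS = {"left": "LTURN", "right": "RTURN"}

def interpret(command):
    """Map a command string to its action sequence compositionally."""
    words = command.split()
    actions = PRIMITIVES[words[0]][:]
    i = 1
    while i < len(words):
        w = words[i]
        if w == "around":   # "X around DIR": turn then X, four times
            turn = TURNS[words[i + 1]]
            actions = ([turn] + actions) * 4
            i += 2
        elif w == "twice":
            actions = actions * 2
            i += 1
        elif w == "thrice":
            actions = actions * 3
            i += 1
        else:
            raise ValueError(f"unknown modifier: {w}")
    return actions

# A model that has seen "walk around left" and "jump" in isolation is
# probed on "jump around left" zero-shot:
print(interpret("jump around left"))
# ['LTURN', 'JUMP', 'LTURN', 'JUMP', 'LTURN', 'JUMP', 'LTURN', 'JUMP']
```

The challenge for a seq2seq model is to produce this output for "jump around left" without ever having seen "jump" combined with "around" during training.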
Combining Language and Vision with a Multimodal Skip-gram Model
We extend the SKIP-GRAM model of Mikolov et al. (2013a) by taking visual
information into account. Like SKIP-GRAM, our multimodal models (MMSKIP-GRAM)
build vector-based word representations by learning to predict linguistic
contexts in text corpora. However, for a restricted set of words, the models
are also exposed to visual representations of the objects they denote
(extracted from natural images), and must predict linguistic and visual
features jointly. The MMSKIP-GRAM models achieve good performance on a variety
of semantic benchmarks. Moreover, since they propagate visual information to
all words, we use them to improve image labeling and retrieval in the zero-shot
setup, where the test concepts are never seen during model training. Finally,
the MMSKIP-GRAM models discover intriguing visual properties of abstract words,
paving the way to realistic implementations of embodied theories of meaning.

Comment: accepted at NAACL 2015, camera-ready version, 11 pages
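One way to make the multimodal pressure concrete is a max-margin term that, for grounded words, pushes the word embedding closer (in cosine similarity) to the visual feature vector of its object than to that of a random other object. This is a simplified stand-in for the paper's actual objective; all vectors, dimensions, and the crude update rule below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Word embedding being trained, plus a fixed visual feature vector for
# the object the word denotes (e.g. CNN features of images of that
# object) and one for a randomly sampled distractor object.
word_vec = rng.normal(size=dim)
visual_vec = rng.normal(size=dim)
negative_vec = rng.normal(size=dim)

def max_margin_visual_loss(w, v_pos, v_neg, margin=0.5):
    """Hinge loss: the word vector should be closer (in cosine) to the
    visual vector of its own object than to a distractor's, by a margin."""
    return max(0.0, margin - cosine(w, v_pos) + cosine(w, v_neg))

loss_before = max_margin_visual_loss(word_vec, visual_vec, negative_vec)
for _ in range(100):
    # Crude update in place of the true cosine gradient: move the word
    # vector toward its visual vector and away from the distractor's.
    word_vec += 0.05 * (visual_vec - negative_vec)
loss_after = max_margin_visual_loss(word_vec, visual_vec, negative_vec)
```

In the full model this term would be added to the ordinary skip-gram context-prediction loss, so that visual information shapes the same vectors used for linguistic prediction.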