Focus of Attention Improves Information Transfer in Visual Features
Unsupervised learning from continuous visual streams is a challenging problem
that cannot be naturally and efficiently managed in the classic batch-mode
setting of computation. The information stream must be carefully processed
according to an appropriate spatio-temporal distribution of the visual data,
whereas most learning approaches commonly assume a uniform probability density.
In this paper we focus on unsupervised learning for transferring visual
information in a truly online setting by using a computational model that is
inspired by the principle of least action in physics. The maximization of the
mutual information is carried out by a temporal process which yields online
estimation of the entropy terms. The model, which is based on second-order
differential equations, maximizes the information transfer from the input to a
discrete space of symbols related to the visual features of the input, whose
computation is supported by hidden neurons. In order to better structure the
input probability distribution, we use a human-like focus of attention model
that, coherently with the information maximization model, is also based on
second-order differential equations. We provide experimental results to support
the theory by showing that the spatio-temporal filtering induced by the focus
of attention allows the system to globally transfer more information from the
input stream over the focused areas and, in some contexts, over the whole
frame, compared with the unfiltered case, which yields uniform probability
distributions.
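The objective described above, maximizing the information transferred from the input to a discrete space of symbols, rests on the standard decomposition I(X;Y) = H(Y) - H(Y|X), estimated from the model's per-sample distributions over symbols. Below is a minimal numpy sketch of just that estimate; it assumes softmax-style outputs over discrete symbols and omits the paper's second-order temporal dynamics entirely (all names are hypothetical).

```python
import numpy as np

def mutual_information(p_y_given_x):
    """Estimate I(X;Y) = H(Y) - H(Y|X) from per-sample softmax
    distributions over discrete symbols.
    p_y_given_x: array of shape (n_samples, n_symbols), rows sum to 1."""
    eps = 1e-12
    p_y = p_y_given_x.mean(axis=0)                  # marginal over symbols
    h_y = -np.sum(p_y * np.log(p_y + eps))          # entropy of the marginal
    h_y_given_x = -np.mean(
        np.sum(p_y_given_x * np.log(p_y_given_x + eps), axis=1)
    )                                               # mean conditional entropy
    return h_y - h_y_given_x

# Sharply-peaked, diverse symbol assignments give high mutual
# information; the maximum is log(n_symbols).
peaked = np.eye(4)
print(mutual_information(peaked))  # close to log(4) ≈ 1.386
```

Intuitively, the focus of attention reshapes which samples enter this estimate, concentrating probability mass on the attended regions instead of a uniform density over the frame.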
Crossmodal Attentive Skill Learner
This paper presents the Crossmodal Attentive Skill Learner (CASL), integrated
with the recently-introduced Asynchronous Advantage Option-Critic (A2OC)
architecture [Harb et al., 2017] to enable hierarchical reinforcement learning
across multiple sensory inputs. We provide concrete examples where the approach
not only improves performance in a single task, but accelerates transfer to new
tasks. We demonstrate that the attention mechanism anticipates and identifies useful
latent features, while filtering irrelevant sensor modalities during execution.
We modify the Arcade Learning Environment [Bellemare et al., 2013] to support
audio queries, and conduct evaluations of crossmodal learning in the Atari 2600
game Amidar. Finally, building on the recent work of Babaeizadeh et al. [2017],
we open-source a fast hybrid CPU-GPU implementation of CASL.

Comment: International Conference on Autonomous Agents and Multiagent Systems
(AAMAS) 2018, NIPS 2017 Deep Reinforcement Learning Symposium
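The crossmodal attention idea, weighting sensory inputs so that irrelevant modalities are filtered during execution, can be sketched as a softmax over per-modality relevance scores. This is a toy stand-in, not the CASL architecture: the scoring vector and fusion rule here are hypothetical simplifications.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def crossmodal_attention(modality_feats, score_w):
    """Fuse per-modality feature vectors with attention weights,
    down-weighting irrelevant sensors.
    modality_feats: list of (d,) arrays, one per modality (e.g. video, audio).
    score_w: (d,) scoring vector (a stand-in for a learned scorer)."""
    feats = np.stack(modality_feats)   # (n_modalities, d)
    scores = feats @ score_w           # one relevance scalar per modality
    weights = softmax(scores)          # normalized attention over modalities
    fused = weights @ feats            # weighted sum of modality features
    return fused, weights

# A silent audio channel (zeros) receives a low attention weight.
fused, w = crossmodal_attention([np.ones(3), np.zeros(3)], np.ones(3))
```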
Aligned Image-Word Representations Improve Inductive Transfer Across Vision-Language Tasks
An important goal of computer vision is to build systems that learn visual
representations over time that can be applied to many tasks. In this paper, we
investigate a vision-language embedding as a core representation and show that
it leads to better cross-task transfer than standard multi-task learning. In
particular, the task of visual recognition is aligned to the task of visual
question answering by forcing each to use the same word-region embeddings. We
show this leads to greater inductive transfer from recognition to VQA than
standard multitask learning. Visual recognition also improves, especially for
categories that have relatively few recognition training labels but appear
often in the VQA setting. Thus, our paper takes a small step towards creating
more general vision systems by showing the benefit of interpretable, flexible,
and trainable core representations.

Comment: Accepted at ICCV 2017. The arXiv version has an extra analysis on
correlation with human attention
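The alignment described above, forcing recognition and VQA to score image regions against the same word embeddings, can be illustrated with a shared cosine-similarity table over a common word space. This is a hypothetical sketch of the shared-embedding idea, not the paper's model; both "heads" read from the same word-region scores.

```python
import numpy as np

def word_region_scores(regions, word_embs):
    """Cosine similarity between image-region features and word embeddings.
    Recognition and VQA both score against the SAME word space, which is
    the alignment idea; shapes: (n_regions, d) x (n_words, d) -> (n_regions, n_words)."""
    r = regions / np.linalg.norm(regions, axis=1, keepdims=True)
    w = word_embs / np.linalg.norm(word_embs, axis=1, keepdims=True)
    return r @ w.T

def recognition_logits(regions, class_word_embs):
    """Toy recognition head: a class's score is its best-matching region,
    computed in the shared word-region space a VQA head would also use."""
    return word_region_scores(regions, class_word_embs).max(axis=0)
```

Because both tasks share the word-region space, gradients from plentiful VQA examples can refine embeddings for categories with few recognition labels, which is the inductive-transfer effect the abstract reports.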
Supervised and Unsupervised Transfer Learning for Question Answering
Although transfer learning has been shown to be successful for tasks like
object and speech recognition, its applicability to question answering (QA) has
yet to be well-studied. In this paper, we conduct extensive experiments to
investigate the transferability of knowledge learned from a source QA dataset
to a target dataset using two QA models. The performance of both models on a
TOEFL listening comprehension test (Tseng et al., 2016) and MCTest (Richardson
et al., 2013) is significantly improved via a simple transfer learning
technique from MovieQA (Tapaswi et al., 2016). In particular, one of the models
achieves the state-of-the-art on all target datasets; for the TOEFL listening
comprehension test, it outperforms the previous best model by 7%. Finally, we
show that transfer learning is helpful even in unsupervised scenarios when
correct answers for target QA dataset examples are not available.

Comment: To appear in NAACL HLT 2018 (long paper)
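The "simple transfer learning technique" referred to above is the pretrain-then-fine-tune recipe: train on the large source dataset, then continue training from those weights on the small target dataset. A minimal toy sketch, with logistic regression standing in for the QA models (the data, rule, and hyperparameters are all hypothetical):

```python
import numpy as np

def train(X, y, w=None, lr=0.1, steps=200):
    """Logistic regression by gradient descent. Passing a pretrained w
    seeds the weights, so this same function also performs fine-tuning."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)        # gradient step on log-loss
    return w

rng = np.random.default_rng(0)
# Source task: plentiful labeled data (the MovieQA role in the abstract).
Xs = rng.normal(size=(500, 5)); ys = (Xs @ np.ones(5) > 0).astype(float)
# Target task: same kind of decision rule, but only a few labels.
Xt = rng.normal(size=(20, 5));  yt = (Xt @ np.ones(5) > 0).astype(float)

w_src = train(Xs, ys)                            # pretrain on source
w_ft = train(Xt, yt, w=w_src.copy(), steps=20)   # fine-tune on target
```

In the unsupervised setting the abstract mentions, the fine-tuning labels would be unavailable, so only the pretrained weights (or self-generated targets) carry over.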