Learning to Remember, Forget and Ignore using Attention Control in Memory
Typical neural networks with external memory do not effectively separate
capacity for episodic and working memory as is required for reasoning in
humans. Applying knowledge gained from psychological studies, we designed a new model, the Differentiable Working Memory (DWM), to specifically emulate human working memory. Because it exhibits the same functional characteristics as working memory, it robustly learns psychology-inspired tasks and converges
faster than comparable state-of-the-art models. Moreover, the DWM model
successfully generalizes to sequences two orders of magnitude longer than the
ones used in training. Our in-depth analysis shows that the behavior of DWM is
interpretable and that it learns to have fine control over memory, allowing it
to retain, ignore or forget information based on its relevance.
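The analysis above attributes DWM's behavior to fine control over when memory is retained, ignored, or forgotten. As a rough, hypothetical illustration of such gated, attention-addressed writing (not the paper's actual equations; the function name, shapes, and gating scheme are assumptions), a minimal sketch might look like:

import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def dwm_style_write(memory, write_vec, address_logits, gate):
    # memory:         (slots, width) external memory matrix
    # write_vec:      (width,) candidate content for this time step
    # address_logits: (slots,) attention scores over memory slots
    # gate:           scalar in [0, 1]; ~1 performs the write (remember),
    #                 ~0 skips it entirely (ignore)
    attn = softmax(address_logits)                 # soft addressing over slots
    erase = gate * attn[:, None]                   # per-slot fraction to overwrite (forget)
    memory = memory * (1.0 - erase)                # attenuate old content where we write
    memory = memory + erase * write_vec[None, :]   # blend in the new content
    return memory

# Usage: write a vector mostly into slot 0, leaving the other slots untouched.
mem = np.zeros((4, 3))
mem = dwm_style_write(mem, np.array([1.0, 2.0, 3.0]),
                      np.array([5.0, 0.0, 0.0, 0.0]), gate=1.0)
print(mem.round(2))

In this sketch the scalar gate decides whether anything is written at all (ignore versus remember), while the soft address decides which slot's old content is attenuated (forgotten) in favor of the new content.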
Transfer Learning in Visual and Relational Reasoning
Transfer learning has become the de facto standard in computer vision and
natural language processing, especially where labeled data is scarce. Accuracy
can be significantly improved by using pre-trained models and subsequent
fine-tuning. In visual reasoning tasks, such as image question answering,
transfer learning is more complex. In addition to transferring the capability
to recognize visual features, we also expect to transfer the system's ability
to reason. Moreover, for video data, temporal reasoning adds another dimension.
In this work, we formalize these unique aspects of transfer learning and
propose a theoretical framework for visual reasoning, exemplified by the
well-established CLEVR and COG datasets. Furthermore, we introduce a new,
end-to-end differentiable recurrent model (SAMNet), which achieves state-of-the-art accuracy and better transfer-learning performance on both datasets. The improved performance of SAMNet stems from its ability to decouple abstract multi-step reasoning from the length of the sequence, and from its selective attention, which stores only the question-relevant objects in external memory.
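The abstract credits SAMNet's transfer ability in part to selective attention that stores only question-relevant objects in external memory. A minimal, hypothetical sketch of that idea (not SAMNet's actual architecture; the relevance score, threshold, and list-based memory are assumptions) could be:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def selective_write(object_feats, question_emb, memory, threshold=0.5):
    # object_feats: (n_objects, d) feature vectors for detected objects
    # question_emb: (d,) embedding of the question
    # memory:       list used as an append-only external memory
    # Only objects whose relevance to the question exceeds the threshold are stored.
    relevance = sigmoid(object_feats @ question_emb)    # per-object relevance score
    for obj, score in zip(object_feats, relevance):
        if score > threshold:                           # skip question-irrelevant objects
            memory.append(obj)
    return memory

# Usage: two of the three objects align with the question embedding and are stored.
question = np.array([1.0, 0.0, 0.0])
objects = np.array([[2.0, 0.1, 0.0],    # relevant
                    [-3.0, 0.0, 0.5],   # irrelevant
                    [1.5, -0.2, 0.1]])  # relevant
stored = selective_write(objects, question, memory=[])
print(len(stored))  # 2

Filtering writes by relevance keeps the external memory small and independent of sequence length, which is the intuition behind decoupling multi-step reasoning from how long the input is.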