Learning Visual Reasoning Without Strong Priors
Achieving artificial visual reasoning - the ability to answer image-related
questions that require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively.
Comment: Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/fil
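
To illustrate the conditioning mechanism described above, the sketch below shows a conditional batch normalization layer in which a question embedding predicts per-channel scale and shift parameters for convolutional image features. It is a minimal illustration under assumed names, channel counts, and embedding sizes, written with PyTorch; it is not the authors' released implementation.

    import torch
    import torch.nn as nn

    class ConditionalBatchNorm2d(nn.Module):
        """Batch norm whose scale (gamma) and shift (beta) are predicted
        from a conditioning vector, e.g. a question embedding."""

        def __init__(self, num_channels: int, cond_dim: int):
            super().__init__()
            # Normalize without learned affine parameters ...
            self.bn = nn.BatchNorm2d(num_channels, affine=False)
            # ... and predict per-channel gamma/beta from the condition instead.
            self.to_gamma_beta = nn.Linear(cond_dim, 2 * num_channels)

        def forward(self, feats: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
            gamma, beta = self.to_gamma_beta(cond).chunk(2, dim=1)
            gamma = gamma.unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
            beta = beta.unsqueeze(-1).unsqueeze(-1)
            return gamma * self.bn(feats) + beta

    # Usage sketch: modulate CNN features with a question embedding.
    cbn = ConditionalBatchNorm2d(num_channels=128, cond_dim=256)
    image_feats = torch.randn(4, 128, 14, 14)  # e.g. from a convolutional stem
    question_emb = torch.randn(4, 256)         # e.g. from an RNN over the question
    out = cbn(image_feats, question_emb)       # question-dependent feature map

Stacking several such layers lets the question steer successive stages of visual processing, which is the question-dependent, multi-step behavior the abstract probes.
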
Visual Concept Reasoning Networks
A split-transform-merge strategy has been broadly used as an architectural
constraint in convolutional neural networks for visual recognition tasks. It
approximates sparsely connected networks by explicitly defining multiple
branches to simultaneously learn representations with different visual concepts
or properties. However, dependencies or interactions between these
representations are typically defined by dense, local operations without any
adaptiveness or high-level reasoning. In this work, we propose to exploit this
strategy and combine it with our Visual Concept Reasoning Networks (VCRNet) to
enable reasoning between high-level visual concepts. We associate each branch
with a visual concept and derive a compact concept state by selecting a few
local descriptors through an attention module. These concept states are then
updated by graph-based interaction and used to adaptively modulate the local
descriptors. We describe our proposed model by
split-transform-attend-interact-modulate-merge stages, which are implemented by
opting for a highly modularized architecture. Extensive experiments on visual
recognition tasks such as image classification, semantic segmentation, object
detection, scene recognition, and action recognition show that our proposed
model, VCRNet, consistently improves performance while increasing the number
of parameters by less than 1%.
Comment: Preprint
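
To make the attend-interact-modulate stages concrete, the sketch below derives a compact concept state per branch by attending over local descriptors, lets the states interact, and uses the updated states to modulate each branch's features. The module structure, dimensions, and the simple fully connected interaction step are assumptions for illustration (the abstract describes a graph-based interaction); this is not the published VCRNet implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ConceptReasoningBlock(nn.Module):
        """Toy attend -> interact -> modulate block over several branch feature maps."""

        def __init__(self, num_branches: int, channels: int):
            super().__init__()
            # Attend: score each spatial location to pool a compact concept state.
            self.attn = nn.ModuleList(
                [nn.Conv2d(channels, 1, kernel_size=1) for _ in range(num_branches)]
            )
            # Interact: a fully connected update over all concept states, standing in
            # for the graph-based interaction described in the abstract.
            self.interact = nn.Linear(num_branches * channels, num_branches * channels)
            # Modulate: channel-wise gates for each branch from its updated state.
            self.gate = nn.ModuleList(
                [nn.Linear(channels, channels) for _ in range(num_branches)]
            )

        def forward(self, branch_feats):
            # branch_feats: list of tensors, each of shape (B, C, H, W)
            states = []
            for feats, attn in zip(branch_feats, self.attn):
                b, c, h, w = feats.shape
                weights = F.softmax(attn(feats).view(b, 1, h * w), dim=-1)
                state = torch.bmm(weights, feats.view(b, c, h * w).transpose(1, 2))
                states.append(state.squeeze(1))  # compact concept state, (B, C)
            # Interaction step over the concatenated concept states.
            updated = self.interact(torch.cat(states, dim=1)).chunk(len(states), dim=1)
            # Modulate each branch's local descriptors with its updated concept state.
            out = []
            for feats, state, gate in zip(branch_feats, updated, self.gate):
                scale = torch.sigmoid(gate(state)).unsqueeze(-1).unsqueeze(-1)
                out.append(feats * scale)
            return out  # merged downstream, e.g. by summation or concatenation

    # Usage sketch: two branches of 64-channel feature maps.
    block = ConceptReasoningBlock(num_branches=2, channels=64)
    feats = [torch.randn(2, 64, 8, 8) for _ in range(2)]
    modulated = block(feats)

In a full network, the interaction step would need to be parameter-efficient (for example, shared or graph-structured rather than one large fully connected layer) to keep the overhead near the under-1% figure reported above.
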