Structure Learning for Neural Module Networks
Neural Module Networks, originally proposed for the task of visual question
answering, are a class of neural network architectures that involve
human-specified neural modules, each designed for a specific form of reasoning.
In current formulations of such networks, only the parameters of the neural
modules and/or the order of their execution are learned. In this work, we
further expand this approach and also learn the underlying internal structure
of modules in terms of the ordering and combination of simple and elementary
arithmetic operators. Our results show that one can indeed
simultaneously learn both the internal module structure and the module
sequencing without extra supervisory signals for module execution. With this
approach, we report performance comparable to that of models using
hand-designed modules.
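To make the idea concrete, below is a minimal sketch (not the authors' code) of a module whose internal structure is a learned soft mixture over elementary arithmetic operators: a softmax over per-operator logits weights candidate outputs, so gradient descent can discover which combination of operators the module should implement. The operator set, the softmax relaxation, and all names here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftOperatorModule(nn.Module):
    """Hypothetical module that combines two inputs via a learned
    weighting over elementary operators (add, multiply, max).
    Training sharpens the logits, effectively selecting the
    module's internal structure."""

    def __init__(self):
        super().__init__()
        # One learnable logit per candidate operator.
        self.op_logits = nn.Parameter(torch.zeros(3))

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        w = F.softmax(self.op_logits, dim=0)  # soft operator selection
        outs = [a + b, a * b, torch.maximum(a, b)]
        # Weighted sum of operator outputs; differentiable end to end.
        return sum(wi * o for wi, o in zip(w, outs))

# Usage: the operator choice is learned jointly with the rest of the network.
module = SoftOperatorModule()
a, b = torch.randn(4, 8), torch.randn(4, 8)
y = module(a, b)  # gradients flow into op_logits as well as a and b
```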
Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning
Visual question answering requires high-order reasoning about an image, which
is a fundamental capability needed by machine systems to follow complex
directives. Recently, modular networks have been shown to be an effective
framework for performing visual reasoning tasks. While modular networks were
initially designed with a degree of model transparency, their performance on
complex visual reasoning benchmarks was lacking. Current state-of-the-art
approaches do not provide an effective mechanism for understanding the
reasoning process. In this paper, we close the performance gap between
interpretable models and state-of-the-art visual reasoning methods. We propose
a set of visual-reasoning primitives which, when composed, manifest as a model
capable of performing complex reasoning tasks in an explicitly interpretable
manner. The fidelity and interpretability of the primitives' outputs enable an
unparalleled ability to diagnose the strengths and weaknesses of the resulting
model. Critically, we show that these primitives are highly performant,
achieving state-of-the-art accuracy of 99.1% on the CLEVR dataset. We also show
that our model is able to effectively learn generalized representations when
provided a small amount of data containing novel object attributes. Using the
CoGenT generalization task, we show more than a 20 percentage point improvement
over the current state of the art.

Comment: CVPR 2018 pre-print
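The key design choice is that each primitive consumes and produces an explicit spatial attention mask, so every intermediate step of the reasoning chain can be visualized directly. The sketch below illustrates that interface under stated assumptions; the layer sizes, module names, and the chained "program" are hypothetical, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class AttentionPrimitive(nn.Module):
    """Hypothetical visual-reasoning primitive: given image features and
    an incoming attention mask, it emits a new 1-channel mask. The mask
    itself is the interpretable, directly visualizable output."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.conv1 = nn.Conv2d(feat_dim, feat_dim, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(feat_dim, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor, in_mask: torch.Tensor) -> torch.Tensor:
        attended = feats * in_mask             # focus on regions passed in
        h = torch.relu(self.conv1(attended))
        return torch.sigmoid(self.conv2(h))    # new mask in [0, 1]

# Composing primitives into a reasoning "program" (illustrative names):
feats = torch.randn(1, 128, 14, 14)            # CNN feature map
mask = torch.ones(1, 1, 14, 14)                # start by attending everywhere
filter_color = AttentionPrimitive()
filter_shape = AttentionPrimitive()
mask = filter_shape(feats, filter_color(feats, mask))  # inspect mask at each step
```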