
    Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning

    Visual question answering requires high-order reasoning about an image, which is a fundamental capability needed by machine systems to follow complex directives. Recently, modular networks have been shown to be an effective framework for performing visual reasoning tasks. While modular networks were initially designed with a degree of model transparency, their performance on complex visual reasoning benchmarks was lacking. Current state-of-the-art approaches do not provide an effective mechanism for understanding the reasoning process. In this paper, we close the performance gap between interpretable models and state-of-the-art visual reasoning methods. We propose a set of visual-reasoning primitives which, when composed, manifest as a model capable of performing complex reasoning tasks in an explicitly interpretable manner. The fidelity and interpretability of the primitives' outputs enable an unparalleled ability to diagnose the strengths and weaknesses of the resulting model. Critically, we show that these primitives are highly performant, achieving state-of-the-art accuracy of 99.1% on the CLEVR dataset. We also show that our model is able to effectively learn generalized representations when provided a small amount of data containing novel object attributes. Using the CoGenT generalization task, we show more than a 20 percentage point improvement over the current state of the art. Comment: CVPR 2018 pre-print
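    The abstract gives no implementation details; the following is only a minimal PyTorch sketch of the general idea of composable, attention-producing reasoning primitives. The module names (AttendModule, AndModule, AnswerModule), the feature shapes, and the hard-coded composition are illustrative assumptions, not the paper's actual primitives.

# Minimal sketch of attention-based reasoning primitives composed into a model.
# All names, shapes, and the fixed composition below are illustrative assumptions.
import torch
import torch.nn as nn

class AttendModule(nn.Module):
    """Maps image features to a single-channel spatial attention mask."""
    def __init__(self, dim):
        super().__init__()
        self.conv = nn.Conv2d(dim, 1, kernel_size=1)

    def forward(self, feats):                     # feats: (B, dim, H, W)
        return torch.sigmoid(self.conv(feats))    # (B, 1, H, W) attention map

class AndModule(nn.Module):
    """Intersects two attention masks (elementwise minimum)."""
    def forward(self, attn_a, attn_b):
        return torch.minimum(attn_a, attn_b)

class AnswerModule(nn.Module):
    """Pools attended features and predicts a distribution over answers."""
    def __init__(self, dim, num_answers):
        super().__init__()
        self.fc = nn.Linear(dim, num_answers)

    def forward(self, feats, attn):
        pooled = (feats * attn).flatten(2).sum(-1)   # (B, dim)
        return self.fc(pooled)

# Hypothetical composition for a question such as
# "Is there a red object left of the cube?"
feats = torch.randn(2, 128, 14, 14)                  # CNN feature map for 2 images
attend_red, attend_left = AttendModule(128), AttendModule(128)
attn = AndModule()(attend_red(feats), attend_left(feats))
logits = AnswerModule(128, 28)(feats, attn)
print(logits.shape)                                  # torch.Size([2, 28])

    Because every primitive emits an attention map, intermediate outputs can be inspected directly, which is the interpretability property the abstract emphasizes; in the paper the primitives are composed according to the question rather than hard-coded as above.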

    Towards Multi-modal Interpretation and Explanation

    A multimodal task processes multiple modalities simultaneously. Visual Question Answering, as a type of multimodal task, aims to answer natural-language questions about a given image. To understand and process the image, many models for visual question answering encode object regions with convolutional-neural-network backbones. Such an image-processing method captures the visual features of the object regions in the image. However, the relations between objects are also important for comprehensively understanding the image and answering complex questions, and whether such relational information is captured by the visual features of the object regions remains opaque. To explicitly extract this relational information for visual question answering, this research explores an interpretable, structural graph representation that encodes the relations between objects. The research covers three variants of the Visual Question Answering task with different image types: photo-realistic images, daily scene pictures, and document pages. Task-specific relational graphs are used and proposed to explicitly capture and encode the relations consumed by the proposed models. Such a relational graph provides an interpretable representation of the model inputs and proves effective in improving prediction performance. In addition, to improve the interpretation of the model's predictions, this research also explores a suitable local interpretation method to apply to the VQA model.
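    As a rough illustration of encoding relations between detected object regions as an explicit graph, here is a minimal PyTorch sketch. The centre-distance edge rule, the single message-passing step, and all names and dimensions are assumptions for illustration, not the thesis's task-specific graphs.

# Minimal sketch: build an object-relation graph from bounding boxes and run one
# round of message passing. Edge rule, names, and dimensions are assumptions.
import torch
import torch.nn as nn

def spatial_edges(boxes, dist_thresh=0.5):
    """Connect object pairs whose box centres are closer than dist_thresh."""
    centres = (boxes[:, :2] + boxes[:, 2:]) / 2          # (N, 2) box centres
    adj = (torch.cdist(centres, centres) < dist_thresh).float()
    adj.fill_diagonal_(0)                                # no self-loops
    return adj                                           # (N, N) adjacency matrix

class RelationalEncoder(nn.Module):
    """One step of mean-aggregation message passing over the object graph."""
    def __init__(self, dim):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, node_feats, adj):                  # (N, dim), (N, N)
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        messages = adj @ node_feats / deg                # mean over neighbours
        return torch.relu(self.update(torch.cat([node_feats, messages], dim=-1)))

# Hypothetical detections: 5 objects with normalised (x1, y1, x2, y2) boxes
# and 512-d region features from a detector backbone.
boxes = torch.tensor([[0.10, 0.10, 0.30, 0.30],
                      [0.20, 0.20, 0.40, 0.50],
                      [0.70, 0.60, 0.90, 0.90],
                      [0.05, 0.60, 0.20, 0.80],
                      [0.50, 0.10, 0.70, 0.40]])
feats = torch.randn(5, 512)
adj = spatial_edges(boxes, dist_thresh=0.4)
relational_feats = RelationalEncoder(512)(feats, adj)    # fused object + relation features
print(relational_feats.shape)                            # torch.Size([5, 512])

    The adjacency matrix itself is the interpretable artefact here: each edge states which object pairs the model treats as related before any answer is predicted, which mirrors the interpretability argument made in the abstract.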