    Towards Multi-modal Interpretation and Explanation

    Multimodal tasks process multiple modalities simultaneously. Visual Question Answering (VQA), one such multimodal task, aims to answer a natural-language question about a given image. To understand and process the image, many VQA models encode object regions with convolutional neural network based backbones. This approach captures the visual features of the object regions in the image. However, the relations between objects are also important for comprehensively understanding the image when answering complex questions, and whether such relational information is captured by the region-level visual features remains opaque. To explicitly extract this relational information for VQA, this research explores an interpretable, structured graph representation that encodes the relations between objects. The research covers three variants of the VQA task with different image types: photo-realistic images, daily scene pictures, and document pages. Task-specific relational graphs are proposed and used to explicitly capture and encode the relations consumed by the proposed models. Such a relational graph provides an interpretable representation of the model inputs and proves effective in improving the model's prediction performance. In addition, to improve the interpretability of the model's predictions, this research also explores suitable local interpretation methods to apply to VQA models.
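    The abstract describes encoding the relations between detected objects as an explicit, interpretable graph. The sketch below illustrates one minimal form such a graph could take, using coarse spatial relations derived from bounding-box geometry; the `Region` type, relation labels, and center-based rule are illustrative assumptions, not the thesis's actual method.

```python
# Illustrative sketch: an explicit spatial-relation graph over object regions,
# one possible interpretable input representation for a VQA model.
# All names and the relation rule are hypothetical, not the thesis's method.
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    box: tuple  # bounding box (x1, y1, x2, y2) in pixel coordinates

def spatial_relation(a: Region, b: Region) -> str:
    """Label the coarse spatial relation of region `a` relative to `b`,
    based on which axis separates their box centers more."""
    ax, ay = (a.box[0] + a.box[2]) / 2, (a.box[1] + a.box[3]) / 2
    bx, by = (b.box[0] + b.box[2]) / 2, (b.box[1] + b.box[3]) / 2
    if abs(ax - bx) >= abs(ay - by):
        return "left-of" if ax < bx else "right-of"
    return "above" if ay < by else "below"  # image y grows downward

def build_relation_graph(regions):
    """Return directed (subject, relation, object) edges for every ordered
    pair of distinct regions -- an interpretable graph a model can consume."""
    return [
        (a.name, spatial_relation(a, b), b.name)
        for a in regions for b in regions if a is not b
    ]

regions = [Region("dog", (10, 60, 50, 100)), Region("ball", (70, 80, 90, 100))]
edges = build_relation_graph(regions)
# e.g. ("dog", "left-of", "ball") and ("ball", "right-of", "dog")
```

    Unlike an opaque feature vector from a CNN backbone, each edge in this graph can be read directly, which is the interpretability property the abstract emphasizes.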