Weakly Supervised Visual Question Answer Generation
Growing interest in conversational agents has made two-way human-computer
communication involving asking and answering visual questions an active area of
research in AI. Consequently, generating visual question-answer pair(s) becomes
an important and challenging task. To address this issue, we propose a
weakly supervised visual question-answer generation method that produces
relevant question-answer pairs for a given input image and its associated
caption. Most prior works are supervised and depend on annotated
question-answer datasets. In contrast, we present a weakly supervised method
that procedurally synthesizes question-answer pairs from visual information
and captions. The proposed method first extracts a list of answer words, then
performs nearest-question generation, using the caption and each answer word
to produce a synthetic question. Next, the relevant-question generator converts
the nearest question into a well-formed natural-language question via
dependency parsing and in-order tree traversal. Finally, we fine-tune a
ViLBERT model with the generated question-answer pair(s). We perform an
exhaustive experimental analysis on the VQA dataset and observe that our model
significantly outperforms SOTA methods on BLEU scores. We also report results
with respect to baseline models and an ablation study.
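To make the dependency-parsing and in-order traversal step concrete, here is a minimal, self-contained sketch. The names `DepNode`, `inorder`, and `to_question`, the hand-built toy parse, and the wh-word substitution rule are all illustrative assumptions, not the paper's implementation; a real system would obtain the tree from a dependency parser and use more careful question-formation rules.

```python
# Illustrative sketch: linearizing a dependency tree by in-order traversal
# and substituting the answer word with a wh-word to form a question.
# The tree below is hand-built; the paper's pipeline derives it via parsing.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DepNode:
    word: str
    index: int                      # token position in the original sentence
    children: List["DepNode"] = field(default_factory=list)

def inorder(node: DepNode) -> List[DepNode]:
    """In-order traversal: left dependents, the head, then right dependents."""
    left = [c for c in node.children if c.index < node.index]
    right = [c for c in node.children if c.index >= node.index]
    out: List[DepNode] = []
    for c in sorted(left, key=lambda n: n.index):
        out.extend(inorder(c))
    out.append(node)
    for c in sorted(right, key=lambda n: n.index):
        out.extend(inorder(c))
    return out

def to_question(root: DepNode, answer_word: str, wh_word: str = "what") -> str:
    """Replace the answer word with a wh-word and linearize the tree."""
    words = [wh_word if n.word == answer_word else n.word
             for n in inorder(root)]
    return " ".join(words).capitalize() + "?"

# Toy dependency parse of "the dog sits on the mat" (head word: "sits")
root = DepNode("sits", 2, [
    DepNode("dog", 1, [DepNode("the", 0)]),
    DepNode("on", 3, [DepNode("mat", 5, [DepNode("the", 4)])]),
])

print(to_question(root, "mat"))   # -> The dog sits on the what?
```

In-order traversal recovers the original token order from the tree, so substitutions made on tree nodes (here, the answer word) surface at the right position in the generated question.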