Visual question answering using external knowledge
Accurately answering a question about a given image requires combining observations with general knowledge. While this is effortless for humans, reasoning with general knowledge remains an algorithmic challenge. To advance research in this direction, a novel `fact-based' visual question answering (FVQA) task has been introduced recently along with a large set of curated facts which link two entities, i.e., two possible answers, via a relation. Given a question-image pair, keyword matching techniques have been employed to successively reduce the large set of facts and were shown to yield compelling results despite being vulnerable to misconceptions due to synonyms and homographs.
To overcome these shortcomings, we introduce two new approaches in this work. We develop a learning-based approach which goes straight to the facts via a learned embedding space. We demonstrate state-of-the-art results on the challenging, recently introduced fact-based visual question answering (FVQA) dataset, outperforming competing methods by more than 5%. Upon further analysis, we observe that a successive process which considers one fact at a time to form a local decision is sub-optimal. To counter this, in our second approach we develop an entity graph and use a graph convolutional network to `reason' about the correct answer by jointly considering all entities. We show on the FVQA dataset that this leads to an improvement in accuracy of around 7% compared to the state of the art.
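As a rough illustration of the joint-reasoning idea (not the authors' released code), the second approach can be pictured as a small graph convolutional network that propagates information between candidate entities before scoring each one; the class name, feature sizes, and scoring head below are assumptions.

# Minimal sketch: a two-layer GCN that jointly scores candidate entities in an
# FVQA-style entity graph. Node features could be concatenated image, question,
# and fact embeddings; the adjacency matrix links entities that share a fact.
import torch
import torch.nn as nn

class EntityGCN(nn.Module):
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden_dim)
        self.w2 = nn.Linear(hidden_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)   # one answer logit per entity node

    def forward(self, node_feats, adj):
        # Symmetrically normalise the adjacency matrix (with self-loops).
        adj = adj + torch.eye(adj.size(0))
        deg_inv_sqrt = adj.sum(-1).clamp(min=1).pow(-0.5)
        norm_adj = deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)

        h = torch.relu(norm_adj @ self.w1(node_feats))   # first propagation step
        h = torch.relu(norm_adj @ self.w2(h))            # second propagation step
        return self.score(h).squeeze(-1)                 # scores over all entities jointly

# Toy usage: 5 candidate entities with 128-dimensional features.
feats = torch.randn(5, 128)
adj = (torch.rand(5, 5) > 0.5).float()
logits = EntityGCN(128, 64)(feats, adj)
answer = logits.argmax()   # entity with the highest joint score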
Multimodal Long-Term Video Understanding
The internet hosts an immense reservoir of videos, with thousands of uploads flowing into platforms like YouTube every second. These videos represent a valuable repository of multimodal information and an invaluable resource for understanding audio-visual-text relationships. Moreover, understanding the content of long videos (think 2 hours) remains an open problem.
This thesis investigates the intricate interplay between diverse modalities—audio, visual, and
textual—in videos and harnesses their potential for comprehending semantic nuances within long
videos. My research explores diverse strategies for combining information from these modalities,
leading to significant advancements in video summarization and instructional video analysis.
The first part introduces an approach to synthesizing long video textures from short clips by
rearranging segments coherently, while also considering audio conditioning. The second part
discusses a novel technique for generating concise visual summaries of lengthy videos guided by
natural language cues. Additionally, we focus specifically on summarizing instructional videos,
capitalizing on audio-visual alignments and task structures to produce informative summaries. To further enrich the comprehension of instructional videos, the thesis introduces a cutting-edge approach that facilitates the learning and verification of procedural steps within instructional content,
empowering the model to grasp long and complex video sequences and ensure procedural accuracy.
Lastly, the potential of large language models is explored for answering questions about images
through code generation. Through comprehensive experiments, the research demonstrates the
efficacy of the proposed methodologies, envisioning promising future prospects in the field of
semantics in long videos by integrating audio, visual, and textual relationships.
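A minimal sketch of the code-generation idea described above, under stated assumptions: llm_complete, detect_objects, and crop are hypothetical helpers, not the thesis API, and executing model-written code is shown purely for illustration.

# Sketch: answer a question about an image by asking a large language model to
# emit a short Python program over a whitelisted set of vision helpers.
PROMPT = """You answer questions about images by writing Python.
Available helpers:
  detect_objects(image, label) -> list of boxes
  crop(image, box) -> image
Write a function answer(image) for: "{question}"
"""

def answer_with_code(image, question, llm_complete, helpers):
    program = llm_complete(PROMPT.format(question=question))  # LLM writes the program
    scope = dict(helpers)             # expose only the whitelisted helpers to the program
    exec(program, scope)              # defines answer(image) in that scope (illustration only)
    return scope["answer"](image)     # run the generated program on the image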