Open-Ended Multi-Modal Relational Reasoning for Video Question Answering
People with visual impairments urgently need help, not only with basic
tasks such as guidance and object retrieval, but also with advanced tasks like
picturing new environments. Beyond a guide dog, they may want a device
capable of linguistic interaction. Building on the existing research
literature, we aim to study the interaction between a robot agent and
visually impaired people. The robot agent, equipped with VQA techniques,
is able to analyze the environment, process and understand spoken
questions, and provide feedback to the human user. In this paper, we
discuss the questions raised by this kind of interaction, the techniques
used in this work, and how we conduct our research.
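
To make the envisioned interaction concrete, here is a minimal sketch of one agent turn. The helper functions (capture_frame, transcribe_speech, answer_question, speak) are hypothetical placeholders, not APIs from the paper; each would wrap a real camera driver, speech recognizer, VQA model, and speech synthesizer respectively.

```python
def capture_frame():
    """Grab the current camera image of the environment (placeholder)."""
    raise NotImplementedError

def transcribe_speech() -> str:
    """Convert the user's spoken question to text (placeholder ASR)."""
    raise NotImplementedError

def answer_question(image, question: str) -> str:
    """Run a VQA model on the image/question pair (placeholder)."""
    raise NotImplementedError

def speak(text: str) -> None:
    """Read the answer back to the user (placeholder TTS)."""
    raise NotImplementedError

def interaction_loop() -> None:
    """One agent turn per iteration: listen, look, reason, respond."""
    while True:
        question = transcribe_speech()   # e.g. "What is in front of me?"
        if question.lower() in {"stop", "quit"}:
            break
        image = capture_frame()
        speak(answer_question(image, question))
```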
Structured Co-reference Graph Attention for Video-grounded Dialogue
A video-grounded dialogue system, referred to as Structured Co-reference
Graph Attention (SCGA), is presented for decoding the answer sequence to a
question about a given video while keeping track of the dialogue context.
Although recent efforts have made great strides in improving the quality of
responses, performance is still far from satisfactory. The two main
challenges are: (1) how to deduce co-references among multiple modalities,
and (2) how to reason over the rich underlying semantic structure of video
with complex spatial and temporal dynamics. To this end, SCGA is based on
(1) a Structured Co-reference Resolver, which performs dereferencing by
building a structured graph over multiple modalities, and (2) a
Spatio-temporal Video Reasoner, which captures local-to-global dynamics of
the video via gradually neighboring graph attention.
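
The paper's gradually neighboring graph attention is not reproduced here; as a rough illustration of graph attention over a structured graph of the kind SCGA builds, the following is a minimal single-head GAT-style layer. All class and parameter names are our own assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGraphAttention(nn.Module):
    """Single-head GAT-style layer: each node attends over its neighbors,
    with attention masked by the adjacency matrix (self-loops assumed)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features; adj: (N, N) binary adjacency
        h = self.proj(x)                                      # (N, out_dim)
        n = h.size(0)
        # all-pairs concatenation [h_i ; h_j] for attention logits
        pairs = torch.cat(
            [h.unsqueeze(1).expand(n, n, -1), h.unsqueeze(0).expand(n, n, -1)],
            dim=-1,
        )                                                     # (N, N, 2*out_dim)
        logits = F.leaky_relu(self.attn(pairs).squeeze(-1))   # (N, N)
        logits = logits.masked_fill(adj == 0, float("-inf"))  # keep edges only
        weights = torch.softmax(logits, dim=-1)               # neighbor weights
        return weights @ h                                    # aggregated features
```

Roughly, the nodes here would correspond to entities across the visual, question, and dialogue-history modalities, with edges given by the resolved co-references.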
SCGA makes use of a pointer network to dynamically replicate parts of the
question when decoding the answer sequence.
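
As a sketch of the copying mechanism such a pointer network enables (not SCGA's exact formulation), the decoding step below mixes a generated vocabulary distribution with a copy distribution over question tokens; the names and the gating scheme are assumptions in the style of pointer-generator decoders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointerCopyStep(nn.Module):
    """One decoding step mixing a generated vocabulary distribution with a
    copy distribution over the question tokens, pointer-network style."""

    def __init__(self, hidden_dim: int, vocab_size: int):
        super().__init__()
        self.vocab_proj = nn.Linear(hidden_dim, vocab_size)
        self.copy_gate = nn.Linear(hidden_dim, 1)

    def forward(self, dec_state, question_enc, question_ids):
        # dec_state: (B, H) decoder state; question_enc: (B, T, H) encoded
        # question tokens; question_ids: (B, T) LongTensor of vocab indices
        gen_dist = F.softmax(self.vocab_proj(dec_state), dim=-1)   # (B, V)
        scores = torch.bmm(
            question_enc, dec_state.unsqueeze(-1)
        ).squeeze(-1)                                              # (B, T)
        copy_attn = F.softmax(scores, dim=-1)
        # scatter attention mass onto the vocab ids of the question tokens
        copy_dist = torch.zeros_like(gen_dist).scatter_add_(
            1, question_ids, copy_attn
        )
        p_copy = torch.sigmoid(self.copy_gate(dec_state))          # (B, 1)
        return (1 - p_copy) * gen_dist + p_copy * copy_dist
```

At each step, taking the argmax of the returned distribution either generates a word from the vocabulary or, in effect, copies one directly from the question.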
The validity of the proposed SCGA is demonstrated on the AVSD@DSTC7 and
AVSD@DSTC8 datasets, two challenging video-grounded dialogue benchmarks,
and on the TVQA dataset, a large-scale videoQA benchmark. Our empirical
results show that SCGA outperforms other state-of-the-art dialogue systems
on both benchmarks, while an extensive ablation study and qualitative
analysis reveal the performance gains and improved interpretability.
Comment: Accepted to AAAI 2021