7 research outputs found
Structured Co-reference Graph Attention for Video-grounded Dialogue
A video-grounded dialogue system referred to as the Structured Co-reference
Graph Attention (SCGA) is presented for decoding the answer sequence to a
question regarding a given video while keeping track of the dialogue context.
Although recent efforts have made great strides in improving the quality of the
response, performance is still far from satisfactory. The two main challenging
issues are as follows: (1) how to deduce co-reference among multiple modalities
and (2) how to reason on the rich underlying semantic structure of video with
complex spatial and temporal dynamics. To this end, SCGA is based on (1) a
Structured Co-reference Resolver that performs dereferencing by building a
structured graph over multiple modalities, and (2) a Spatio-temporal Video
Reasoner that captures local-to-global dynamics of the video via gradually
neighboring graph attention. SCGA makes use of a pointer network to dynamically
replicate parts of the question when decoding the answer sequence. The validity
of the proposed SCGA is demonstrated on the AVSD@DSTC7 and AVSD@DSTC8 datasets,
two challenging video-grounded dialogue benchmarks, and the TVQA dataset, a
large-scale videoQA benchmark. Our empirical results show that SCGA outperforms
other state-of-the-art dialogue systems on these benchmarks, while an extensive
ablation study and qualitative analysis reveal performance gains and improved
interpretability. Comment: Accepted to AAAI202
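The pointer mechanism mentioned above can be illustrated with a minimal sketch. The function below is a hypothetical, simplified pointer-network decoding step, not SCGA's actual implementation: it blends a vocabulary distribution with a copy distribution over the question tokens.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pointer_decode_step(dec_state, question_enc, vocab_logits,
                        question_token_ids, vocab_size, gate):
    """One decoding step mixing a generation distribution over the
    vocabulary with a copy distribution over the question tokens."""
    # attention of the decoder state over the question token encodings
    copy_scores = question_enc @ dec_state          # (num_question_tokens,)
    copy_dist = softmax(copy_scores)
    # scatter copy probabilities onto the vocabulary ids of the question tokens
    copy_over_vocab = np.zeros(vocab_size)
    for tok_id, p in zip(question_token_ids, copy_dist):
        copy_over_vocab[tok_id] += p
    gen_dist = softmax(vocab_logits)
    # gate in [0, 1] balances generating from the vocabulary vs copying
    return gate * gen_dist + (1.0 - gate) * copy_over_vocab
```

The `gate` scalar, which a full model would predict from the decoder state, controls how much probability mass is copied from the question versus generated from the vocabulary.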
Motion-Appearance Synergistic Networks for Video Question Answering
Video Question Answering is a task which requires an AI agent to answer questions grounded in video. This task entails three key challenges: (1) understanding the intention of various questions, (2) capturing various elements of the input video (e.g., object, action, causality), and (3) cross-modal grounding between language and vision information. We propose Motion-Appearance Synergistic Networks (MASN), which embed two cross-modal features grounded on motion and appearance information and selectively utilize them depending on the question's intentions. MASN consists of a motion module, an appearance module, and a motion-appearance fusion module. The motion module computes the action-oriented cross-modal joint representations, while the appearance module focuses on the appearance aspect of the input video. Finally, the motion-appearance fusion module takes each output of the motion module and the appearance module as input, and performs question-guided fusion. As a result, MASN achieves new state-of-the-art performance on the TGIF-QA and MSVD-QA datasets. We also conduct qualitative analysis by visualizing the inference results of MASN.
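Question-guided fusion of the two modality streams can be sketched as below. The bilinear scoring and the function shapes are illustrative assumptions rather than MASN's actual fusion module:

```python
import numpy as np

def question_guided_fusion(motion_feat, appearance_feat, question_vec, W_m, W_a):
    """Blend motion and appearance features with weights derived from the
    question, so action questions can lean on motion and descriptive
    questions on appearance."""
    # bilinear relevance score of the question for each modality
    score_m = question_vec @ W_m @ motion_feat
    score_a = question_vec @ W_a @ appearance_feat
    scores = np.array([score_m, score_a])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                    # softmax over the two modalities
    return weights[0] * motion_feat + weights[1] * appearance_feat
```

Here `W_m` and `W_a` are hypothetical learned projection matrices mapping the question space onto each modality's feature space; the output is a convex combination of the two modality features.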
Video question answering supported by a multi-task learning objective
Video Question Answering (VideoQA) concerns the realization of models able to analyze a video and produce a meaningful answer to visual content-related questions. To encode the given question, word embedding techniques are used to compute a representation of the tokens suitable for neural networks. Yet almost all the works in the literature use the same technique, although recent advancements in NLP have brought better solutions. This lack of analysis is a major shortcoming. To address it, in this paper we present a twofold contribution about this inquiry and its relation with question encoding. First of all, we integrate four of the most popular word embedding techniques into three recent VideoQA architectures, and investigate how they influence the performance on two public datasets: EgoVQA and PororoQA. Thanks to the learning process, we show that embeddings carry question-type-dependent characteristics. Secondly, to leverage this result, we propose a simple yet effective multi-task learning protocol which uses an auxiliary task defined on the question types. By using the proposed learning strategy, significant improvements are observed in most of the combinations of network architecture and embedding under analysis.
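A multi-task objective with an auxiliary question-type task, as described above, typically sums a main answer loss and a weighted auxiliary classification loss. The sketch below is a minimal, hypothetical version of such a protocol, not the paper's exact objective:

```python
import numpy as np

def cross_entropy(logits, target):
    """Cross-entropy of a single example from raw logits."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[target]

def multitask_loss(answer_logits, answer_target,
                   qtype_logits, qtype_target, aux_weight=0.5):
    """Joint objective: the answer-prediction loss plus an auxiliary
    question-type classification loss, scaled by aux_weight."""
    main = cross_entropy(answer_logits, answer_target)
    aux = cross_entropy(qtype_logits, qtype_target)
    return main + aux_weight * aux
```

With `aux_weight = 0` the objective reduces to standard single-task training, which gives a convenient baseline for ablating the auxiliary task.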
Reasoning with Heterogeneous Graph Alignment for Video Question Answering
The dominant video question answering methods are based on fine-grained representation or model-specific attention mechanisms. They usually process video and question separately, then feed the representations of the different modalities into subsequent late-fusion networks. Although these methods use information from one modality to boost the other, they neglect to integrate correlations of both inter- and intra-modality in a uniform module. We propose a deep heterogeneous graph alignment network over the video shots and question words. Furthermore, we explore the network architecture in four steps: representation, fusion, alignment, and reasoning. Within our network, the inter- and intra-modality information can be aligned and interacted simultaneously over the heterogeneous graph and used for cross-modal reasoning. We evaluate our method on three benchmark datasets and conduct an extensive ablation study on the effectiveness of the network architecture. Experiments show that the network yields superior quality.
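One way to read the heterogeneous graph alignment idea is as a single attention update over a graph whose nodes mix video shots and question words, so that inter- and intra-modality edges are handled in the same module. The sketch below is an illustrative simplification under that assumption, not the paper's architecture:

```python
import numpy as np

def heterogeneous_graph_attention(shot_feats, word_feats, W):
    """One attention round over a fully connected heterogeneous graph:
    nodes are video shots and question words together, so cross-modal
    and within-modal affinities are computed in one uniform step."""
    nodes = np.vstack([shot_feats, word_feats])     # (S + T, d) mixed nodes
    h = nodes @ W                                   # shared projection
    scores = h @ h.T / np.sqrt(h.shape[1])          # pairwise node affinities
    scores = scores - scores.max(axis=1, keepdims=True)
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)         # row-normalized attention
    return attn @ h                                 # aggregated node features
```

Because shots and words share one node set, each updated node feature already mixes evidence from both modalities, which is what a separate late-fusion stage would otherwise have to reconstruct.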