Augmented 2D-TAN: A Two-stage Approach for Human-centric Spatio-Temporal Video Grounding
We propose an effective two-stage approach to the language-based
Human-centric Spatio-Temporal Video Grounding (HC-STVG) task. In
the first stage, we propose an Augmented 2D Temporal Adjacent Network
(Augmented 2D-TAN) to temporally ground the target moment corresponding to the
given description. Specifically, we improve the original 2D-TAN in two respects:
First, a temporal context-aware Bi-LSTM Aggregation Module is developed to
aggregate clip-level representations, replacing the original max-pooling.
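The abstract does not give the module's internals, so the following is only a minimal sketch of context-aware aggregation: a plain tanh recurrence stands in for the LSTM cell, forward and backward hidden states are concatenated per clip, and the moment is mean-pooled instead of max-pooled. All names, dimensions, and the demo weights are hypothetical; a real implementation would use a library Bi-LSTM such as `torch.nn.LSTM`.

```python
import math
import random

def matvec(W, x):
    # Multiply a weight matrix (list of rows) by a vector.
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def rnn_pass(clips, W_in, W_h, reverse=False):
    # One directional pass of a simple tanh RNN over clip features
    # (a stand-in for one direction of the Bi-LSTM).
    seq = list(reversed(clips)) if reverse else clips
    h = [0.0] * len(W_h)
    states = []
    for x in seq:
        z = [a + b for a, b in zip(matvec(W_in, x), matvec(W_h, h))]
        h = [math.tanh(v) for v in z]
        states.append(h)
    return list(reversed(states)) if reverse else states

def bilstm_aggregate(clips, W_in, W_h):
    # Context-aware aggregation: concatenate the forward and backward
    # hidden state for each clip, then mean-pool over the moment
    # (replacing the original max-pooling).
    fwd = rnn_pass(clips, W_in, W_h)
    bwd = rnn_pass(clips, W_in, W_h, reverse=True)
    ctx = [f + b for f, b in zip(fwd, bwd)]  # per-clip concatenation
    dim = len(ctx[0])
    return [sum(c[d] for c in ctx) / len(ctx) for d in range(dim)]

# Demo with random weights (hypothetical dimensions: 5 clips,
# 4-d clip features, 3-d hidden state -> 6-d aggregated feature).
rng = random.Random(0)
W_in = [[rng.uniform(-0.5, 0.5) for _ in range(4)] for _ in range(3)]
W_h  = [[rng.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(3)]
clips = [[rng.uniform(-1.0, 1.0) for _ in range(4)] for _ in range(5)]
moment_feature = bilstm_aggregate(clips, W_in, W_h)
```

Because each clip's output depends on both its left and right neighbors, the pooled feature carries temporal context that per-clip max-pooling discards.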
Second, we employ a Random Concatenation Augmentation (RCA) mechanism
during the training phase. In the second stage, we use a pretrained MDETR model
to generate per-frame bounding boxes via language query, and design a set of
hand-crafted rules to select the best-matching bounding box output by MDETR
for each frame within the grounded moment.

Comment: Best Paper Award at the 3rd Person in Context (PIC) Challenge CVPR
Workshop 202
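The abstract names Random Concatenation Augmentation but does not specify it, so the sketch below is only one plausible reading: two training videos are concatenated along the time axis and one sample's query and temporal span are kept, with the span remapped into the combined timeline. Every field name (`clips`, `query`, `span`) and the helper `random_concat_augment` are hypothetical, not the authors' implementation.

```python
import random

def random_concat_augment(sample_a, sample_b, rng=random):
    # Concatenate two videos' clip features along the time axis.
    clips = sample_a["clips"] + sample_b["clips"]
    offset = len(sample_a["clips"])
    # Keep the annotation of one sample at random, shifting the
    # second sample's span by the first video's length in clips.
    if rng.random() < 0.5:
        query, (s, e) = sample_a["query"], sample_a["span"]
    else:
        query = sample_b["query"]
        s, e = sample_b["span"][0] + offset, sample_b["span"][1] + offset
    return {"clips": clips, "query": query, "span": (s, e)}

# Demo: a 6-clip and a 4-clip video with toy features and spans.
rng = random.Random(0)
a = {"clips": [[0.1] * 4 for _ in range(6)],
     "query": "person waves", "span": (1, 3)}
b = {"clips": [[0.2] * 4 for _ in range(4)],
     "query": "person sits down", "span": (0, 2)}
out = random_concat_augment(a, b, rng)
```

The intuition is that the grounded moment now sits inside a longer, partly unrelated video, forcing the temporal grounding model to localize rather than rely on moment position or video length.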