TubeR: Tubelet Transformer for Video Action Detection
We propose TubeR: a simple solution for spatio-temporal video action
detection. Different from existing methods that depend on either an off-line
actor detector or hand-designed actor-positional hypotheses like proposals or
anchors, we propose to directly detect an action tubelet in a video by
simultaneously performing action localization and recognition from a single
representation. TubeR learns a set of tubelet-queries and utilizes a
tubelet-attention module to model the dynamic spatio-temporal nature of a video
clip, which effectively strengthens model capacity compared to using
actor-positional hypotheses in the spatio-temporal space. For videos containing
transitional states or scene changes, we propose a context-aware classification
head to utilize short-term and long-term context to strengthen action
classification, and an action switch regression head for detecting the precise
temporal action extent. TubeR directly produces action tubelets with variable
lengths and even maintains good results for long video clips. TubeR outperforms
the previous state-of-the-art on commonly used action detection datasets AVA,
UCF101-24 and JHMDB51-21.
Video Transformers: A Survey
Transformer models have shown great success handling long-range interactions,
making them a promising tool for modeling video. However, they lack inductive
biases and scale quadratically with input length. These limitations are further
exacerbated when dealing with the high dimensionality introduced with the
temporal dimension. While there are surveys analyzing the advances of
Transformers for vision, none focus on an in-depth analysis of video-specific
designs. In this survey we analyze main contributions and trends of works
leveraging Transformers to model video. Specifically, we first delve into how
videos are handled at the input level. Then, we study the architectural changes made
to deal with video more efficiently, reduce redundancy, re-introduce useful
inductive biases, and capture long-term temporal dynamics. In addition, we
provide an overview of different training regimes and explore effective
self-supervised learning strategies for video. Finally, we conduct a
performance comparison on the most common benchmark for Video Transformers
(i.e., action classification), finding them to outperform 3D ConvNets even with
lower computational complexity.