Semi-supervised action recognition is a challenging but critical task due to
the high cost of video annotations. Existing approaches mainly use
convolutional neural networks, while the recently successful vision transformer
models remain less explored. In this paper, we investigate the use of
transformer models under the SSL setting for action recognition. To this end,
we introduce SVFormer, which adopts a steady pseudo-labeling framework (i.e.,
EMA-Teacher) to cope with unlabeled video samples.
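A minimal sketch of the EMA-Teacher idea is given below, assuming a generic PyTorch student/teacher pair; the momentum value, confidence threshold, and function names are illustrative assumptions rather than the paper's exact implementation:

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Update teacher weights as an exponential moving average of the student."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)

def pseudo_label(teacher, weak_clip, threshold=0.8):
    """Let the slowly updated teacher produce pseudo-labels for an unlabeled,
    weakly augmented clip; the student is then trained to predict these labels
    from a strongly augmented view. Returns the labels and a confidence mask."""
    with torch.no_grad():
        probs = torch.softmax(teacher(weak_clip), dim=-1)
    conf, labels = probs.max(dim=-1)
    return labels, conf >= threshold
```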
While a wide range of data augmentations have been shown to be effective for
semi-supervised image classification, they generally yield limited gains for
video recognition.
We therefore introduce a novel augmentation strategy, Tube TokenMix, tailored
for video data, in which two video clips are mixed via a token mask that stays
consistent along the temporal axis. In addition, we propose a temporal warping
augmentation to capture the complex temporal variation in videos, stretching
selected frames to various temporal durations within the clip.
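A rough sketch of the two augmentations follows; the tensor shapes, the `mask_ratio` argument, and the helper names are assumptions for illustration, not the paper's exact interface:

```python
import torch

def tube_token_mix(tokens_a, tokens_b, mask_ratio=0.5):
    """Mix two clips' patch tokens with one spatial mask shared by all frames.

    tokens_a, tokens_b: (B, T, N, D) tokens of two clips, where T is the number
    of frames, N the spatial tokens per frame, and D the embedding dimension.
    """
    B, T, N, D = tokens_a.shape
    # One Bernoulli mask per spatial position, identical for every frame,
    # so the masked region forms a "tube" along the temporal axis.
    spatial_mask = (torch.rand(B, 1, N, 1, device=tokens_a.device) < mask_ratio).float()
    mixed = spatial_mask * tokens_a + (1.0 - spatial_mask) * tokens_b
    # Share of tokens taken from clip A, usable to weight the two targets.
    lam = spatial_mask.mean(dim=(1, 2, 3))
    return mixed, lam

def temporal_warp(frames, num_out_frames=None):
    """Resample a clip non-uniformly so some frames span longer durations.

    frames: (T, C, H, W). Sorted random indices drawn with replacement stretch
    a few frames (by repetition) while others are dropped.
    """
    T = frames.shape[0]
    num_out_frames = num_out_frames or T
    idx, _ = torch.sort(torch.randint(0, T, (num_out_frames,)))
    return frames[idx]
```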
Extensive experiments on three datasets, Kinetics-400, UCF-101, and HMDB-51,
verify the advantage of SVFormer. In particular, SVFormer outperforms the
state-of-the-art by 31.5% with fewer training epochs under the 1% labeling rate
of Kinetics-400.
Our method can hopefully serve as a strong benchmark and encourage future
research on semi-supervised action recognition with transformer networks.