Semi-supervised learning can be more beneficial for the video domain than for
images because of video's higher annotation cost and dimensionality. Moreover,
any video understanding task requires reasoning over both spatial and temporal
dimensions. To learn both static and motion-related features for the
semi-supervised action recognition task, existing methods rely on hard input
inductive biases, such as using two modalities (RGB and optical flow) or two
streams with different playback rates. Instead of utilizing unlabeled videos
through diverse input streams, we rely on self-supervised video
representations; in particular, we utilize temporally-invariant and
temporally-distinctive representations. We observe that these representations
complement each other depending on the nature of the action. Based on this
observation, we propose a student-teacher semi-supervised learning framework,
TimeBalance, where we distill the knowledge from a temporally-invariant and a
temporally-distinctive teacher. Depending on the nature of the unlabeled video,
we dynamically combine the knowledge of these two teachers based on a novel
temporal similarity-based reweighting scheme. Our method achieves
state-of-the-art performance on three action recognition benchmarks: UCF101,
HMDB51, and Kinetics400. Code: https://github.com/DAVEISHAN/TimeBalance
Comment: CVPR-2023
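To make the dynamic teacher combination concrete, the sketch below illustrates one plausible reading of a temporal similarity-based reweighting: clips whose features stay similar across time lean on the temporally-invariant teacher, while clips that change a lot lean on the temporally-distinctive teacher. The function names (temporal_similarity, combine_teachers), the cosine-similarity measure, and the linear mapping from similarity to weights are assumptions made for illustration, not the paper's exact formulation; see the released code for the actual method.

```python
import torch
import torch.nn.functional as F

def temporal_similarity(clip_feats: torch.Tensor) -> torch.Tensor:
    """Average pairwise cosine similarity between features of clips sampled
    from different timestamps of the same unlabeled video.
    clip_feats: (num_clips, feat_dim)."""
    feats = F.normalize(clip_feats, dim=-1)
    sim = feats @ feats.t()                    # (num_clips, num_clips) cosine matrix
    sim = sim - torch.eye(sim.size(0))         # zero out self-similarity on the diagonal
    n = sim.size(0)
    return sim.sum() / (n * (n - 1))           # mean of off-diagonal entries

def combine_teachers(logits_inv: torch.Tensor,
                     logits_dist: torch.Tensor,
                     similarity: torch.Tensor) -> torch.Tensor:
    """Weight the temporally-invariant teacher more when clips look alike over
    time, and the temporally-distinctive teacher more otherwise. The linear
    similarity-to-weight mapping here is an assumed placeholder."""
    w_inv = similarity.clamp(0.0, 1.0)
    w_dist = 1.0 - w_inv
    return w_inv * logits_inv + w_dist * logits_dist

# Hypothetical usage with random stand-ins for clip features and teacher logits:
clip_feats = torch.randn(4, 512)               # 4 clips from one unlabeled video
s = temporal_similarity(clip_feats)
pseudo_logits = combine_teachers(torch.randn(101), torch.randn(101), s)
```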