Heterogeneous Generative Knowledge Distillation with Masked Image Modeling
Small CNN-based models usually require transferring knowledge from a large
model before they are deployed on computationally resource-limited edge
devices. Masked image modeling (MIM) methods achieve great success in various
visual tasks but remain largely unexplored in knowledge distillation for
heterogeneous deep models, mainly because of the significant discrepancy
between large Transformer-based models and small CNN-based networks. In this
paper, we develop the first Heterogeneous Generative Knowledge Distillation
(H-GKD) method based on MIM, which can efficiently transfer knowledge from
large Transformer models to small CNN-based models in a generative
self-supervised fashion. Our method builds a bridge between Transformer-based
models and CNNs by training a UNet-style student with sparse convolution, which
can effectively mimic the visual representations that a teacher infers during
masked modeling. Our method is a simple yet effective learning paradigm for
learning the visual representations and data distribution of heterogeneous
teacher models, which can be pre-trained using advanced generative methods.
Extensive experiments show that it adapts well to various models and sizes,
consistently achieving state-of-the-art performance in image classification,
object detection, and semantic segmentation tasks. For example, on the
ImageNet-1K dataset, H-GKD improves the accuracy of ResNet-50 (sparse) from
76.98% to 80.01%.
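
As an illustration of the idea, below is a minimal PyTorch sketch of masked-feature distillation from a Transformer-style teacher to a masked CNN student. All names here (ToyPatchTeacher, MaskedCNNStudent, hgkd_step) are hypothetical, the teacher is a toy stand-in for a large pre-trained MIM model, and true sparse convolution is emulated by re-masking activations after each layer; this is a sketch of the general technique under those assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ToyPatchTeacher(nn.Module):
        """Stand-in for a large pre-trained Transformer teacher: maps an
        image to one feature vector per 16x16 patch. (Hypothetical; the
        real teacher would be a pre-trained MIM model.)"""
        def __init__(self, dim=256):
            super().__init__()
            self.proj = nn.Conv2d(3, dim, kernel_size=16, stride=16)

        @torch.no_grad()
        def forward(self, x):              # x: (B, 3, H, W)
            return self.proj(x)            # (B, dim, H/16, W/16)

    class MaskedCNNStudent(nn.Module):
        """Small CNN student. True sparse convolution is emulated by
        zeroing masked activations after every conv, so no information
        leaks from masked into visible regions."""
        def __init__(self, dim=256):
            super().__init__()
            chans = [3, 64, 128, 256, dim]
            self.enc = nn.ModuleList(
                nn.Conv2d(cin, cout, 3, stride=2, padding=1)
                for cin, cout in zip(chans[:-1], chans[1:]))
            self.head = nn.Conv2d(dim, dim, 1)   # project to teacher dim

        def forward(self, x, mask):              # mask: (B,1,h,w), 1 = visible
            for conv in self.enc:
                x = F.relu(conv(x))
                m = F.interpolate(mask, size=x.shape[-2:], mode="nearest")
                x = x * m                        # emulated sparse convolution
            return self.head(x)

    def hgkd_step(teacher, student, images, mask_ratio=0.6):
        """One distillation step: the student sees only visible patches and
        must reproduce the teacher's features on the *masked* patches."""
        B, _, H, W = images.shape
        target = teacher(images)                             # (B, dim, h, w)
        h, w = target.shape[-2:]
        mask = (torch.rand(B, 1, h, w) > mask_ratio).float() # 1 = visible
        pixel_mask = F.interpolate(mask, size=(H, W), mode="nearest")
        pred = student(images * pixel_mask, mask)
        err = ((pred - target) ** 2).mean(dim=1, keepdim=True)
        hidden = 1.0 - mask                  # loss on masked patches only
        return (err * hidden).sum() / hidden.sum().clamp(min=1.0)

    teacher, student = ToyPatchTeacher(), MaskedCNNStudent()
    loss = hgkd_step(teacher, student, torch.randn(2, 3, 224, 224))
    loss.backward()                          # gradients reach the student only

Restricting the loss to masked patches forces the student to infer representations of content it never saw, which is what makes the objective generative rather than a plain feature-matching loss.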
Self-Supervised Video Representation Learning with Space-Time Cubic Puzzles
Self-supervised tasks such as colorization, inpainting, and jigsaw puzzles have
been utilized for visual representation learning on still images when labeled
images are scarce or entirely absent. Recently, this worthwhile line of study
has extended to the video domain, where the cost of human labeling is even
higher. However, most existing methods are still based on 2D CNN architectures
that cannot directly capture spatio-temporal information for video
applications. In this paper, we introduce a new self-supervised task called
\textit{Space-Time Cubic Puzzles} to train 3D CNNs using large-scale video
datasets. This task requires a network to arrange permuted 3D
spatio-temporal crops. By completing \textit{Space-Time Cubic Puzzles}, the
network learns both the spatial appearance and the temporal relations of video
frames, which is our final goal. In experiments, we demonstrate that our
learned 3D representation transfers well to action recognition tasks and outperforms
state-of-the-art 2D CNN-based competitors on the UCF101 and HMDB51 datasets.
Comment: Accepted to AAAI 2019
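
As an illustration, here is a minimal PyTorch sketch of the cubic-puzzle pretext task: cut a clip into 4 crops, shuffle them with a random permutation, and train a shared 3D CNN plus a linear head to classify which of the 4! = 24 permutations was applied. Names such as CubicPuzzleNet and make_puzzle are hypothetical, and the sketch permutes only along the temporal axis with a toy encoder, whereas the paper also permutes crops spatially and uses a deeper 3D backbone.

    import itertools
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    PERMS = list(itertools.permutations(range(4)))  # 4! = 24 orderings

    class Small3DEncoder(nn.Module):
        """Shared 3D CNN applied to each crop (toy stand-in for a full
        3D backbone such as a 3D ResNet)."""
        def __init__(self, feat=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(64, feat, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1))

        def forward(self, x):                   # x: (B, 3, t, h, w)
            return self.net(x).flatten(1)       # (B, feat)

    class CubicPuzzleNet(nn.Module):
        """Encodes the 4 shuffled crops with one shared encoder and
        classifies which of the 24 permutations produced their order."""
        def __init__(self, feat=128):
            super().__init__()
            self.enc = Small3DEncoder(feat)
            self.head = nn.Linear(4 * feat, len(PERMS))

        def forward(self, crops):               # crops: (B, 4, 3, t, h, w)
            feats = [self.enc(crops[:, i]) for i in range(4)]
            return self.head(torch.cat(feats, dim=1))

    def make_puzzle(clip):
        """clip: (3, T, H, W). Cut 4 equal temporal pieces and shuffle
        them; the label is the index of the permutation used.
        (Simplification: permutes along the temporal axis only.)"""
        T = clip.shape[1]
        pieces = torch.stack(
            [clip[:, i * T // 4:(i + 1) * T // 4] for i in range(4)])
        label = torch.randint(len(PERMS), (1,))
        return pieces[list(PERMS[label.item()])], label

    model = CubicPuzzleNet()
    crops, label = make_puzzle(torch.randn(3, 16, 112, 112))
    logits = model(crops.unsqueeze(0))          # (1, 24)
    loss = F.cross_entropy(logits, label)
    loss.backward()

Because solving the puzzle requires distinguishing both where a crop belongs in time and how motion evolves within it, the encoder is pushed to learn spatio-temporal features that a frame-wise 2D CNN cannot capture.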