Learning Spatio-Temporal Representation with Local and Global Diffusion
Convolutional Neural Networks (CNNs) have been regarded as a powerful class of
models for visual recognition problems. Nevertheless, the convolutional filters
in these networks are local operations that ignore long-range dependencies.
This drawback becomes even more pronounced in video recognition, since video is
an information-intensive medium with complex temporal variations. In this
paper, we present a novel framework that boosts spatio-temporal representation
learning through Local and Global Diffusion (LGD).
Specifically, we construct a novel neural network architecture that learns the
local and global representations in parallel. The architecture is composed of
LGD blocks, where each block updates local and global features by modeling the
diffusions between these two representations. These diffusions effectively
exchange the two aspects of information, i.e., localized and holistic,
enabling more powerful representation learning. Furthermore, a kernelized
classifier is introduced to combine the representations from the two aspects
for video recognition. Our LGD networks outperform the best competitors on the
large-scale Kinetics-400 and Kinetics-600 video classification datasets by
3.5% and 0.7%, respectively. We further examine the generalization of both the global and local
representations produced by our pre-trained LGD networks on four different
benchmarks for video action recognition and spatio-temporal action detection
tasks. Superior performance over several state-of-the-art techniques is
reported on these benchmarks. Code is available at:
https://github.com/ZhaofanQiu/local-and-global-diffusion-networks
Comment: CVPR 2019
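The core idea of an LGD block, i.e., updating a localized feature map and a holistic global vector by diffusing information between the two paths, can be sketched as follows. This is a minimal NumPy illustration under assumed update rules (broadcast-add for global-to-local diffusion, global average pooling for local-to-global diffusion); the function and weight names are hypothetical and do not reproduce the paper's exact formulation.

```python
import numpy as np


def lgd_block(local_feats, global_feat, w_l2g, w_g2l):
    """One illustrative LGD-style block update (assumed, simplified rules).

    local_feats: (T, H, W, C) spatio-temporal feature map (local path)
    global_feat: (C,) holistic representation (global path)
    w_l2g, w_g2l: (C, C) projections for the two diffusion directions
    """
    # Global -> local diffusion: project the global vector and broadcast it
    # onto every spatio-temporal location of the local feature map.
    new_local = np.maximum(local_feats + global_feat @ w_g2l, 0.0)  # ReLU

    # Local -> global diffusion: pool the updated local features over all
    # spatio-temporal positions and fuse them into the global path.
    pooled = new_local.mean(axis=(0, 1, 2))  # global average pooling
    new_global = np.maximum(global_feat + pooled @ w_l2g, 0.0)

    return new_local, new_global


# Toy usage: 4 frames of 8x8 feature maps with 16 channels.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8, 16))
g = rng.standard_normal(16)
w_l2g = rng.standard_normal((16, 16)) * 0.1
w_g2l = rng.standard_normal((16, 16)) * 0.1
x2, g2 = lgd_block(x, g, w_l2g, w_g2l)
print(x2.shape, g2.shape)  # shapes of both paths are preserved
```

Stacking such blocks lets the local path keep fine-grained spatio-temporal detail while the global path accumulates holistic context, with each block exchanging information between the two.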