Excitation Backprop for RNNs
Deep models are state-of-the-art for many vision tasks including video action
recognition and video captioning. Models are trained to caption or classify
activity in videos, but little is known about the evidence used to make such
decisions. Grounding decisions made by deep networks has been studied in
spatial visual content, giving more insight into model predictions for images.
However, such studies are relatively lacking for models of spatiotemporal
visual content - videos. In this work, we devise a formulation that
simultaneously grounds evidence in space and time, in a single pass, using
top-down saliency. We visualize the spatiotemporal cues that contribute to a
deep model's classification/captioning output using the model's internal
representation. Based on these spatiotemporal cues, we are able to localize
segments within a video that correspond with a specific action, or phrase from
a caption, without explicitly optimizing/training for these tasks.
Comment: CVPR 2018 Camera Ready Version
Collaborative Spatio-temporal Feature Learning for Video Action Recognition
Spatio-temporal feature learning is of central importance for action
recognition in videos. Existing deep neural network models either learn spatial
and temporal features independently (C2D) or jointly with unconstrained
parameters (C3D). In this paper, we propose a novel neural operation which
encodes spatio-temporal features collaboratively by imposing a weight-sharing
constraint on the learnable parameters. In particular, we perform 2D
convolutions along three orthogonal views of volumetric video data, which learn
spatial appearance and temporal motion cues, respectively. By sharing the
convolution kernels of different views, spatial and temporal features are
collaboratively learned and thus benefit from each other. The complementary
features are subsequently fused by a weighted summation whose coefficients are
learned end-to-end. Our approach achieves state-of-the-art performance on
large-scale benchmarks and won 1st place in the Moments in Time Challenge
2018. Moreover, based on the learned coefficients of different views, we are
able to quantify the contributions of spatial and temporal features. This
analysis sheds light on interpretability of the model and may also guide the
future design of algorithms for video recognition.
Comment: CVPR 2019