Compressed Video Action Recognition
Training robust deep video representations has proven to be much more
challenging than learning deep image representations. This is in part due to
the enormous size of raw video streams and the high temporal redundancy; the
true and interesting signal is often drowned in too much irrelevant data.
Motivated by the fact that video compression (using H.264, HEVC, etc.) reduces
this superfluous information by up to two orders of magnitude, we propose to
train a deep network directly on the compressed video.
This representation has a higher information density, and we found the
training to be easier. In addition, the signals in a compressed video provide
free, albeit noisy, motion information. We propose novel techniques to use them
effectively. Our approach is about 4.6 times faster than Res3D and 2.7 times
faster than ResNet-152. On the task of action recognition, our approach
outperforms all the other methods on the UCF-101, HMDB-51, and Charades
datasets.
Comment: CVPR 2018 (selected for spotlight presentation).
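The "free, albeit noisy, motion information" refers to the motion vectors and residuals already present in the compressed stream. A minimal PyTorch sketch of the core idea, under the assumption of a late-fusion design: feed each compressed-stream modality to its own CNN and average the class scores. The backbone choices, input channel counts, and equal fusion weights here are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class CompressedVideoNet(nn.Module):
    """Sketch: one CNN per compressed-stream modality, fused by score averaging."""

    def __init__(self, num_classes: int):
        super().__init__()
        # I-frames are full RGB images.
        self.iframe_net = models.resnet18(num_classes=num_classes)
        # Motion vectors are 2-channel (dx, dy) maps, so the first conv is
        # replaced to accept 2 input channels (an assumption of this sketch).
        self.mv_net = models.resnet18(num_classes=num_classes)
        self.mv_net.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2,
                                      padding=3, bias=False)
        # Residuals are RGB-like difference images.
        self.residual_net = models.resnet18(num_classes=num_classes)

    def forward(self, iframe, motion_vectors, residual):
        # Late score fusion; equal weighting is an assumption.
        return (self.iframe_net(iframe)
                + self.mv_net(motion_vectors)
                + self.residual_net(residual)) / 3.0

# Example with random inputs (e.g. 101 classes for UCF-101).
model = CompressedVideoNet(num_classes=101)
scores = model(torch.randn(1, 3, 224, 224),   # I-frame
               torch.randn(1, 2, 224, 224),   # motion vectors (dx, dy)
               torch.randn(1, 3, 224, 224))   # residual
```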
Action Recognition by Hierarchical Mid-level Action Elements
Realistic videos of human actions exhibit rich spatiotemporal structures at
multiple levels of granularity: an action can always be decomposed into
multiple finer-grained elements in both space and time. To capture this
intuition, we propose to represent videos by a hierarchy of mid-level action
elements (MAEs), where each MAE corresponds to an action-related spatiotemporal
segment in the video. We introduce an unsupervised method to generate this
representation from videos. Our method is capable of distinguishing
action-related segments from background segments and representing actions at
multiple spatiotemporal resolutions. Given a set of spatiotemporal segments
generated from the training data, we introduce a discriminative clustering
algorithm that automatically discovers MAEs at multiple levels of granularity.
We develop structured models that capture a rich set of spatial, temporal and
hierarchical relations among the segments, where the action label and multiple
levels of MAE labels are jointly inferred. The proposed model achieves
state-of-the-art performance in multiple action recognition benchmarks.
Moreover, we demonstrate the effectiveness of our model in real-world
applications such as action recognition in large-scale untrimmed videos and
action parsing.
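A minimal sketch of the discriminative-clustering step, assuming each spatiotemporal segment has been summarized as a fixed-length descriptor. The alternation below (cluster by similarity, then refine assignments with linear classifiers) is a generic instance of discriminative clustering, not the paper's exact algorithm; `discriminative_clustering` and its parameters are hypothetical names for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def discriminative_clustering(features: np.ndarray, k: int, iters: int = 5):
    """Alternate k-means initialization with discriminative refinement."""
    # Initialize clusters by feature similarity.
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(features)
    for _ in range(iters):
        # Train one-vs-rest linear classifiers on the current assignments,
        # then reassign each segment to its best-scoring cluster. Segments
        # that linear classifiers cannot separate migrate between clusters,
        # so surviving clusters stay discriminative.
        clf = LinearSVC(C=1.0).fit(features, labels)
        labels = clf.predict(features)
    return labels

# Example: group 500 segments, each a 128-D descriptor, into 10 candidate MAEs.
labels = discriminative_clustering(np.random.randn(500, 128), k=10)
```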
Two-Stream Action Recognition-Oriented Video Super-Resolution
We study the video super-resolution (SR) problem for facilitating video
analytics tasks, e.g. action recognition, instead of for visual quality. The
popular action recognition methods based on convolutional networks, exemplified
by two-stream networks, are not directly applicable on video of low spatial
resolution. This can be remedied by performing video SR prior to recognition,
which motivates us to improve the SR procedure for recognition accuracy.
Tailored for two-stream action recognition networks, we propose two video SR
methods for the spatial and temporal streams respectively. On the one hand, we
observe that regions with action are more important to recognition, and we
propose an optical-flow guided weighted mean-squared-error loss for our
spatial-oriented SR (SoSR) network to emphasize the reconstruction of moving
objects. On the other hand, we observe that existing video SR methods incur
temporal discontinuity between frames, which also worsens the recognition
accuracy, and we propose a siamese network for our temporal-oriented SR (ToSR)
training that emphasizes the temporal continuity between consecutive frames. We
perform experiments using two state-of-the-art action recognition networks and
two well-known datasets, UCF101 and HMDB51. Results demonstrate the
effectiveness of our proposed SoSR and ToSR in improving recognition accuracy.
Comment: Accepted to ICCV 2019. Code:
https://github.com/AlanZhang1995/TwoStreamS
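A minimal PyTorch sketch of the two training losses described above. The exact weighting function, distance terms, and function names are assumptions for illustration, not the released implementation.

```python
import torch
import torch.nn.functional as F

def flow_weighted_mse(sr, hr, flow):
    """SoSR-style loss: weight each pixel's squared error by motion magnitude,
    so moving regions (which matter most for recognition) dominate."""
    weight = 1.0 + flow.norm(dim=1, keepdim=True)   # (B, 1, H, W)
    return (weight * (sr - hr) ** 2).mean()

def temporal_continuity_loss(sr_t, sr_t1, hr_t, hr_t1):
    """ToSR-style siamese term: the frame-to-frame change of the SR pair
    should match that of the ground-truth pair, discouraging flicker."""
    return F.mse_loss(sr_t1 - sr_t, hr_t1 - hr_t)

# Example shapes: batch of 4 RGB frames at 64x64; flow has 2 channels.
sr, hr = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)
flow = torch.randn(4, 2, 64, 64)
loss = flow_weighted_mse(sr, hr, flow)
```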