Generic Tubelet Proposals for Action Localization
We develop a novel framework for action localization in videos. We propose
the Tube Proposal Network (TPN), which can generate generic, class-independent,
video-level tubelet proposals in videos. The generated tubelet proposals can be
utilized in various video analysis tasks, including recognizing and localizing
actions in videos. In particular, we integrate these generic tubelet proposals
into a unified temporal deep network for action classification. Compared with
other methods, our generic tubelet proposal method is accurate, general, and is
fully differentiable under a smooth L1 loss function. We demonstrate the
performance of our algorithm on the standard UCF-Sports, J-HMDB21, and UCF-101
datasets. Our class-independent TPN outperforms other tubelet generation
methods, and our unified temporal deep network achieves state-of-the-art
localization results on all three datasets.
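The smooth L1 loss named in the abstract is the standard Huber-style loss widely used for box regression; the paper does not give its exact form, so the following is a minimal sketch of the common variant (with an assumed transition point beta=1.0), quadratic near zero and linear beyond:

```python
import numpy as np

def smooth_l1(x, beta=1.0):
    """Smooth L1 loss: 0.5*x^2/beta for |x| < beta, else |x| - 0.5*beta.

    Quadratic near zero (stable gradients for small errors) and linear
    for large errors (robust to outliers), which keeps the tubelet
    regression fully differentiable.
    """
    x = np.asarray(x, dtype=float)
    absx = np.abs(x)
    return np.where(absx < beta, 0.5 * absx ** 2 / beta, absx - 0.5 * beta)
```

The linear tail is what makes this preferable to plain L2 for proposal regression: a single badly localized tubelet does not dominate the gradient.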
DAP3D-Net: Where, What and How Actions Occur in Videos?
Action parsing in videos with complex scenes is an interesting but
challenging task in computer vision. In this paper, we propose a generic 3D
convolutional neural network in a multi-task learning manner for effective Deep
Action Parsing (DAP3D-Net) in videos. Particularly, in the training phase,
action localization, classification and attributes learning can be jointly
optimized on our appearance-motion data via DAP3D-Net. For an upcoming test
video, we can describe each individual action in the video simultaneously as:
Where the action occurs, What the action is and How the action is performed. To
well demonstrate the effectiveness of the proposed DAP3D-Net, we also
contribute a new Numerous-category Aligned Synthetic Action dataset, i.e.,
NASA, which consists of 200,000 action clips of more than 300 categories and
with 33 pre-defined action attributes in two hierarchical levels (i.e.,
low-level attributes of basic body part movements and high-level attributes
related to action motion). We learn DAP3D-Net using the NASA dataset and then
evaluate it on our collected Human Action Understanding (HAU) dataset.
Experimental results show that our approach can accurately localize, categorize
and describe multiple actions in realistic videos.
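The abstract describes jointly optimizing localization ("Where"), classification ("What"), and attribute learning ("How") in one multi-task objective. The paper does not specify the individual losses or their weights, so the sketch below uses conventional choices (L2 for boxes, cross-entropy for the class, binary cross-entropy for attribute bits) with illustrative weights:

```python
import numpy as np

def box_l2(pred_box, gt_box):
    """Localization ("Where"): squared error on tube coordinates."""
    return float(np.sum((np.asarray(pred_box, float) - np.asarray(gt_box, float)) ** 2))

def cross_entropy(class_probs, label):
    """Classification ("What"): negative log-likelihood of the true class."""
    return float(-np.log(class_probs[label]))

def attr_bce(pred, target):
    """Attribute learning ("How"): mean binary cross-entropy over attribute bits."""
    pred = np.clip(np.asarray(pred, float), 1e-7, 1 - 1e-7)
    target = np.asarray(target, float)
    return float(np.mean(-(target * np.log(pred) + (1 - target) * np.log(1 - pred))))

def joint_loss(loc, cls, attr, weights=(1.0, 1.0, 0.5)):
    """Weighted sum of the three task losses (weights are assumptions,
    not values from the paper)."""
    return weights[0] * loc + weights[1] * cls + weights[2] * attr
```

At test time the three heads then read out the answers to Where, What, and How for each action simultaneously, as the abstract describes.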
VideoCapsuleNet: A Simplified Network for Action Detection
The recent advances in Deep Convolutional Neural Networks (DCNNs) have shown
extremely good results for video human action classification; however, action
detection is still a challenging problem. The current action detection
approaches follow a complex pipeline which involves multiple tasks such as tube
proposals, optical flow, and tube classification. In this work, we present a
more elegant solution for action detection based on the recently developed
capsule network. We propose a 3D capsule network for videos, called
VideoCapsuleNet: a unified network for action detection which can jointly
perform pixel-wise action segmentation along with action classification. The
proposed network is a generalization of capsule network from 2D to 3D, which
takes a sequence of video frames as input. The 3D generalization drastically
increases the number of capsules in the network, making capsule routing
computationally expensive. We introduce capsule-pooling in the convolutional
capsule layer to address this issue, making the voting algorithm tractable.
The routing-by-agreement in the network inherently models the action
representations and various action characteristics are captured by the
predicted capsules. This inspired us to utilize the capsules for action
localization; the class-specific capsules predicted by the network are used
to determine a pixel-wise localization of actions. The localization is further
improved by parameterized skip connections with the convolutional capsule
layers, and the network is trained end-to-end with both classification and
localization losses. The proposed network achieves state-of-the-art performance on
multiple action detection datasets including UCF-Sports, J-HMDB, and UCF-101
(24 classes) with an impressive ~20% improvement on UCF-101 and ~15%
improvement on J-HMDB in terms of v-mAP scores.
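The motivation for capsule-pooling is that generalizing capsules from 2D to 3D multiplies the number of votes entering routing-by-agreement. The abstract does not detail the operation, so the following is a simplified sketch of the idea under an assumed layout: average capsule pose vectors over non-overlapping spatial windows so routing sees one vote per window instead of one per position:

```python
import numpy as np

def capsule_pool(capsules, k=2):
    """Illustrative capsule-pooling sketch (layout and pooling rule are
    assumptions, simplified from the paper).

    capsules: array of shape (H, W, num_types, pose_dim).
    Averages pose vectors over non-overlapping k x k spatial windows,
    reducing the number of votes fed to routing by a factor of k*k.
    """
    H, W, T, D = capsules.shape
    assert H % k == 0 and W % k == 0, "spatial dims must be divisible by k"
    # Split each spatial axis into (blocks, k) and average within each block.
    pooled = capsules.reshape(H // k, k, W // k, k, T, D).mean(axis=(1, 3))
    return pooled  # shape (H // k, W // k, num_types, pose_dim)
```

For a video network the same idea extends to a temporal axis as well; the point is that agreement is computed over far fewer, pre-averaged votes, which is what keeps routing tractable at 3D scale.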
- …