
    Action Recognition with Improved Trajectories

    Recently, dense trajectories were shown to be an efficient video representation for action recognition, achieving state-of-the-art results on a variety of datasets. This paper improves their performance by taking camera motion into account to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are then used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches; to improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow, which significantly improves motion-based descriptors such as HOF and MBH. Experimental results on four challenging action datasets (Hollywood2, HMDB51, Olympic Sports, and UCF50) significantly outperform the current state of the art.
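
    A minimal sketch of the homography step described above, in Python with OpenCV. Everything here is illustrative rather than the paper's exact setup: SURF requires an opencv-contrib build, the Hessian, ratio-test, and RANSAC thresholds are common defaults, and the dense-flow matches and human-detector masking are omitted.

```python
import cv2
import numpy as np

def estimate_camera_homography(prev_gray, curr_gray):
    """Match SURF keypoints between consecutive frames and fit a homography with RANSAC."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # needs opencv-contrib
    kp1, des1 = surf.detectAndCompute(prev_gray, None)
    kp2, des2 = surf.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)
    return H, inliers

def cancel_camera_motion(flow, H):
    """Subtract the homography-induced (camera) flow from a dense optical flow field."""
    h, w = flow.shape[:2]
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    grid = np.stack([xs, ys], axis=-1).astype(np.float32)   # pixel coordinates, (h, w, 2)
    warped = cv2.perspectiveTransform(grid.reshape(-1, 1, 2), H).reshape(h, w, 2)
    return flow - (warped - grid)                            # residual (foreground) motion
```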

    A robust and efficient video representation for action recognition

    This paper introduces a state-of-the-art video representation and applies it to efficient action recognition and detection. We first propose to improve the popular dense trajectory features by explicit camera motion estimation. More specifically, we extract feature point matches between frames using SURF descriptors and dense optical flow. The matches are used to estimate a homography with RANSAC. To improve the robustness of the homography estimation, a human detector is employed to remove outlier matches from the human body, since human motion is not constrained by the camera. Trajectories consistent with the homography are considered to be due to camera motion and are thus removed. We also use the homography to cancel out camera motion from the optical flow, which results in a significant improvement in the motion-based HOF and MBH descriptors. We further explore the recent Fisher vector as an alternative feature encoding to the standard bag-of-words histogram, and consider different ways to include spatial layout information in these encodings. We present a large and varied set of evaluations, considering (i) classification of short basic actions on six datasets, (ii) localization of such actions in feature-length movies, and (iii) large-scale recognition of complex events. We find that our improved trajectory features significantly outperform previous dense trajectories, and that Fisher vectors are superior to bag-of-words encodings for video recognition tasks. In all three tasks, we show substantial improvements over the state-of-the-art results.
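
    The Fisher vector encoding compared here against bag-of-words can be sketched compactly over a diagonal-covariance GMM. The version below is an illustration, not the paper's pipeline: the GMM is fit with scikit-learn, the component count is a placeholder, and the signed-square-root plus L2 normalization is the commonly used variant.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descriptors, gmm):
    """Encode a set of local descriptors (N x D) as a 2*K*D Fisher vector."""
    N, _ = descriptors.shape
    gamma = gmm.predict_proba(descriptors)                     # (N, K) soft assignments
    w, mu, var = gmm.weights_, gmm.means_, gmm.covariances_    # diagonal covariances
    sigma = np.sqrt(var)

    diff = (descriptors[:, None, :] - mu[None, :, :]) / sigma[None, :, :]  # (N, K, D)
    g_mu = (gamma[..., None] * diff).sum(axis=0) / (N * np.sqrt(w)[:, None])
    g_sigma = (gamma[..., None] * (diff**2 - 1)).sum(axis=0) / (N * np.sqrt(2 * w)[:, None])

    fv = np.concatenate([g_mu.ravel(), g_sigma.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                     # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)                   # L2 normalization

# Usage sketch: fit a small GMM on training descriptors, then encode each video.
# gmm = GaussianMixture(n_components=64, covariance_type='diag').fit(train_descs)
# video_fv = fisher_vector(video_descs, gmm)
```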

    Encoding Feature Maps of CNNs for Action Recognition

    CVPR International Workshop and Competition on Action Recognition with a Large Number of Classes.
    We describe our approach for action classification in the THUMOS Challenge 2015. Our approach is based on two types of features: improved dense trajectories and CNN features. For trajectory features, we extract HOG, HOF, MBHx, and MBHy descriptors and apply Fisher vector encoding. For CNN features, we use a recent deep CNN model, VGG19, to capture appearance, and apply VLAD encoding to pool the convolutional feature maps, which performs better than average pooling of feature maps and fully connected activation features. After concatenating the two, we train a linear SVM classifier for each class in a one-vs-all scheme.
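
    A minimal sketch of VLAD aggregation over convolutional feature-map columns, assuming the VGG19 maps have already been reshaped into an (N, D) array of local descriptors per video; the k-means codebook, its size, and the intra- plus global L2 normalization are illustrative assumptions rather than the challenge entry's exact settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def vlad_encode(descriptors, kmeans):
    """Aggregate local descriptors (N x D) into a K*D VLAD vector."""
    centers = kmeans.cluster_centers_                 # (K, D) visual words
    assignments = kmeans.predict(descriptors)         # nearest center per descriptor
    K, D = centers.shape
    vlad = np.zeros((K, D))
    for k in range(K):
        members = descriptors[assignments == k]
        if len(members):
            vlad[k] = (members - centers[k]).sum(axis=0)   # accumulate residuals
    vlad /= (np.linalg.norm(vlad, axis=1, keepdims=True) + 1e-12)  # intra-normalization
    vlad = vlad.ravel()
    return vlad / (np.linalg.norm(vlad) + 1e-12)      # global L2 normalization

# Usage sketch: build the codebook on training feature-map columns, then encode.
# kmeans = KMeans(n_clusters=64).fit(train_columns)
# video_code = vlad_encode(video_columns, kmeans)
```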

    A Revisit of Action Detection using Improved Trajectories

    In this paper, we revisit trajectory-based action detection in a potent and non-uniform way. Improved trajectories have proven to be an effective model for motion description in action recognition. In temporal action localization, however, this approach is not efficiently exploited. Trajectory features extracted from uniform video segments result in significant performance degradation for two reasons: (a) uniform segmentation often adds a significant amount of noise to the main action, and (b) partial actions can negatively impact the classifier's performance. Since uniform video segmentation appears insufficient for this task, we propose a two-step supervised non-uniform segmentation, performed in an online manner. Action proposals are generated using either 2D or 3D data, so action classification can be performed directly on them using the standard improved trajectories approach. We experimentally compare our method with other approaches and show improved performance on a challenging online action detection dataset.
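
    The two-step supervised segmentation itself cannot be reconstructed from the abstract alone; as a generic illustration of online, non-uniform segmentation, the sketch below opens and closes proposals with hysteresis thresholds on a hypothetical per-frame actionness score, in contrast to cutting the video into fixed-length segments.

```python
def online_proposals(scores, t_on=0.6, t_off=0.4):
    """Yield (start, end) frame indices of non-uniform action proposals.

    scores: iterable of per-frame actionness scores in [0, 1] (a stand-in
    for whatever supervised scorer produces them); t_on/t_off form a
    hysteresis band so brief score dips do not split a proposal.
    """
    start = None
    n = 0
    for t, s in enumerate(scores):
        n = t + 1
        if start is None and s >= t_on:
            start = t                      # score rose: open a proposal
        elif start is not None and s < t_off:
            yield (start, t)               # score fell: close the proposal
            start = None
    if start is not None:
        yield (start, n)                   # action still running at video end

# Usage sketch: proposals = list(online_proposals(per_frame_actionness))
```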