    Order-aware convolutional pooling for video based action recognition

    Most video-based action recognition approaches create the video-level representation by temporally pooling the features extracted at every frame. The pooling methods they adopt, however, usually ignore, completely or partially, the dynamic information contained in the temporal domain, which may undermine the discriminative power of the resulting video representation, since the order of a video sequence can unveil the evolution of a specific event or action. To overcome this drawback and explore the importance of incorporating temporal order information, in this paper we propose a novel temporal pooling approach to aggregate the frame-level features. Inspired by the capacity of Convolutional Neural Networks (CNNs) to exploit the internal structure of images for information abstraction, we propose to apply the temporal convolution operation to the frame-level representations to extract the dynamic information. However, directly applying this idea to the original high-dimensional features would result in parameter explosion. To handle this issue, we propose to treat the temporal evolution of the feature value at each feature dimension as a 1D signal and to learn a unique convolutional filter bank for each such signal. Experiments on three challenging video-based action recognition datasets, HMDB51, UCF101, and Hollywood2, demonstrate that the proposed method is superior to conventional pooling methods.

    Peng Wang, Lingqiao Liu, Chunhua Shen, Heng Tao Shen
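    The per-dimension temporal convolution described in the abstract can be sketched with a grouped 1D convolution. The code below is a minimal illustration, not the authors' implementation: the module name, filter count, kernel size, and the max-pooling readout over time are assumptions made for the example. PyTorch's groups argument to nn.Conv1d gives each feature dimension its own independent filter bank, which avoids the parameter explosion of a full cross-dimension temporal convolution.

# Minimal sketch (assumed design, not the authors' code) of per-dimension
# temporal convolutional pooling. Each of the D feature dimensions is treated
# as an independent 1D signal over T frames.
import torch
import torch.nn as nn

class OrderAwarePooling(nn.Module):
    def __init__(self, feat_dim: int, n_filters: int = 4, kernel_size: int = 5):
        super().__init__()
        # groups=feat_dim -> one independent filter bank per feature dimension,
        # so parameters grow as D * n_filters * kernel_size instead of
        # D * D * kernel_size.
        self.conv = nn.Conv1d(
            in_channels=feat_dim,
            out_channels=feat_dim * n_filters,
            kernel_size=kernel_size,
            groups=feat_dim,
        )

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (batch, T, D) frame-level features
        x = frame_feats.transpose(1, 2)      # (batch, D, T)
        x = torch.relu(self.conv(x))         # (batch, D * n_filters, T')
        # Max-pool over the remaining temporal axis to obtain a fixed-size
        # video-level representation (an assumed readout for this sketch).
        return x.max(dim=-1).values          # (batch, D * n_filters)

# Usage: pool 32 frames of 512-d features into one video-level vector.
pool = OrderAwarePooling(feat_dim=512)
video_vec = pool(torch.randn(8, 32, 512))   # -> shape (8, 2048)

    Because the filters slide along the temporal axis, their responses depend on the order of the frames, which is exactly the dynamic information that order-agnostic pooling (e.g., plain average or max pooling over frames) discards.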