
    Action recognition using the Rf Transform on optical flow images

    The objective of this paper is the automatic recognition of human actions in video sequences. The use of spatio-temporal features for action recognition has become very popular in the recent literature. Instead of extracting the spatio-temporal features from the raw video sequence, some authors propose to first project the sequence onto a single template. As a contribution, we propose the use of several variants of the R transform for projecting the image sequences to templates. The R transform projects the whole sequence onto a single image, retaining information about movement direction and magnitude. Spatio-temporal features are extracted from the template, combined using a bag-of-words paradigm, and finally fed to an SVM for action classification. The presented method is shown to improve the state-of-the-art results on the standard Weizmann action dataset. (Peer reviewed; postprint, published version)
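The R transform underlying the templates above reduces each Radon projection to one number per angle, R(θ) = Σ_ρ ℛ²(ρ, θ), which makes it translation invariant and turns image rotations into circular shifts of R. A minimal pure-NumPy sketch of that idea; the function name and the pixel-binning discretisation are my own, not the paper's:

```python
import numpy as np

def r_transform(img, n_angles=180):
    """Discrete R transform: for each angle, form the Radon projection of
    the image by binning pixels on their signed distance from the centre
    line, then sum the squared projection values."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # centre the coordinates so projections are taken about the image centre
    xs = xs - (w - 1) / 2.0
    ys = ys - (h - 1) / 2.0
    vals = img.ravel().astype(float)
    half_diag = int(np.ceil(np.hypot(h, w) / 2)) + 1
    out = np.empty(n_angles)
    for i, theta in enumerate(np.linspace(0, np.pi, n_angles, endpoint=False)):
        # signed offset of every pixel along the projection direction
        rho = (xs * np.cos(theta) + ys * np.sin(theta)).ravel()
        idx = np.round(rho).astype(int) + half_diag      # shift to >= 0
        proj = np.bincount(idx, weights=vals, minlength=2 * half_diag + 1)
        out[i] = np.sum(proj ** 2)   # R(theta) = sum_rho Radon(rho, theta)^2
    return out
```

In the paper's setting the input would be an optical-flow component image rather than a raw frame, and several functional variants of this transform are compared.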

    3D Cylindrical Trace Transform based feature extraction for effective human action classification

    Human action recognition is currently one of the most active areas in pattern recognition and machine intelligence. Its applications range from console and exertion gaming and human-computer interaction to automated surveillance and assistive environments. In this paper, we present a novel feature extraction method for action recognition, extending the capabilities of the Trace transform to the 3D domain. We define the notion of a 3D form of the Trace transform on discrete volumes extracted from spatio-temporal image sequences. On a second level, we propose combining the novel transform, named the 3D Cylindrical Trace Transform, with Selective Spatio-Temporal Interest Points in a feature extraction scheme called Volumetric Triple Features, which captures the valuable geometrical distribution of interest points in spatio-temporal sequences and gives prominence to their action-discriminant geometrical correlations. The technique provides noise-robust, distortion-invariant and temporally sensitive features for the classification of human actions. Experiments on several challenging action recognition datasets produced strong results, indicating the efficiency of the proposed transform and of the overall scheme for this task.
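The classical 2D Trace transform, which this paper lifts to 3D, generalises the Radon transform by applying an arbitrary trace functional (sum, max, median, ...) along each line instead of always summing. A hedged 2D illustration of that idea only; the discretisation and names are mine, not the paper's 3D cylindrical construction:

```python
import numpy as np

def trace_transform(img, functional=np.max, n_angles=90):
    """2D Trace transform sketch: group pixels by the line (angle, offset)
    they fall on and apply `functional` to each group.  With
    functional=np.sum this reduces to a discrete Radon transform."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xs = xs - (w - 1) / 2.0
    ys = ys - (h - 1) / 2.0
    vals = img.ravel().astype(float)
    half_diag = int(np.ceil(np.hypot(h, w) / 2)) + 1
    out = np.zeros((n_angles, 2 * half_diag + 1))
    for i, theta in enumerate(np.linspace(0, np.pi, n_angles, endpoint=False)):
        rho = (xs * np.cos(theta) + ys * np.sin(theta)).ravel()
        idx = np.round(rho).astype(int) + half_diag
        # sort pixels by line index, then split into one group per line
        order = np.argsort(idx, kind="stable")
        groups = np.split(vals[order], np.flatnonzero(np.diff(idx[order])) + 1)
        for j, g in zip(np.unique(idx), groups):
            out[i, j] = functional(g)
    return out
```

Swapping the functional is what produces the different "trace" features the family is known for; the paper additionally traces lines over cylindrical surfaces of a spatio-temporal volume.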

    Temporal segmentation of human actions in video sequences

    Most published works on action recognition assume that the action sequences have been previously segmented in time, that is, the action to be recognized starts with the first frame of the sequence and ends with the last one. However, temporal segmentation of actions in sequences is not an easy task and is always prone to errors. In this paper, we present a new technique to automatically extract human actions from a video sequence. Our approach makes several contributions. First of all, we use a projection template scheme and find spatio-temporal features and descriptors within the projected surface, rather than extracting them from the whole sequence. For projecting the sequence we use a variant of the R transform, which has never been used before for temporal action segmentation. Instead of projecting the original video sequence, we project its optical flow components, preserving important information about action motion. We test our method on a publicly available action dataset, and the results show that it segments human actions very well compared with state-of-the-art methods. (Peer reviewed; postprint, author's final draft)
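As a loose illustration of what temporal action segmentation produces (not the paper's R-transform-based method), a simple baseline thresholds per-frame optical-flow energy and keeps contiguous runs of high-motion frames as action segments. The function name, smoothing window and threshold choice here are all hypothetical:

```python
import numpy as np

def segment_actions(motion_energy, thresh=None, min_len=3):
    """Toy temporal segmentation: smooth the per-frame motion energy,
    threshold it, and return (start, end) frame ranges of the
    contiguous above-threshold runs that are long enough."""
    e = np.convolve(motion_energy, np.ones(3) / 3, mode="same")  # smooth
    if thresh is None:
        thresh = 0.5 * (e.min() + e.max())   # midpoint threshold
    active = e > thresh
    # rising (+1) and falling (-1) edges of the active mask
    edges = np.diff(active.astype(int))
    starts = list(np.flatnonzero(edges == 1) + 1)
    ends = list(np.flatnonzero(edges == -1) + 1)
    if active[0]:
        starts.insert(0, 0)          # sequence begins mid-action
    if active[-1]:
        ends.append(len(active))     # sequence ends mid-action
    return [(s, t) for s, t in zip(starts, ends) if t - s >= min_len]
```

The paper's contribution is to make the per-frame evidence far more informative than raw flow energy, by projecting the optical-flow components with an R-transform variant before extracting features.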

    Radar Human Motion Recognition Using Motion States and Two-Way Classifications

    We perform classification of activities of daily living (ADL) using a frequency-modulated continuous-wave (FMCW) radar. In particular, we consider contiguous motions that are inseparable in time. Both the micro-Doppler signature and the range map are used to determine transitions from translation (walking) to in-place motions and vice versa, as well as to provide motion onset and offset times. The possible classes of activities after and prior to the translation motion can be handled separately by forward and backward classifiers. The paper describes ADL in terms of states and transitioning actions, and sets out a framework for dealing with separable and inseparable contiguous motions. It is shown that considering only the physically possible classes of motions stemming from the current motion state improves classification rates compared to incorporating all ADL at any given time.
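The state-constrained idea in this abstract, restricting the candidate classes to those physically reachable from the current motion state before picking the best classifier score, can be sketched as follows. The class names and the transition table are hypothetical, not taken from the paper:

```python
# Which activity classes can physically follow each motion state.
# These sets are illustrative placeholders, not the paper's taxonomy.
REACHABLE = {
    "walking": {"walking", "sitting_down", "bending", "standing"},
    "in_place": {"in_place", "standing_up", "walking"},
}

def constrained_classify(scores, current_state):
    """Drop classes that cannot follow `current_state`, then return the
    highest-scoring remaining class.  `scores` maps class name -> score."""
    allowed = {c: s for c, s in scores.items() if c in REACHABLE[current_state]}
    return max(allowed, key=allowed.get)
```

Even when an unreachable class (e.g. "standing_up" while already walking) has the highest raw score, the constraint forces the decision onto a physically plausible class, which is the mechanism behind the reported accuracy gain.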

    Fusing R features and local features with context-aware kernels for action recognition

    The performance of action recognition in video sequences depends significantly on the representation of actions and on the similarity measurement between the representations. In this paper, we combine two kinds of features extracted from spatio-temporal interest points with context-aware kernels for action recognition. For the action representation, local cuboid features extracted around interest points are very popular in a Bag of Visual Words (BOVW) model. Such representations, however, ignore potentially valuable information about the global spatio-temporal distribution of interest points. We propose a new global feature to capture the detailed geometrical distribution of interest points. It is calculated by applying the 3D R transform, defined as an extended 3D discrete Radon transform, followed by a two-directional two-dimensional principal component analysis. For the similarity measurement, we model a video set as an optimized probabilistic hypergraph and propose a context-aware kernel to measure high-order relationships among videos. The context-aware kernel is more robust to noise and outliers in the data than the traditional context-free kernel, which considers only pairwise relationships between videos. The hyperedges of the hypergraph are constructed based on a learnt Mahalanobis distance metric, so that disturbing information from other classes is excluded from each hyperedge. Finally, a multiple kernel learning algorithm is designed by integrating l2-norm regularization into a linear SVM classifier to fuse the R feature and the BOVW representation for action recognition. Experimental results on several datasets demonstrate the effectiveness of the proposed approach.
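The local-feature branch described above is the standard Bag of Visual Words pipeline: assign each cuboid descriptor to its nearest codeword and histogram the assignments. A minimal sketch of that step, assuming the codebook has already been learnt (e.g. by k-means); function and variable names are mine:

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Bag-of-Visual-Words encoding: map each local descriptor (row of
    `descriptors`) to its nearest codeword (row of `codebook`) and
    return the L1-normalised word-count histogram."""
    # squared Euclidean distance between every descriptor and every codeword
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    words = d2.argmin(axis=1)                     # nearest-codeword index
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

In the paper this histogram is one of two representations; it is fused with the 3D-R-transform global feature through kernel combination, with the kernel weights learnt jointly with the SVM.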