
    Learning Dictionaries of Sparse Codes of 3D Movements of Body Joints for Real-Time Human Activity Understanding

    Real-time human activity recognition is essential for human-robot interaction in assisted healthy independent living. Most previous work in this area has been performed on traditional two-dimensional (2D) videos, using both global and local methods. Since 2D videos are sensitive to changes in lighting conditions, viewing angle, and scale, researchers have begun to explore applications of 3D information in human activity understanding in recent years. Unfortunately, features that work well on 2D videos usually do not perform well on 3D videos, and there is no consensus on which 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling, and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected onto the dictionaries, and a set of sparse histograms of the projection coefficients is constructed as the feature representation of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications.
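
    A minimal sketch of the three-step pipeline described in the abstract, assuming scikit-learn's FastICA and SVC as stand-ins for the paper's ICA and SVM implementations. The sub-volume length, stride, dictionary size, and histogram binning below are illustrative assumptions, not the authors' reported settings.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

def sample_subvolumes(joint_coords, length=16, stride=4):
    """Densely sample space-time sub-volumes from a (frames, joints*3)
    array of 3D joint coordinates and flatten each into a vector."""
    vols = [joint_coords[t:t + length].ravel()
            for t in range(0, len(joint_coords) - length + 1, stride)]
    return np.array(vols)

def learn_dictionary(subvolumes, n_words=64):
    """Step 1: learn a per-activity dictionary of sparse codes with ICA."""
    ica = FastICA(n_components=n_words, random_state=0)
    ica.fit(subvolumes)
    return ica

def encode_video(joint_coords, dictionaries, n_bins=20):
    """Step 2: project sub-volumes onto each activity's dictionary and
    build a histogram of the projection coefficients as the video feature."""
    vols = sample_subvolumes(joint_coords)
    feats = []
    for ica in dictionaries:
        coeffs = ica.transform(vols)  # projection coefficients
        hist, _ = np.histogram(coeffs, bins=n_bins, range=(-1, 1), density=True)
        feats.append(hist)
    return np.concatenate(feats)

# Step 3: feed the histograms to an SVM. `videos_per_activity`,
# `train_videos`, and `train_labels` are hypothetical inputs.
# dictionaries = [learn_dictionary(sample_subvolumes(v)) for v in videos_per_activity]
# X = np.array([encode_video(v, dictionaries) for v in train_videos])
# clf = SVC(kernel="linear").fit(X, train_labels)
```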

    Average accuracy as a function of the number of words in the dictionaries on the CAD-60 dataset.


    Performance of our model and other methods on the MSR Action 3D dataset.


    Three confusion matrices for AS1, AS2, AS3 in the MSR Daily Activity 3D dataset.


    Confusion matrices.

    (A) Confusion matrix of the model performance on the dataset of 15 scenes; the average accuracy is 82.3%. (B) Confusion matrix of the model performance on the dataset of 8 sports; the average accuracy is 85.8%. In both A and B, the values at the empty matrix elements are 0.
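
    A generic sketch (not the paper's code) of how an average accuracy like those reported in panels A and B can be read off a confusion matrix, assuming scikit-learn's confusion_matrix; `y_true` and `y_pred` are hypothetical label arrays.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def average_accuracy(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred)
    # Normalize each row so entries are per-class recognition rates;
    # class pairs never confused stay 0, like the empty elements noted above.
    cm = cm / cm.sum(axis=1, keepdims=True)
    return np.mean(np.diag(cm))  # mean of the diagonal = average accuracy
```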

    Average accuracy as a function of the number of frames of sub-volumes on the CAD-60 dataset.


    Values of the parameters of our method.


    Four activities in the CAD-60 dataset.

    First row: depth images; second row: joint trajectories.

    Coordinate samples from one video. Each column corresponds to one coordinate sub-volume sample.

    The “Sample index” axis indicates the indices of all sub-volume samples, and the “Coordinate index” axis is the row index of the matrix.