Multi-stage Factorized Spatio-Temporal Representation for RGB-D Action and Gesture Recognition
RGB-D action and gesture recognition remain an interesting topic in
human-centered scene understanding, primarily due to the multiple granularities
and large variation in human motion. Although many RGB-D based action and
gesture recognition approaches have demonstrated remarkable results by
utilizing highly integrated spatio-temporal representations across multiple
modalities (i.e., RGB and depth data), they still encounter several challenges.
Firstly, vanilla 3D convolution makes it hard to capture fine-grained motion
differences between local clips under different modalities. Secondly, the
intricate nature of highly integrated spatio-temporal modeling can lead to
optimization difficulties. Thirdly, duplicate and unnecessary information can
add complexity and complicate entangled spatio-temporal modeling. To address
the above issues, we propose an innovative heuristic architecture called
Multi-stage Factorized Spatio-Temporal (MFST) for RGB-D action and gesture
recognition. The proposed MFST model comprises a 3D Central Difference
Convolution Stem (CDC-Stem) module and multiple factorized spatio-temporal
stages. The CDC-Stem enriches fine-grained temporal perception, and the
multiple hierarchical spatio-temporal stages construct dimension-independent
higher-order semantic primitives. Specifically, the CDC-Stem module captures
bottom-level spatio-temporal features and passes them successively to the
following spatio-temporal factored stages to capture the hierarchical spatial
and temporal features through the Multi-Scale Convolution and Transformer
(MSC-Trans) hybrid block and Weight-shared Multi-Scale Transformer (WMS-Trans)
block. The seamless integration of these innovative designs results in a robust
spatio-temporal representation that outperforms state-of-the-art approaches on
RGB-D action and gesture recognition datasets.
Comment: ACM MM'2
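The central-difference idea behind the CDC-Stem can be illustrated with a minimal sketch. The toy implementation below is an assumption (the actual MFST module operates on multi-channel feature maps inside a deep network): a single-channel 3D volume is convolved, and a fraction `theta` of a central-difference term is blended with the vanilla response, following the common CDC formulation.

```python
import numpy as np

def cdc3d(x, kernel, theta=0.7):
    """Valid-mode 3D convolution with a central-difference term (toy sketch).

    Output(p) = sum_n w(n) * x(p+n) - theta * sum_n w(n) * x(center of patch)

    With theta=0 this reduces to a vanilla 3D convolution; with theta=1 it
    responds only to local intensity differences, which is what sharpens
    fine-grained temporal perception in the CDC idea.
    """
    kd, kh, kw = kernel.shape
    D, H, W = x.shape
    od, oh, ow = D - kd + 1, H - kh + 1, W - kw + 1
    cd, ch, cw = kd // 2, kh // 2, kw // 2  # index of the patch center
    wsum = kernel.sum()
    out = np.zeros((od, oh, ow))
    for d in range(od):
        for h in range(oh):
            for w in range(ow):
                patch = x[d:d + kd, h:h + kh, w:w + kw]
                vanilla = (patch * kernel).sum()
                center = patch[cd, ch, cw]
                out[d, h, w] = vanilla - theta * wsum * center
    return out
```

On a constant input the difference term cancels the vanilla term exactly when `theta=1`, confirming that the operator ignores flat regions and reacts only to motion-induced intensity changes.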
Action recognition in depth videos using nonparametric probabilistic graphical models
Action recognition involves automatically labelling videos that contain human motion with action classes. It has applications in diverse areas such as smart surveillance, human-computer interaction and content retrieval. The recent advent of depth sensing technology that produces depth image sequences has offered opportunities to solve the challenging action recognition problem. The depth images facilitate robust estimation of a human skeleton's 3D joint positions, and a high-level action can be inferred from a sequence of these joint positions.
A natural way to model a sequence of joint positions is to use a graphical model that describes probabilistic dependencies between the observed joint positions and some hidden state variables. A problem with these models is that the number of hidden states must be fixed a priori even though for many applications this number is not known in advance. This thesis proposes nonparametric variants of graphical models with the number of hidden states automatically inferred from data. The inference is performed in a full Bayesian setting by using the Dirichlet Process as a prior over the model's infinite dimensional parameter space.
This thesis describes three original constructions of nonparametric graphical models that are applied in the classification of actions in depth videos. Firstly, the action classes are represented by a Hidden Markov Model (HMM) with an unbounded number of hidden states. The formulation enables information sharing and discriminative learning of parameters. Secondly, a hierarchical HMM with an unbounded number of actions and poses is used to represent activities. The construction produces a simplified model for activity classification by using logistic regression to capture the relationship between action states and activity labels. Finally, the action classes are modelled by a Hidden Conditional Random Field (HCRF) with the number of intermediate hidden states learned from data. Tractable inference procedures based on Markov Chain Monte Carlo (MCMC) techniques are derived for all these constructions. Experiments with multiple benchmark datasets confirm the efficacy of the proposed approaches for action recognition.
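The Dirichlet Process prior that lets the number of hidden states grow with the data is commonly explained through its predictive form, the Chinese Restaurant Process: each new observation joins an existing state with probability proportional to that state's current count, or opens a fresh state with probability proportional to a concentration parameter alpha. The sketch below is only this predictive scheme, not the thesis's actual MCMC samplers; the alpha value and seeding are illustrative.

```python
import random

def crp(n, alpha, seed=0):
    """Sample a random partition of n items from the Chinese Restaurant Process.

    Returns (assignments, counts): assignments[i] is the state (table) of
    item i; counts[t] is the number of items assigned to state t. The number
    of states is not fixed in advance -- it grows roughly as alpha * log(n).
    """
    rng = random.Random(seed)
    counts = []       # occupancy of each existing state
    assignments = []
    for i in range(n):
        # Total unnormalized mass: i sitting customers + alpha for a new table.
        r = rng.uniform(0, i + alpha)
        acc = 0.0
        table = len(counts)  # default: open a new state
        for t, c in enumerate(counts):
            acc += c
            if r < acc:
                table = t    # join an existing state
                break
        if table == len(counts):
            counts.append(1)
        else:
            counts[table] += 1
        assignments.append(table)
    return assignments, counts
```

Larger alpha yields more states, which is how the "unbounded number of hidden states" is realized in practice: the model only instantiates as many states as the data demand.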
Multimodal Multipart Learning for Action Recognition in Depth Videos
The articulated and complex nature of human actions makes the task of action
recognition difficult. One approach to handling this complexity is dividing it into
the kinetics of body parts and analyzing the actions based on these partial
descriptors. We propose a joint sparse regression based learning method which
utilizes the structured sparsity to model each action as a combination of
multimodal features from a sparse set of body parts. To represent dynamics and
appearance of parts, we employ a heterogeneous set of depth and skeleton based
features. The proper structure of multimodal multipart features are formulated
into the learning framework via the proposed hierarchical mixed norm, to
regularize the structured features of each part and to apply sparsity between
them, in favor of a group feature selection. Our experimental results expose
the effectiveness of the proposed learning method in which it outperforms other
methods in all three tested datasets while saturating one of them by achieving
perfect accuracy
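Mixed-norm regularizers of this kind are typically handled with a proximal step that soft-thresholds each body part's feature block as a unit: a whole part's weights are either shrunk together or zeroed together, which is what produces group feature selection. The following is a simplified sketch of the group l2,1 proximal operator, not the paper's full hierarchical mixed norm; the grouping and the threshold `lam` are illustrative.

```python
import numpy as np

def prox_group_l21(w, groups, lam):
    """Block soft-thresholding for the group l2,1 norm (toy sketch).

    For each index group g (e.g. the features of one body part), shrink the
    block w[g] toward zero by lam in Euclidean norm; blocks whose norm is
    below lam are zeroed entirely, deselecting that part as a group.
    """
    w = w.astype(float).copy()
    for g in groups:
        norm = np.linalg.norm(w[g])
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        w[g] = scale * w[g]
    return w
```

Applied inside a proximal-gradient loop, this step makes entire part-blocks of the weight vector vanish, so only a sparse set of body parts contributes to the final action model.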
Going Deeper into Action Recognition: A Survey
Understanding human actions in visual data is tied to advances in
complementary research areas including object recognition, human dynamics,
domain adaptation and semantic segmentation. Over the last decade, human action
analysis evolved from earlier schemes that are often limited to controlled
environments to nowadays advanced solutions that can learn from millions of
videos and apply to almost all daily activities. Given the broad range of
applications from video surveillance to human-computer interaction, scientific
milestones in action recognition are achieved more rapidly, and approaches that
were once state-of-the-art quickly become obsolete. This motivated us to
provide a comprehensive review of the notable steps taken towards recognizing
human actions. To this end, we start our discussion with the pioneering methods
that use handcrafted representations, and then, navigate into the realm of deep
learning based approaches. We aim to remain objective throughout this survey,
touching upon encouraging improvements as well as inevitable fallbacks, in the
hope of raising fresh questions and motivating new research directions for the
reader.
Real-time motion data annotation via action string
Despite the explosive growth of motion capture data, there is still a lack of efficient and reliable methods to automatically annotate all the motions in a database. Moreover, with the growing popularity of mocap devices in home entertainment systems, real-time human motion annotation or recognition is becoming increasingly imperative. This paper presents a new motion annotation method that achieves both of these targets at the same time. It uses a probabilistic pose feature based on the Gaussian Mixture Model to represent each pose. After training a clustered pose feature model, a motion clip can be represented as an action string. A dynamic programming-based string matching method is then introduced to compare the differences between action strings. Finally, to meet the real-time target, we construct a hierarchical action string structure to quickly label each given action string. The experimental results demonstrate the efficacy and efficiency of our method.
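The dynamic-programming string matching step can be illustrated with the classic edit-distance recurrence, where each symbol of an action string is a clustered pose label. This is a hedged sketch: the paper's actual cost function between pose clusters may differ from the unit costs used here.

```python
def edit_distance(a, b):
    """Edit distance between two action strings (sequences of pose labels).

    dp[i][j] holds the minimum cost of aligning the first i symbols of a with
    the first j symbols of b, using unit insertion, deletion and substitution
    costs. Smaller distance means more similar motions.
    """
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all remaining symbols of a
    for j in range(n + 1):
        dp[0][j] = j  # insert all remaining symbols of b
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # match / substitution
    return dp[m][n]
```

Annotation then amounts to comparing a query action string against labelled templates and taking the nearest one; the hierarchical structure described above prunes this comparison for real-time use.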