74 research outputs found
Human Motion Analysis for Efficient Action Recognition
Automatic understanding of human actions is at the core of several application domains, such as content-based indexing, human-computer interaction, surveillance, and sports video analysis. Recent advances in digital platforms and the exponential growth of video and image data have created an urgent need for intelligent frameworks that automatically analyze human motion and predict the corresponding actions from visual data and sensor signals. This thesis presents a collection of methods that target human action recognition using different action modalities. The first method uses the appearance modality and classifies human actions based on heterogeneous global and local features of scene and human-body appearance. The second method harnesses 2D and 3D articulated human poses and analyzes body motion using a discriminative combination of histograms of the body parts' velocities, locations, and correlations for action recognition. The third method presents an optimal scheme for combining the probabilistic predictions from the different action modalities by solving a constrained quadratic optimization problem. In addition to the action classification task, we present a study that compares the utility of different pose variants in motion analysis for human action recognition; in particular, we compare the recognition performance when 2D and 3D poses are used. Finally, we demonstrate the efficiency of our pose-based method for action recognition by spotting and segmenting motion gestures in real time from a continuous input video stream for the recognition of Italian sign-language gestures.
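The modality-fusion step lends itself to a compact illustration. Below is a minimal sketch, assuming the scheme learns convex weights over per-modality class-probability vectors by minimizing a quadratic fitting loss against one-hot training labels; the function names and the exact objective are illustrative assumptions, not the thesis' actual formulation.

```python
# Hypothetical sketch of fusing per-modality class probabilities with a
# constrained quadratic program: learn convex weights w (w >= 0, sum(w) = 1)
# that minimize the squared error between the fused probabilities and the
# one-hot training labels. The objective and names are assumptions, not
# the thesis' actual formulation.
import numpy as np
from scipy.optimize import minimize

def learn_fusion_weights(probs, labels):
    """probs: (n_modalities, n_samples, n_classes) predicted probabilities.
    labels: (n_samples,) integer class labels."""
    m, n, c = probs.shape
    onehot = np.eye(c)[labels]                      # (n, c) target matrix

    def objective(w):
        fused = np.tensordot(w, probs, axes=1)      # (n, c) weighted mixture
        return np.sum((fused - onehot) ** 2)        # quadratic fitting loss

    w0 = np.full(m, 1.0 / m)                        # start from uniform weights
    res = minimize(objective, w0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * m,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
    return res.x

def fuse(probs, w):
    """Combine per-modality probabilities with the learned weights."""
    return np.tensordot(w, probs, axes=1)
```

At test time, the same weighted mixture is applied to the per-modality probabilities of an unseen clip, and the argmax of the fused vector gives the predicted action.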
Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images
In hyperspectral remote sensing data mining, it is important to take into account both spectral and spatial information, such as spectral signatures, texture features, and morphological properties, to improve performance, e.g., image classification accuracy. From a feature-representation point of view, a natural approach to this situation is to concatenate the spectral and spatial features into a single, high-dimensional vector and then apply a dimension reduction technique directly to that concatenated vector before feeding it into the subsequent classifier. However, multiple features from various domains have different physical meanings and statistical properties, so such concatenation does not efficiently exploit the complementary properties of the different features, which would otherwise boost feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful, consensus low-dimensional representation of the original multiple features remains a challenging task. To address these issues, we propose a novel feature learning framework, the simultaneous spectral-spatial feature selection and extraction algorithm, for spectral-spatial feature representation and classification of hyperspectral images. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that our proposed method is both effective and efficient.
Online multi-modal robust non-negative dictionary learning for visual tracking
Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., template, for each modality from the available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of the data based on M-estimation. The resulting representation coefficients can be viewed as the common semantic representation of the particles across multiple modalities and can be used to track the target. OMRNDL incrementally learns the dictionary and the coefficients of each particle using multiplicative update rules that guarantee their respective non-negativity constraints. Experimental results on a popular, challenging video benchmark validate the effectiveness of OMRNDL for visual tracking both quantitatively and qualitatively.
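As a rough illustration of the non-negative coding step, the sketch below maintains one dictionary per modality and a single shared coefficient matrix for the particles, updated with the classic multiplicative rules that preserve non-negativity; the plain squared loss stands in for the paper's robust M-estimation loss, and all names are hypothetical.

```python
# Hypothetical, simplified sketch of multi-modal non-negative coding:
# one dictionary per modality (e.g., intensity and texture) and a shared
# coefficient matrix H for the particles, with multiplicative updates.
# Plain squared loss replaces the paper's M-estimation loss.
import numpy as np

def multimodal_nmf_step(Xs, Us, H, eps=1e-9):
    """Xs: list of (d_v, n) non-negative particle features per modality.
    Us: list of (d_v, r) non-negative dictionaries (templates).
    H:  (r, n) shared non-negative coefficients across modalities."""
    # Update the shared coefficients using evidence from all modalities.
    num = sum(U.T @ X for U, X in zip(Us, Xs))
    den = sum(U.T @ U @ H for U in Us) + eps
    H = H * num / den
    # Update each modality's dictionary against the shared coefficients.
    Us = [U * (X @ H.T) / (U @ H @ H.T + eps) for U, X in zip(Us, Xs)]
    return Us, H
```

Iterating this step reduces the summed reconstruction error over all modalities while keeping every factor non-negative; the columns of H then serve as the common representation of the particles, whose best match against the templates identifies the tracked target.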
- …