
    Discovery and recognition of motion primitives in human activities

    We present a novel framework for the automatic discovery and recognition of motion primitives in videos of human activities. Given the 3D pose of a human in a video, human motion primitives are discovered by optimizing the 'motion flux', a quantity which captures the motion variation of a group of skeletal joints. A normalization of the primitives is proposed in order to make them invariant with respect to a subject's anatomical variations and the data sampling rate. The discovered primitives are unknown and unlabeled, and are collected into classes in an unsupervised manner via a hierarchical non-parametric Bayes mixture model. Once classes are determined and labeled, they are further analyzed to establish models for recognizing the discovered primitives. Each primitive model is defined by a set of learned parameters. Given new video data and the estimated pose of the subject appearing in the video, the motion is segmented into primitives, which are recognized with a probability computed from the parameters of the learned models. Using our framework we build a publicly available dataset of human motion primitives, using sequences taken from well-known motion capture datasets. We expect that our framework, by providing an objective way of discovering and categorizing human motion, will be a useful tool in numerous research fields including video analysis, human-inspired motion generation, learning by demonstration, intuitive human-robot interaction, and human behavior analysis.
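
    The abstract does not spell out how the 'motion flux' is computed, so the sketch below is only a guess at the general idea: a per-frame quantity summing the velocity magnitudes of a chosen joint group, with primitives cut at its local minima. All names (motion_flux, segment_primitives), the joint indices, and the segmentation rule are illustrative assumptions, not the authors' formulation.

```python
# Hedged sketch of flux-style motion segmentation. The per-frame quantity
# below (summed joint-velocity magnitudes for a joint group) is only an
# illustrative stand-in for the paper's "motion flux".
import numpy as np

def motion_flux(poses, joint_group, dt=1.0):
    """poses: (T, J, 3) array of 3D joint positions over T frames."""
    group = poses[:, joint_group, :]                  # (T, |G|, 3)
    velocities = np.diff(group, axis=0) / dt          # (T-1, |G|, 3)
    return np.linalg.norm(velocities, axis=2).sum(axis=1)  # (T-1,)

def segment_primitives(flux, min_len=5):
    """Cut the sequence at local minima of the flux (rest-like instants)."""
    cuts = [0]
    for t in range(1, len(flux) - 1):
        if flux[t] < flux[t - 1] and flux[t] < flux[t + 1] and t - cuts[-1] >= min_len:
            cuts.append(t)
    cuts.append(len(flux))
    return list(zip(cuts[:-1], cuts[1:]))

# Toy example: random-walk poses for 15 joints; segment an assumed arm-joint group.
poses = np.cumsum(np.random.randn(200, 15, 3) * 0.01, axis=0)
flux = motion_flux(poses, joint_group=[4, 5, 6])
print(segment_primitives(flux)[:5])
```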

    Human Activity Recognition using Max-Min Skeleton-based Features and Key Poses

    Human activity recognition is still a very challenging research area, due to the inherently complex temporal and spatial patterns that characterize most human activities. This paper proposes a human activity recognition framework based on random forests, where each activity is classified using only a few training examples (i.e. no frame-by-frame activity classification). In a first approach, a simple mechanism that divides each action sequence into fixed-size windows is employed, from which max-min skeleton-based features are extracted. In the second approach, each window is delimited by a pair of automatically detected key poses, from which static and max-min dynamic features are extracted, based on the determined activity example. Both approaches are evaluated on the Cornell Activity Dataset [1], obtaining relevant overall average results, considering that these approaches are fast to train and require just a few training examples. These characteristics suggest that the proposed framework can be useful for real-time applications, where the activities are typically well distinguishable and little training time is available, or for integration into larger and more sophisticated systems for a first, quick impression/learning of certain activities.
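
    As a rough illustration of the first (fixed-window) approach, the hedged sketch below extracts per-coordinate max-min descriptors from skeleton windows and trains a scikit-learn random forest on one feature vector per action example. The window size, feature layout, and helper max_min_features are assumptions for illustration only, not the paper's exact setup.

```python
# Minimal sketch: per-window max-min skeleton descriptors + random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def max_min_features(sequence, window=60):
    """sequence: (T, D) flattened skeleton frames -> one feature vector."""
    feats = []
    for start in range(0, len(sequence) - window + 1, window):
        w = sequence[start:start + window]
        feats.append(np.concatenate([w.max(axis=0), w.min(axis=0)]))
    if not feats:  # sequence shorter than one window
        feats = [np.concatenate([sequence.max(axis=0), sequence.min(axis=0)])]
    return np.concatenate(feats)

# Toy data: 20 sequences of 60 frames, 15 joints * 3 coords = 45-D, 2 activities.
rng = np.random.default_rng(0)
X = np.stack([max_min_features(rng.normal(size=(60, 45))) for _ in range(20)])
y = rng.integers(0, 2, size=20)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict(X[:3]))
```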

    Articulated motion and deformable objects

    This guest editorial introduces the twenty-two papers accepted for this Special Issue on Articulated Motion and Deformable Objects (AMDO). They are grouped into four main categories within the field of AMDO: human motion analysis (action/gesture), human pose estimation, deformable shape segmentation, and face analysis. For each of the four topics, a survey of recent developments in the field is presented. The accepted papers are briefly introduced in the context of this survey. They contribute novel methods, algorithms with improved performance as measured on benchmarking datasets, as well as two new datasets for hand action detection and human posture analysis. The special issue should be of high relevance to readers interested in AMDO recognition and should promote future research directions in the field.

    RGB-D-based Action Recognition Datasets: A Survey

    Human action recognition from RGB-D (Red, Green, Blue and Depth) data has attracted increasing attention since the first work reported in 2010. Over this period, many benchmark datasets have been created to facilitate the development and evaluation of new algorithms. This raises the question of which dataset to select and how to use it to provide a fair and objective comparative evaluation against state-of-the-art methods. To address this issue, this paper provides a comprehensive review of the most commonly used action-recognition-related RGB-D video datasets, including 27 single-view datasets, 10 multi-view datasets, and 7 multi-person datasets. The detailed information and analysis of these datasets are a useful resource for guiding an insightful selection of datasets for future research. In addition, the issues with current algorithm evaluation vis-à-vis the limitations of the available datasets and evaluation protocols are also highlighted, resulting in a number of recommendations for the collection of new datasets and the use of evaluation protocols.

    A human activity recognition framework using max-min features and key poses with differential evolution random forests classifier

    This paper presents a novel framework for human daily activity recognition that is intended to rely on few training examples and to exhibit fast training times, making it suitable for real-time applications. The proposed framework starts with a feature extraction stage, where each activity is divided into variable-size actions based on key poses. Each action window is delimited by two consecutive and automatically identified key poses, from which static (i.e. geometrical) and max-min dynamic (i.e. temporal) features are extracted. These features are first used to train a random forest (RF) classifier, which was tested on the CAD-60 dataset, obtaining relevant overall average results. Then, in a second stage, an extension of the RF is proposed, where the differential evolution meta-heuristic algorithm is used as the node-splitting methodology. The main advantage of its inclusion is that the differential evolution random forest has no thresholds to tune, but rather a few adjustable parameters with well-defined behavior.
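
    A possible reading of the node-splitting idea is sketched below: at a tree node, SciPy's differential_evolution searches for a (feature, threshold) pair that minimizes the weighted Gini impurity, in place of an exhaustive threshold scan. The objective, the parameter encoding, and the helper names (gini, split_cost) are assumptions for illustration, not the authors' exact formulation.

```python
# Hedged sketch: differential evolution as the split-search at one tree node.
import numpy as np
from scipy.optimize import differential_evolution

def gini(labels):
    """Gini impurity of a label array."""
    if len(labels) == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def split_cost(params, X, y):
    """Weighted Gini impurity of the split encoded by (feature, threshold)."""
    feature, threshold = int(params[0]), params[1]
    left = y[X[:, feature] <= threshold]
    right = y[X[:, feature] > threshold]
    return (len(left) * gini(left) + len(right) * gini(right)) / len(y)

# Toy node data: 100 samples, 5 features, 2 classes driven by feature 2.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = (X[:, 2] > 0.3).astype(int)

bounds = [(0, X.shape[1] - 1e-9), (X.min(), X.max())]
result = differential_evolution(split_cost, bounds, args=(X, y), seed=1, tol=1e-6)
print("feature:", int(result.x[0]), "threshold:", round(result.x[1], 3))
```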

    A 3D Human Posture Approach for Activity Recognition Based on Depth Camera

    Human activity recognition plays an important role in the context of Ambient Assisted Living (AAL), providing useful tools to improve people's quality of life. This work presents an activity recognition algorithm based on the extraction of skeleton joints from a depth camera. The system describes an activity using a small set of basic postures extracted by means of the X-means clustering algorithm. A multi-class Support Vector Machine, trained with Sequential Minimal Optimization, is employed to perform the classification. The system is evaluated on two public datasets for activity recognition which have different skeleton models: the CAD-60 with 15 joints and the TST with 25 joints. The proposed approach achieves precision/recall performances of 99.8 % on CAD-60 and 97.2 %/91.7 % on TST. The results are promising for applied use in the context of AAL.
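
    A minimal sketch of the posture-based pipeline is given below, substituting a plain k-means for X-means (which additionally selects the number of clusters): individual skeleton frames are clustered into a few basic postures, each activity is summarized by its posture histogram, and a multi-class SVM classifies the histograms (scikit-learn's SVC is backed by an SMO-style libsvm solver). The number of postures, the feature choices, and the helper posture_histogram are illustrative assumptions.

```python
# Hedged sketch: cluster frames into basic postures, classify posture histograms.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def posture_histogram(sequence, kmeans):
    """sequence: (T, D) skeleton frames -> normalized posture histogram."""
    labels = kmeans.predict(sequence)
    hist = np.bincount(labels, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()

# Toy data: 30 sequences of 15-joint skeletons (45-D frames), 3 activities.
rng = np.random.default_rng(2)
sequences = [rng.normal(loc=i % 3, size=(80, 45)) for i in range(30)]
activities = np.array([i % 3 for i in range(30)])

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
kmeans.fit(np.vstack(sequences))                       # learn basic postures
X = np.stack([posture_histogram(s, kmeans) for s in sequences])

clf = SVC(kernel="rbf", gamma="scale").fit(X, activities)
print(clf.predict(X[:6]))
```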