An Evaluation Framework and Database for MoCap-Based Gait Recognition Methods
As a contribution to reproducible research, this paper presents a framework and a database to improve the development, evaluation, and comparison of methods for gait recognition from motion capture (MoCap) data. The evaluation framework provides implementation details and source code for state-of-the-art human-interpretable geometric features, as well as for our own approaches, in which gait features are learned by a modification of Fisher's Linear Discriminant Analysis with the Maximum Margin Criterion, and by a combination of Principal Component Analysis and Linear Discriminant Analysis. It includes a description and source code of a mechanism for evaluating four class-separability coefficients of the feature space and four rank-based classifier performance metrics. The framework also contains a tool for learning a custom classifier and for classifying a custom query against a custom gallery. We provide an experimental database, along with source code for its extraction from the general CMU MoCap database.
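As an illustration of the kind of class-separability coefficient such a framework evaluates, the sketch below computes a simple between-to-within scatter ratio over labeled feature vectors. This is a hypothetical simplification for intuition only; the framework itself defines four such coefficients and ships its own source code.

```python
import numpy as np

def separability(features, labels):
    """Ratio of between-class to within-class scatter (trace form).

    A crude stand-in for a class-separability coefficient: higher values
    mean identities form tighter, better-separated clusters in feature space.
    """
    X = np.asarray(features, dtype=float)
    y = np.asarray(labels)
    mu = X.mean(axis=0)
    s_b = 0.0  # between-class scatter (class means vs. global mean)
    s_w = 0.0  # within-class scatter (samples vs. their class mean)
    for c in np.unique(y):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        s_b += len(Xc) * np.sum((mu_c - mu) ** 2)
        s_w += np.sum((Xc - mu_c) ** 2)
    return s_b / s_w

# Two well-separated "identities" yield a high coefficient.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 0.1, size=(50, 4))
b = a + 5.0
print(separability(np.vstack([a, b]), [0] * 50 + [1] * 50))
```

A metric like this makes feature-space quality comparable across feature-extraction methods without committing to any particular classifier.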
Gait Recognition from Motion Capture Data
Gait recognition from motion capture data, as a pattern classification discipline, can be improved by the use of machine learning. This paper contributes to the state of the art with a statistical approach for extracting robust gait features directly from raw data by a modification of Linear Discriminant Analysis with the Maximum Margin Criterion. Experiments on the CMU MoCap database show that the suggested method outperforms thirteen relevant methods based on geometric features and a method that learns the features by a combination of Principal Component Analysis and Linear Discriminant Analysis. The methods are evaluated in terms of the distribution of biometric templates in their respective feature spaces, expressed in a number of class-separability coefficients and classification metrics. Results also indicate high portability of the learned features; that is, we can learn which aspects of walk people generally differ in and extract those as general gait features. Recognizing people without needing group-specific features is convenient, as particular people might not always provide annotated learning data. As a contribution to reproducible research, our evaluation framework and database have been made publicly available. This research makes motion capture technology directly applicable to human recognition.

Comment: Preprint. Full paper accepted at the ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), special issue on Representation, Analysis and Recognition of 3D Humans. 18 pages. arXiv admin note: substantial text overlap with arXiv:1701.00995, arXiv:1609.04392, arXiv:1609.0693
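The Maximum Margin Criterion variant of LDA described above can be sketched as an eigendecomposition of the difference between the between-class and within-class scatter matrices; the top eigenvectors form the learned projection. This is a minimal textbook illustration, not the authors' implementation.

```python
import numpy as np

def mmc_projection(X, y, dim):
    """Learn an MMC projection: top eigenvectors of S_b - S_w.

    Unlike classical LDA, MMC avoids inverting S_w, so it works even
    when the within-class scatter matrix is singular (few samples).
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    mu = X.mean(axis=0)
    d = X.shape[1]
    s_b = np.zeros((d, d))  # between-class scatter
    s_w = np.zeros((d, d))  # within-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        diff = (mu_c - mu)[:, None]
        s_b += len(Xc) * (diff @ diff.T)
        centered = Xc - mu_c
        s_w += centered.T @ centered
    vals, vecs = np.linalg.eigh(s_b - s_w)
    order = np.argsort(vals)[::-1]          # largest eigenvalues first
    return vecs[:, order[:dim]]             # d x dim projection matrix
```

Raw joint-coordinate templates are then mapped into the low-dimensional space with `X @ W`, where identities are expected to separate better than in the input space.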
You Are How You Walk: Uncooperative MoCap Gait Identification for Video Surveillance with Incomplete and Noisy Data
This work offers a design of a video surveillance system based on a soft biometric: gait identification from MoCap data. The main focus is on two substantial issues of the video surveillance scenario: (1) the walkers do not cooperate in providing learning data to establish their identities, and (2) the data are often noisy or incomplete. We show that only a few examples of human gait cycles are required to learn a projection of raw MoCap data onto a low-dimensional subspace where the identities are well separable. Latent features learned by the Maximum Margin Criterion (MMC) method discriminate better than any collection of geometric features. The MMC method is also highly robust to noisy data and works properly even with only a fraction of joints tracked. The overall workflow of the design is directly applicable to day-to-day operation based on available MoCap technology and algorithms for gait analysis. In the concept we introduce, a walker's identity is represented by a cluster of gait data collected during their encounters within the surveillance system: they are how they walk.
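The identification step this concept implies, matching a possibly incomplete query template against per-identity clusters, can be sketched as nearest-centroid assignment where untracked joints are marked as NaN and simply excluded from the distance. All names here are hypothetical; the paper's actual pipeline operates on MMC-projected features.

```python
import numpy as np

def identify(query, galleries):
    """Assign a (possibly incomplete) gait template to the nearest
    identity cluster, ignoring untracked dimensions (NaN entries)."""
    q = np.asarray(query, dtype=float)
    mask = ~np.isnan(q)  # compare only the tracked dimensions
    best_id, best_dist = None, np.inf
    for identity, samples in galleries.items():
        centroid = np.asarray(samples, dtype=float).mean(axis=0)
        dist = np.linalg.norm((q - centroid)[mask])
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return best_id

# Each identity is a cluster of templates gathered at past encounters.
galleries = {
    "walker_a": [[0.0, 0.0, 0.0], [0.1, -0.1, 0.0]],
    "walker_b": [[5.0, 5.0, 5.0], [4.9, 5.1, 5.0]],
}
print(identify([4.8, np.nan, 5.2], galleries))  # middle joint untracked
```

Because the comparison masks out missing entries, the query can still be matched when only a fraction of joints is tracked, mirroring the robustness claim above.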
Gait recognition and understanding based on hierarchical temporal memory using 3D gait semantic folding
Gait recognition and understanding systems have shown a wide-ranging application prospect. However, their reliance on unstructured data from images and video limits their performance; e.g., they are easily influenced by multiple views, occlusion, clothes, and object-carrying conditions. This paper addresses these problems using realistic 3-dimensional (3D) human structural data and a sequential pattern learning framework with a top-down attention modulating mechanism based on Hierarchical Temporal Memory (HTM). First, an accurate 2-dimensional (2D) to 3D human body pose and shape semantic parameter estimation method is proposed, which exploits the advantages of an instance-level body parsing model and a virtual dressing method. Second, by using gait semantic folding, the estimated body parameters are encoded as a sparse 2D matrix to construct the structural gait semantic image. In order to achieve time-based gait recognition, an HTM network is constructed to obtain the sequence-level gait sparse distribution representations (SL-GSDRs). A top-down attention mechanism is introduced to deal with various conditions, including multiple views, by refining the SL-GSDRs according to prior knowledge. The proposed gait learning model not only aids gait recognition tasks in overcoming the difficulties of real application scenarios but also provides structured gait semantic images for visual cognition. Experimental analyses on the CMU MoBo, CASIA B, TUM-IITKGP, and KY4D datasets show a significant performance gain in terms of accuracy and robustness.
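The "gait semantic folding" step, encoding estimated body parameters into a sparse 2D binary matrix, might be sketched roughly as follows. This is a speculative simplification assuming parameters pre-scaled to [0, 1] and one active bit per parameter; the paper's actual encoding is more elaborate.

```python
import numpy as np

def fold(params, rows=32, cols=32):
    """Encode a vector of body/pose parameters (scaled to [0, 1]) as a
    sparse binary matrix: each parameter activates one bit, with the
    row fixed by the parameter's index and the column by its value."""
    grid = np.zeros((rows, cols), dtype=np.uint8)
    for i, p in enumerate(params):
        r = i % rows                       # one row slot per parameter
        c = min(int(p * cols), cols - 1)   # column bin encodes the value
        grid[r, c] = 1
    return grid

# Three parameters -> three active bits in a 32x32 grid (sparse by design).
image = fold([0.1, 0.5, 0.9])
```

Sparse binary encodings of this kind are what HTM-style sequence memories consume: similar parameter vectors activate overlapping bits, so similarity is preserved in the encoding.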
Walker-Independent Features for Gait Recognition from Motion Capture Data
MoCap-based human identification, as a pattern recognition discipline, can be optimized using a machine learning approach. Yet in some applications, such as video surveillance, new identities can appear on the fly, and labeled data for all encountered people may not always be available. This work introduces the concept of learning walker-independent gait features directly from raw joint coordinates by a modification of Fisher's Linear Discriminant Analysis with the Maximum Margin Criterion. Our new approach shows not only that these features can discriminate people other than those they were learned on, but also that the number of learning identities can be much smaller than the number of walkers encountered in real operation.
Gait Data Augmentation using Physics-Based Biomechanical Simulation
This paper focuses on addressing the problem of data scarcity for gait analysis. Standard augmentation methods may produce gait sequences that are not consistent with the biomechanical constraints of human walking. To address this issue, we propose a novel framework for gait data augmentation that uses OpenSim, a physics-based simulator, to synthesize biomechanically plausible walking sequences. The proposed approach is validated by augmenting the WBDS and CASIA-B datasets and then training gait-based classifiers for 3D gender gait classification and 2D gait person identification, respectively. Experimental results indicate that our augmentation approach can improve the performance of model-based gait classifiers and deliver state-of-the-art results for gait-based person identification, with an accuracy of up to 96.11% on the CASIA-B dataset.

Comment: 30 pages including references, 5 Figures, submitted to ESW
Selecting the motion ground truth for loose-fitting wearables: benchmarking optical MoCap methods
To help smart wearable researchers choose the optimal ground truth methods
for motion capturing (MoCap) for all types of loose garments, we present a
benchmark, DrapeMoCapBench (DMCB), specifically designed to evaluate the
performance of optical marker-based and marker-less MoCap. High-cost
marker-based MoCap systems are well-known as precise golden standards. However,
a less well-known caveat is that they require skin-tight fitting markers on
bony areas to ensure the specified precision, making them questionable for
loose garments. On the other hand, marker-less MoCap methods powered by
computer vision models have matured over the years, which have meager costs as
smartphone cameras would suffice. To this end, DMCB uses large real-world
recorded MoCap datasets to perform parallel 3D physics simulations with a wide
range of diversities: six levels of drape from skin-tight to extremely draped
garments, three levels of motions and six body type - gender combinations to
benchmark state-of-the-art optical marker-based and marker-less MoCap methods
to identify the best-performing method in different scenarios. In assessing the
performance of marker-based and low-cost marker-less MoCap for casual loose
garments both approaches exhibit significant performance loss (>10cm), but for
everyday activities involving basic and fast motions, marker-less MoCap
slightly outperforms marker-based MoCap, making it a favorable and
cost-effective choice for wearable studies
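A benchmark like this ultimately reduces each MoCap method to a positional error number; a common yardstick for that is the mean per-joint position error (MPJPE), sketched below. This is an illustrative assumption, as the benchmark's exact metric may differ.

```python
import numpy as np

def mpjpe(pred, truth):
    """Mean per-joint position error, in the same units as the input
    (e.g., cm): the average Euclidean distance between predicted and
    ground-truth joint positions over all frames and joints."""
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return np.linalg.norm(pred - truth, axis=-1).mean()

# A method whose joints are off by ~12 cm would fail the <10 cm bar
# implied above for casual loose garments.
truth = np.zeros((100, 17, 3))           # frames x joints x xyz, in cm
noisy = truth + 12.0 / np.sqrt(3.0)      # uniform ~12 cm per-joint offset
print(mpjpe(noisy, truth))
```

Comparing the MPJPE of marker-based and marker-less pipelines against the same simulated ground truth is exactly the kind of head-to-head evaluation the benchmark performs across drape levels and motion speeds.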