FLAG3D: A 3D Fitness Activity Dataset with Language Instruction
With its continuously growing popularity around the world, fitness activity
analysis has become an emerging research topic in computer vision. While a
variety of new tasks and algorithms have been proposed recently, there is a
growing hunger for data resources offering high-quality data, fine-grained
labels, and diverse environments. In this paper, we present FLAG3D, a
large-scale 3D fitness activity dataset with language instruction containing
180K sequences of 60 categories. FLAG3D features the following three aspects:
1) accurate and dense 3D human poses captured by an advanced MoCap system to
handle complex activities and large movements, 2) detailed and professional
language instructions describing how to perform each specific activity, 3)
versatile video resources from a high-tech MoCap system, rendering software,
and cost-effective smartphones in natural environments. Extensive experiments
and in-depth analysis show that FLAG3D offers great research value for
various challenges, such as cross-domain human action recognition, dynamic
human mesh recovery, and language-guided human action generation. Our dataset
and source code will be publicly available at
https://andytang15.github.io/FLAG3D
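As a rough illustration of the kind of data such a dataset provides, the sketch below builds a toy pose sequence of shape (frames, joints, 3) paired with an action label and a language instruction. The joint count, field names, and helper function are purely hypothetical and not taken from the actual FLAG3D release.

```python
import numpy as np

NUM_JOINTS = 24  # assumed joint count, for illustration only


def make_sample(num_frames, label, instruction, rng=None):
    """Build a toy MoCap-style sample: a (frames, joints, 3) pose
    sequence plus an action label and a language instruction."""
    rng = rng or np.random.default_rng(0)
    pose = rng.standard_normal((num_frames, NUM_JOINTS, 3))
    return {"pose": pose, "label": label, "instruction": instruction}


sample = make_sample(120, label=17, instruction="Keep your back straight ...")
print(sample["pose"].shape)  # (120, 24, 3)
```

A real loader would read poses from the released files rather than sampling noise, but the (frames, joints, 3) layout is the common convention for skeleton-based action recognition.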
Learning Scene Flow With Skeleton Guidance For 3D Action Recognition
Among the existing modalities for 3D action recognition, 3D flow has been
poorly examined, although it conveys rich motion cues for human actions.
Presumably, its susceptibility to noise renders it intractable, challenging
the learning process within deep models. This work demonstrates the
use of 3D flow sequences in a deep spatiotemporal model and further proposes
an incremental two-level spatial attention mechanism, guided by the skeleton
domain, to emphasise motion features close to the body joint areas according
to their informativeness. Towards this end, an extended deep skeleton model is
also introduced to learn the most discriminative action motion dynamics and to
estimate an informativeness score for each joint. Subsequently, a late fusion
scheme is adopted between the two models to learn the high-level
cross-modal correlations. Experimental results on the currently largest and
most challenging dataset, NTU RGB+D, demonstrate the effectiveness of the
proposed approach, achieving state-of-the-art results.
Comment: 18 pages, 3 figures, 3 tables, conference
Graph Distillation for Action Detection with Privileged Modalities
We propose a technique that tackles action detection in multimodal videos
under a realistic and challenging condition in which only limited training data
and partially observed modalities are available. Common methods in transfer
learning do not take advantage of the extra modalities potentially available in
the source domain. On the other hand, previous work on multimodal learning only
focuses on a single domain or task and does not handle the modality discrepancy
between training and testing. In this work, we propose a method termed graph
distillation that incorporates rich privileged information from a large-scale
multimodal dataset in the source domain, and improves the learning in the
target domain where training data and modalities are scarce. We evaluate our
approach on action classification and detection tasks in multimodal videos, and
show that our model outperforms the state-of-the-art by a large margin on the
NTU RGB+D and PKU-MMD benchmarks. The code is released at
http://alan.vision/eccv18_graph/.
Comment: ECCV 2018
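A minimal sketch of the distillation idea, assuming fixed edge weights (the actual method learns the graph): each privileged-modality teacher contributes an edge-weighted KL term that pulls the student's class distribution toward its own.

```python
import numpy as np


def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()


def kl(p, q, eps=1e-9):
    """KL divergence KL(p || q) between two discrete distributions."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))


def graph_distillation_loss(student_logits, teacher_logits_by_modality, edge_weights):
    """Sum of edge-weighted KL terms from each privileged-modality
    teacher to the student; weights here are fixed for illustration,
    whereas the paper learns them as part of the graph."""
    q = softmax(student_logits)
    loss = 0.0
    for modality, t_logits in teacher_logits_by_modality.items():
        loss += edge_weights[modality] * kl(softmax(t_logits), q)
    return loss


student = np.array([1.0, 0.0, -1.0])
teachers = {"rgb": np.array([1.2, 0.1, -0.9]),
            "depth": np.array([0.8, -0.2, -1.1])}
weights = {"rgb": 0.7, "depth": 0.3}
print(round(graph_distillation_loss(student, teachers, weights), 4))
```

The modality names and edge weights are hypothetical; the point is only that supervision from modalities absent at test time enters as extra distillation terms during training.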
DDLSTM: Dual-Domain LSTM for Cross-Dataset Action Recognition
Domain alignment in convolutional networks aims to learn the degree of
layer-specific feature alignment beneficial to the joint learning of source and
target datasets. While increasingly popular in convolutional networks, there
have been no previous attempts to achieve domain alignment in recurrent
networks. Similar to spatial features, both source and target domains are
likely to exhibit temporal dependencies that can be jointly learnt and aligned.
In this paper we introduce Dual-Domain LSTM (DDLSTM), an architecture that is
able to learn temporal dependencies from two domains concurrently. It performs
cross-contaminated batch normalisation on both input-to-hidden and
hidden-to-hidden weights, and learns the parameters for cross-contamination,
for both single-layer and multi-layer LSTM architectures. We evaluate DDLSTM on
frame-level action recognition using three datasets, taking a pair at a time,
and report an average increase in accuracy of 3.5%. The proposed DDLSTM
architecture outperforms standard, fine-tuned, and batch-normalised LSTMs.
Comment: To appear in CVPR 2019
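The cross-contaminated normalisation can be sketched as blending batch statistics across the two domains before normalising each; here `lam` is a fixed scalar for illustration, whereas DDLSTM learns the cross-contamination parameters per layer.

```python
import numpy as np


def cross_contaminated_bn(x_src, x_tgt, lam=0.3, eps=1e-5):
    """Normalise each domain's batch with mean/variance blended
    across domains. lam = 0 reduces to plain per-domain batch norm;
    this is an illustrative sketch, not the paper's implementation."""
    def stats(x):
        return x.mean(axis=0), x.var(axis=0)

    mu_s, var_s = stats(x_src)
    mu_t, var_t = stats(x_tgt)
    mu_src = (1 - lam) * mu_s + lam * mu_t
    var_src = (1 - lam) * var_s + lam * var_t
    mu_tgt = (1 - lam) * mu_t + lam * mu_s
    var_tgt = (1 - lam) * var_t + lam * var_s
    return ((x_src - mu_src) / np.sqrt(var_src + eps),
            (x_tgt - mu_tgt) / np.sqrt(var_tgt + eps))


rng = np.random.default_rng(1)
x_src = rng.standard_normal((64, 8)) + 2.0  # toy source-domain batch
x_tgt = rng.standard_normal((64, 8)) - 1.0  # toy target-domain batch
ns, nt = cross_contaminated_bn(x_src, x_tgt, lam=0.0)
print(float(np.abs(ns.mean(axis=0)).max()) < 1e-6)  # True: lam=0 is plain BN
```

In the recurrent setting this blending would be applied to the pre-activations of both the input-to-hidden and hidden-to-hidden transforms at each time step.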