8 research outputs found
Action tube extraction based 3D-CNN for RGB-D action recognition
In this paper we propose a novel action tube extractor for RGB-D action recognition in trimmed videos. The extractor takes a video as input and outputs an action tube. The method consists of two parts: spatial tube extraction and temporal sampling. The first part is built upon MobileNet-SSD and defines the spatial region where the action takes place. The second part is based on the structural similarity index (SSIM) and removes frames without obvious motion from the primary action tube. The final extracted action tube has two benefits: 1) a higher ratio of ROI (the subjects of the action) to background; 2) most frames contain obvious motion change. We use a two-stream (RGB and Depth) I3D architecture as our 3D-CNN model. Our approach outperforms state-of-the-art methods on the OA and NTU RGB-D datasets. © 2018 IEEE.
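The SSIM-based temporal sampling described above is concrete enough to sketch. The snippet below is a hypothetical illustration, not the authors' code: frames whose structural similarity to the last kept frame exceeds a threshold are treated as motionless and dropped. The 0.95 cutoff and the scikit-image/OpenCV calls are assumptions for illustration.

```python
# Hypothetical sketch of SSIM-based temporal sampling (not the authors' code).
# Frames nearly identical to the previously kept frame show no obvious motion
# and are dropped from the primary action tube.
import cv2
from skimage.metrics import structural_similarity


def sample_motion_frames(frames, ssim_threshold=0.95):
    """Keep frames that differ noticeably from the last kept frame.

    frames: list of BGR frames (e.g., cropped spatial-tube regions).
    ssim_threshold: assumed cutoff; higher values keep fewer frames.
    """
    kept = [frames[0]]
    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # High SSIM means the frame is nearly identical to the previous one.
        if structural_similarity(prev_gray, gray) < ssim_threshold:
            kept.append(frame)
            prev_gray = gray
    return kept
```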
RGBD Datasets: Past, Present and Future
Since the launch of the Microsoft Kinect, scores of RGBD datasets have been
released. These have propelled advances in areas from reconstruction to gesture
recognition. In this paper we explore the field, reviewing datasets across
eight categories: semantics, object pose estimation, camera tracking, scene
reconstruction, object tracking, human actions, faces and identification. By
extracting relevant information in each category we help researchers to find
appropriate data for their needs, and we consider which datasets have succeeded
in driving computer vision forward and why.
Finally, we examine the future of RGBD datasets. We identify key areas which
are currently underexplored, and suggest that future directions may include
synthetic data and dense reconstructions of static and dynamic scenes.
Two-Stream RNN/CNN for Action Recognition in 3D Videos
The recognition of actions from video sequences has many applications in
health monitoring, assisted living, surveillance, and smart homes. Despite
advances in sensing, in particular related to 3D video, the methodologies to
process the data are still subject to research. We demonstrate superior results
by a system which combines recurrent neural networks with convolutional neural
networks in a voting approach. The gated-recurrent-unit-based neural networks
are particularly well-suited to distinguish actions based on long-term
information from optical tracking data; the 3D-CNNs focus more on detailed,
recent information from video data. The resulting features are merged in an SVM
which then classifies the movement. In this architecture, our method improves
recognition rates of state-of-the-art methods by 14% on standard datasets.
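The fusion stage admits a minimal sketch. Assuming per-video feature vectors have already been extracted by the GRU stream and the 3D-CNN stream (the extractors are stubbed out here), the features are concatenated and classified by an SVM. The linear kernel and the scikit-learn API are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of the late-fusion step (not the authors' code):
# per-video features from a GRU stream (tracking data) and a 3D-CNN stream
# (video) are concatenated and classified by an SVM.
import numpy as np
from sklearn.svm import SVC


def fuse_and_train(gru_feats, cnn_feats, labels):
    """gru_feats: (N, d1) array from the recurrent stream.
    cnn_feats: (N, d2) array from the 3D-CNN stream.
    labels: (N,) array of action labels.
    """
    fused = np.concatenate([gru_feats, cnn_feats], axis=1)  # (N, d1 + d2)
    clf = SVC(kernel="linear")  # kernel choice is an assumption
    clf.fit(fused, labels)
    return clf
```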
RGB-D-based Action Recognition Datasets: A Survey
Human action recognition from RGB-D (Red, Green, Blue and Depth) data has
attracted increasing attention since the first work reported in 2010. Over this
period, many benchmark datasets have been created to facilitate the development
and evaluation of new algorithms. This raises the question of which dataset to
select and how to use it in providing a fair and objective comparative
evaluation against state-of-the-art methods. To address this issue, this paper
provides a comprehensive review of the most commonly used action recognition
related RGB-D video datasets, including 27 single-view datasets, 10 multi-view
datasets, and 7 multi-person datasets. The detailed information and analysis of
these datasets are a useful resource for guiding an insightful selection of
datasets for future research. In addition, the issues with current algorithm
evaluation vis-à-vis the limitations of the available datasets and evaluation
protocols are also highlighted, resulting in a number of recommendations for
the collection of new datasets and the use of evaluation protocols.
Action recognition in videos
In this project, our work is divided into two parts: RGB-D-based action recognition in trimmed videos and temporal action detection in untrimmed videos.
Two-stream Multi-level Dynamic Point Transformer for Two-person Interaction Recognition
As a fundamental aspect of human life, two-person interactions contain
meaningful information about people's activities, relationships, and social
settings. Human action recognition serves as the foundation for many smart
applications, with a strong focus on personal privacy. However, recognizing
two-person interactions poses more challenges due to increased body occlusion
and overlap compared to single-person actions. In this paper, we propose a
point cloud-based network named Two-stream Multi-level Dynamic Point
Transformer for two-person interaction recognition. Our model addresses the
challenge of recognizing two-person interactions by incorporating local-region
spatial information, appearance information, and motion information. To achieve
this, we introduce a frame selection method named Interval Frame Sampling
(IFS), which efficiently samples frames from videos, capturing more
discriminative information in a relatively short processing time. Subsequently,
a frame features learning module and a two-stream multi-level feature
aggregation module extract global and partial features from the sampled frames,
effectively representing the local-region spatial information, appearance
information, and motion information related to the interactions. Finally, we
apply a transformer to perform self-attention on the learned features for the
final classification. Extensive experiments are conducted on two large-scale
datasets, the interaction subsets of NTU RGB+D 60 and NTU RGB+D 120. The
results show that our network outperforms state-of-the-art approaches across
all standard evaluation settings.
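The abstract does not spell out the IFS sampling rule, but interval-based frame sampling conventionally splits a sequence into equal segments and draws one frame from each. The sketch below illustrates that reading; it is an assumption, not the paper's implementation.

```python
# Hypothetical reading of interval-based frame sampling (the paper's IFS
# details are not given in the abstract): split the sequence into equal
# intervals and draw one random frame per interval.
import random


def interval_frame_sampling(num_frames, num_samples, seed=None):
    """Return sorted frame indices, one drawn from each equal-length interval."""
    rng = random.Random(seed)
    bounds = [round(i * num_frames / num_samples) for i in range(num_samples + 1)]
    return [rng.randrange(lo, max(lo + 1, hi)) for lo, hi in zip(bounds, bounds[1:])]


# Example: pick 8 frames from a 120-frame clip.
print(interval_frame_sampling(120, 8, seed=0))
```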
Analyzing Human-Human Interactions: A Survey
Many videos depict people, and it is their interactions that inform us of
their activities, their relation to one another, and the cultural and social setting.
With advances in human action recognition, researchers have begun to address
the automated recognition of these human-human interactions from video. The
main challenges stem from dealing with the considerable variation in recording
setting, the appearance of the people depicted and the coordinated performance
of their interaction. This survey provides a summary of these challenges and
datasets to address these, followed by an in-depth discussion of relevant
vision-based recognition and detection methods. We focus on recent, promising
work based on deep learning and convolutional neural networks (CNNs). Finally,
we outline directions to overcome the limitations of the current
state-of-the-art to analyze and, eventually, understand social human actions.