Recurrent Attention Models for Depth-Based Person Identification
We present an attention-based model that reasons on human body shape and
motion dynamics to identify individuals in the absence of RGB information,
hence in the dark. Our approach leverages unique 4D spatio-temporal signatures
to address the identification problem across days. Formulated as a
reinforcement learning task, our model is based on a combination of
convolutional and recurrent neural networks with the goal of identifying small,
discriminative regions indicative of human identity. We demonstrate that our
model produces state-of-the-art results on several published datasets given
only depth images. We further study the robustness of our model towards
viewpoint, appearance, and volumetric changes. Finally, we share insights
gleaned from interpretable 2D, 3D, and 4D visualizations of our model's
spatio-temporal attention.
Comment: Computer Vision and Pattern Recognition (CVPR) 201
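The core idea of attending to small, discriminative body regions can be illustrated with a minimal soft-attention pooling sketch. This is not the paper's reinforcement-learning model; the function names and the toy per-region features are hypothetical, and the sketch only shows how softmax attention weights combine region features into a single identity descriptor.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of region scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(region_feats, scores):
    """Soft attention: weight per-region feature vectors by softmax
    scores and sum them into one pooled descriptor (hypothetical sketch)."""
    weights = softmax(scores)
    dim = len(region_feats[0])
    return [sum(weights[i] * region_feats[i][d] for i in range(len(weights)))
            for d in range(dim)]

# Two toy regions with 2-D features; equal scores give a plain average.
pooled = attend([[1.0, 2.0], [3.0, 4.0]], [0.0, 0.0])  # [2.0, 3.0]
```

In the actual model the scores would come from a recurrent network observing the depth sequence, so the attended regions change over time.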
Multi-Modal Human Authentication Using Silhouettes, Gait and RGB
Whole-body-based human authentication is a promising approach for remote
biometrics scenarios. Current literature focuses on either body recognition
based on RGB images or gait recognition based on body shapes and walking
patterns; both have their advantages and drawbacks. In this work, we propose
Dual-Modal Ensemble (DME), which combines both RGB and silhouette data to
achieve more robust performances for indoor and outdoor whole-body based
recognition. Within DME, we propose GaitPattern, which is inspired by the
double helical gait pattern used in traditional gait analysis. The GaitPattern
contributes to robust identification performance over a large range of viewing
angles. Extensive experimental results on the CASIA-B dataset demonstrate that
the proposed method outperforms state-of-the-art recognition systems. We also
provide experimental results using the newly collected BRIAR dataset.
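A dual-modal ensemble like DME can be sketched as late fusion of per-identity match scores from the RGB and silhouette branches. This is a minimal stand-in, not the paper's actual architecture: the function names, the equal default weighting, and the toy score dictionaries are assumptions.

```python
def fuse_scores(rgb_scores, sil_scores, alpha=0.5):
    """Weighted average of per-identity match scores from the two
    modalities (hypothetical fusion rule; alpha balances RGB vs. silhouette)."""
    return {pid: alpha * rgb_scores[pid] + (1 - alpha) * sil_scores[pid]
            for pid in rgb_scores}

def identify(rgb_scores, sil_scores, alpha=0.5):
    # Predict the gallery identity with the highest fused score.
    fused = fuse_scores(rgb_scores, sil_scores, alpha)
    return max(fused, key=fused.get)

# Toy example: the silhouette branch strongly favors "b".
pred = identify({"a": 0.9, "b": 0.2}, {"a": 0.1, "b": 0.95})
```

In practice the weighting could be learned or made condition-dependent, e.g. trusting silhouettes more outdoors where RGB appearance varies.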
Gait Data Augmentation using Physics-Based Biomechanical Simulation
This paper focuses on addressing the problem of data scarcity for gait
analysis. Standard augmentation methods may produce gait sequences that are not
consistent with the biomechanical constraints of human walking. To address this
issue, we propose a novel framework for gait data augmentation by using
OpenSim, a physics-based simulator, to synthesize biomechanically plausible
walking sequences. The proposed approach is validated by augmenting the WBDS
and CASIA-B datasets and then training gait-based classifiers for 3D gender
gait classification and 2D gait person identification respectively.
Experimental results indicate that our augmentation approach can improve the
performance of model-based gait classifiers and deliver state-of-the-art
results for gait-based person identification with an accuracy of up to 96.11%
on the CASIA-B dataset.
Comment: 30 pages including references, 5 figures, submitted to ESW
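The notion of biomechanically constrained augmentation can be sketched without a full physics engine: jitter joint angles, then clamp them to anatomical ranges so the augmented pose remains plausible. This is only an illustration of the constraint idea, not the paper's OpenSim pipeline; the joint names, limit values, and function are hypothetical.

```python
import random

# Illustrative joint-angle limits in degrees (hypothetical values,
# not taken from OpenSim or the paper).
JOINT_LIMITS = {"hip_flexion": (-20.0, 120.0), "knee_flexion": (0.0, 140.0)}

def augment_pose(angles, sigma=5.0, rng=None):
    """Add Gaussian jitter to each joint angle, then clamp to the
    anatomical range so the augmented pose stays plausible."""
    rng = rng or random.Random(0)
    out = {}
    for joint, angle in angles.items():
        lo, hi = JOINT_LIMITS[joint]
        out[joint] = min(hi, max(lo, angle + rng.gauss(0.0, sigma)))
    return out

# A pose near the hip limit never leaves the valid range after jitter.
aug = augment_pose({"hip_flexion": 119.0, "knee_flexion": 1.0})
```

A forward-dynamics simulator goes much further, enforcing joint coupling and ground-contact dynamics rather than independent per-joint limits.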
Condition-Adaptive Graph Convolution Learning for Skeleton-Based Gait Recognition
Graph convolutional networks have been widely applied in skeleton-based gait
recognition. A key challenge in this task is to distinguish the individual
walking styles of different subjects across various views. Existing
state-of-the-art methods employ uniform convolutions to extract features from
diverse sequences and ignore the effects of viewpoint changes. To overcome
these limitations, we propose a condition-adaptive graph (CAG) convolution
network that can dynamically adapt to the specific attributes of each skeleton
sequence and the corresponding view angle. In contrast to using fixed weights
for all joints and sequences, we introduce a joint-specific filter learning
(JSFL) module in the CAG method, which produces sequence-adaptive filters at
the joint level. The adaptive filters capture fine-grained patterns that are
unique to each joint, enabling the extraction of diverse spatial-temporal
information about body parts. Additionally, we design a view-adaptive topology
learning (VATL) module that generates adaptive graph topologies. These graph
topologies are used to correlate the joints adaptively according to the
specific view conditions. Thus, CAG can simultaneously adjust to various
walking styles and viewpoints. Experiments on the two most widely used datasets
(i.e., CASIA-B and OU-MVLP) show that CAG surpasses all previous skeleton-based
methods. Moreover, the recognition performance can be enhanced by simply
combining CAG with appearance-based methods, demonstrating the ability of CAG
to provide useful complementary information. The source code will be available
at https://github.com/OliverHxh/CAG.
Comment: Accepted by the TIP journal
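A single step of graph convolution with a per-joint output gain gives a rough feel for joint-level adaptivity. This is a toy stand-in for the JSFL idea, not the CAG implementation: in the paper the filters are predicted from the sequence itself, whereas here the gains are simply passed in, and all names are hypothetical.

```python
def adaptive_graph_conv(feats, adj, joint_gains):
    """One graph-convolution step with a per-joint output gain.
    feats: J x D joint features; adj: J x J adjacency weights;
    joint_gains: length-J scales standing in for joint-specific filters."""
    n_joints, dim = len(feats), len(feats[0])
    out = []
    for i in range(n_joints):
        # Aggregate features from joints adjacent to joint i.
        agg = [0.0] * dim
        for j in range(n_joints):
            if adj[i][j]:
                for d in range(dim):
                    agg[d] += adj[i][j] * feats[j][d]
        # Scale the aggregate by this joint's own gain.
        out.append([joint_gains[i] * v for v in agg])
    return out

# Identity adjacency: each joint keeps its own features, scaled per joint.
result = adaptive_graph_conv([[1.0, 0.0], [0.0, 3.0]],
                             [[1.0, 0.0], [0.0, 1.0]],
                             [2.0, 1.0])
```

The VATL idea would correspond to predicting the `adj` matrix itself from the estimated view condition instead of fixing it to the skeleton topology.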
Radar and RGB-depth sensors for fall detection: a review
This paper reviews recent works in the literature on the use of systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends related to this research field. Systems that reliably detect fall events and promptly alert carers and first responders have gained significant interest in the past few years in order to address the societal issue of an increasing number of elderly people living alone, with the associated risk of them falling and the consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors is related to their capability to enable contactless and non-intrusive monitoring, which is an advantage for practical deployment and users' acceptance and compliance, compared with other sensor technologies such as video cameras or wearables. Furthermore, the possibility of combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors that this paper discusses.
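The fusion of heterogeneous sensors mentioned above is often realized as late decision fusion: each sensor pipeline outputs a fall probability, and the system combines them before thresholding. The sketch below is a generic illustration under that assumption; the function names, weights, and threshold are hypothetical and not from any specific system in the review.

```python
def fuse_probabilities(p_radar, p_rgbd, w_radar=0.5):
    """Late fusion: weighted average of per-sensor fall probabilities
    (hypothetical fusion rule; weights would be tuned per deployment)."""
    return w_radar * p_radar + (1 - w_radar) * p_rgbd

def detect_fall(p_radar, p_rgbd, threshold=0.5, w_radar=0.5):
    # Raise an alert when the fused probability crosses the threshold.
    return fuse_probabilities(p_radar, p_rgbd, w_radar) >= threshold

# Both sensors agree on a likely fall -> alert; both low -> no alert.
alert = detect_fall(0.9, 0.8)
quiet = detect_fall(0.1, 0.2)
```

Weighting lets a deployment favor the sensor that is more reliable in its setting, e.g. radar in low light where a depth camera may struggle.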