HybridGait: A Benchmark for Spatial-Temporal Cloth-Changing Gait Recognition with Hybrid Explorations
Existing gait recognition benchmarks mostly include minor clothing variations
in laboratory environments but lack persistent changes in appearance over
time and space. In this paper, we propose the first in-the-wild benchmark
CCGait for cloth-changing gait recognition, which incorporates diverse clothing
changes, indoor and outdoor scenes, and multi-modal statistics over 92 days. To
further address the coupling effect of clothing and viewpoint variations, we
propose a hybrid approach HybridGait that exploits both temporal dynamics and
the projected 2D information of 3D human meshes. Specifically, we introduce a
Canonical Alignment Spatial-Temporal Transformer (CA-STT) module to encode
human joint position-aware features, and fully exploit 3D dense priors via a
Silhouette-guided Deformation with 3D-2D Appearance Projection (SilD) strategy.
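The abstract's "projected 2D information of 3D human meshes" refers to mapping 3D body geometry onto the image plane. The paper's SilD strategy is not detailed here, so as a minimal illustrative sketch (not the authors' method), the core 3D-to-2D step can be approximated with a standard pinhole-camera projection; the focal length and image center below are hypothetical placeholder values.

```python
import numpy as np

def project_to_2d(vertices, focal=1000.0, center=(112.0, 112.0)):
    """Project 3D mesh vertices of shape (N, 3) onto the image plane
    using a simple pinhole camera model. This is an illustrative
    stand-in for the kind of 3D-to-2D appearance projection the
    abstract describes, not the paper's actual SilD implementation."""
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    u = focal * x / z + center[0]  # horizontal pixel coordinate
    v = focal * y / z + center[1]  # vertical pixel coordinate
    return np.stack([u, v], axis=1)

# A vertex on the optical axis projects to the image center (112, 112);
# off-axis vertices are displaced in proportion to focal / depth.
pts = np.array([[0.0, 0.0, 2.0], [0.1, -0.1, 2.0]])
print(project_to_2d(pts))
```

Projected 2D coordinates like these can then be compared against extracted silhouettes, which is the general idea behind using dense 3D priors to guide appearance features under clothing change.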
Our contributions are twofold: we provide a challenging benchmark CCGait that
captures realistic appearance changes across expanded time and space, and we
propose a hybrid framework HybridGait that outperforms prior works on CCGait
and Gait3D benchmarks. Our project page is available at
https://github.com/HCVLab/HybridGait.