Cross-Modality Paired-Images Generation for RGB-Infrared Person Re-Identification
RGB-Infrared (IR) person re-identification is very challenging due to the
large cross-modality variations between RGB and IR images. The key is to
learn aligned features that bridge the RGB and IR modalities. However, because
correspondence labels between individual pairs of RGB and IR images are
unavailable, most methods try to alleviate the variations with set-level
alignment, reducing the distance between the entire RGB and IR sets. Yet this set-level
alignment may lead to misalignment of some instances, which limits the
performance for RGB-IR Re-ID. Different from existing methods, in this paper,
we propose to generate cross-modality paired-images and perform both global
set-level and fine-grained instance-level alignments. Our proposed method
enjoys several merits. First, our method can perform set-level alignment by
disentangling modality-specific and modality-invariant features. Compared with
conventional methods, ours can explicitly remove the modality-specific features
and the modality variation can be better reduced. Second, given cross-modality
unpaired-images of a person, our method can generate cross-modality paired
images from exchanged images. With them, we can directly perform instance-level
alignment by minimizing distances of every pair of images. Extensive
experimental results on two standard benchmarks demonstrate that the proposed
model performs favourably against state-of-the-art methods. In particular, on the
SYSU-MM01 dataset, our model achieves gains of 9.2% and 7.7% in terms of Rank-1
accuracy and mAP, respectively. Code is available at https://github.com/wangguanan/JSIA-ReID.
Comment: accepted by AAAI'2
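The instance-level alignment described in this abstract, minimizing the distance between every generated cross-modality pair, can be illustrated with a simple mean squared-distance loss over paired features. This is only a minimal sketch under assumed inputs (the function name and the (N, D) feature-matrix shapes are illustrative, not the paper's actual implementation):

```python
import numpy as np

def instance_alignment_loss(rgb_feats, ir_feats):
    """Mean squared L2 distance between paired cross-modality features.

    rgb_feats, ir_feats: (N, D) arrays; row i of each holds the feature
    of the same (generated) paired instance. Assumed layout, for illustration.
    """
    diffs = rgb_feats - ir_feats            # per-pair feature difference
    return float(np.mean(np.sum(diffs ** 2, axis=1)))  # average over pairs
```

When the paired features coincide exactly, the loss is zero; minimizing it pulls each RGB feature toward its generated IR counterpart, which is the instance-level counterpart to reducing the distance between the whole RGB and IR sets.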
A Survey of Face Recognition
Recent years have witnessed breakthroughs in face recognition (FR) driven by deep
convolutional neural networks. Dozens of FR papers are published every year, and
some have been adopted by industry, playing an important role in daily life
through applications such as device unlocking and mobile payment.
This paper provides an introduction to face recognition, including
its history, pipeline, algorithms based on conventional manually designed
features or deep learning, mainstream training and evaluation datasets, and
related applications. We have analyzed and compared as many state-of-the-art
works as possible, and carefully designed a set of experiments to study the
effect of backbone size and data distribution. This survey accompanies the
tutorial named The Practical Face Recognition Technology in the Industrial
World at FG2023.
Re-Identification in Urban Scenarios: A Review of Tools and Methods
With the widespread use of surveillance cameras and heightened awareness of public security, object and person Re-Identification (ReID), the task of recognizing objects across non-overlapping camera networks, has attracted particular attention in the computer vision and pattern recognition communities. Given an image or video of an object-of-interest (the query), object ReID aims to identify that object in images or video taken from other cameras. After many years of great effort, object ReID remains a notably challenging task, mainly because an object's appearance may change dramatically across camera views due to significant variations in illumination, pose, viewpoint, and background clutter. With the advent of Deep Neural Networks (DNNs), many network architectures achieving high performance have been proposed. With the aim of identifying the most promising methods for future robust ReID implementations, this review mainly focuses on person and multi-object ReID and on auxiliary image-enhancement methods, which are crucial for robust object ReID, while highlighting the limitations of the identified methods. This is a very active field, as evidenced by the dates of the publications surveyed. However, most works use data from very different datasets and genres, which is an obstacle to training and using broadly generalized DNN models. Although model performance has reached satisfactory levels on particular datasets, clear trends were observed: the use of 3D Convolutional Neural Networks (CNNs), attention mechanisms to capture object-relevant features, and generative adversarial training to overcome data limitations. However, there is still room for improvement, notably in using anonymized images from urban scenarios to comply with public privacy legislation.
The main challenges that remain in the ReID field, and prospects for future research directions towards ReID in dense urban scenarios, are also discussed.
GraFT: Gradual Fusion Transformer for Multimodal Re-Identification
Object Re-Identification (ReID) is pivotal in computer vision, witnessing an
escalating demand for adept multimodal representation learning. Current models,
although promising, reveal scalability limitations with increasing modalities
as they rely heavily on late fusion, which postpones the integration of
modality-specific insights. To address this, we introduce the Gradual
Fusion Transformer (GraFT) for multimodal ReID. At its core, GraFT employs
learnable fusion tokens that guide self-attention across encoders, adeptly
capturing both modality-specific and object-specific features. Further
bolstering its efficacy, we introduce a novel training paradigm combined with
an augmented triplet loss, optimizing the ReID feature embedding space. We
demonstrate these enhancements through extensive ablation studies and show that
GraFT consistently surpasses established multimodal ReID benchmarks.
Additionally, aiming for deployment versatility, we have integrated neural
network pruning into GraFT, offering a balance between model size and
performance.
Comment: 3 Borderline Reviews at WACV, 8 pages, 5 figures, 8 table
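The abstract mentions an augmented triplet loss for optimizing the ReID embedding space. The augmentation itself is not specified here, so the sketch below shows only the common baseline triplet loss on L2 distances that such variants build on (the function name, margin value, and vector inputs are illustrative assumptions, not GraFT's actual formulation):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Baseline triplet loss: pull the positive (same identity) closer to
    the anchor than the negative (different identity) by at least `margin`.

    anchor, positive, negative: 1-D feature vectors (assumed layout).
    """
    d_pos = np.linalg.norm(anchor - positive)  # anchor-positive distance
    d_neg = np.linalg.norm(anchor - negative)  # anchor-negative distance
    return max(d_pos - d_neg + margin, 0.0)    # hinge: zero once satisfied
```

The loss vanishes once the negative is farther from the anchor than the positive by the margin, which is what shapes an embedding space where same-identity samples cluster across modalities.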