3 research outputs found
Exploring Spatial Significance via Hybrid Pyramidal Graph Network for Vehicle Re-identification
Existing vehicle re-identification methods commonly use spatial pooling
operations to aggregate feature maps extracted by off-the-shelf backbone
networks. However, they ignore the spatial significance of the feature maps,
which ultimately degrades re-identification performance. In this paper,
firstly, an innovative spatial graph network (SGN) is proposed to explore the
spatial significance of feature maps in detail. The SGN stacks multiple
spatial graphs (SGs). Each SG treats the elements of a feature map as nodes
and uses spatial neighborhood relationships to determine the edges among them.
During the SGN's propagation, each node and its spatial neighbors on an SG are
aggregated to the next SG. On the next SG, each aggregated node is re-weighted
with a learnable parameter to capture the significance of the corresponding
location. Secondly, a novel pyramidal graph network (PGN) is designed to
comprehensively explore the spatial significance of feature maps at multiple
scales. The PGN organizes multiple SGNs in a pyramidal manner, with each SGN
handling feature maps of a specific scale. Finally, a hybrid pyramidal graph
network (HPGN) is developed by embedding the PGN behind a ResNet-50 based
backbone network. Extensive experiments on three large-scale vehicle databases
(i.e., VeRi776, VehicleID, and VeRi-Wild) demonstrate that the proposed HPGN is
superior to state-of-the-art vehicle re-identification approaches.
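The SGN's core step, as described above, is neighborhood aggregation followed by a learnable per-location re-weighting. The following is a minimal NumPy sketch of that idea only, not the paper's implementation: the 3x3 neighborhood, the mean aggregation, and all names are assumptions.

```python
import numpy as np

def spatial_graph_layer(feat, weights):
    """One spatial graph (SG) step: every element (node) of the feature map
    aggregates its 3x3 spatial neighborhood, then is re-weighted by a
    learnable per-location parameter expressing spatial significance.
    The 3x3 neighborhood and mean aggregation are illustrative assumptions."""
    H, W, _ = feat.shape
    padded = np.pad(feat, ((1, 1), (1, 1), (0, 0)))  # zero-pad the borders
    agg = np.zeros_like(feat)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            agg += padded[1 + dy:1 + dy + H, 1 + dx:1 + dx + W, :]
    agg /= 9.0  # mean over the node and its 8 spatial neighbors
    return agg * weights[..., None]  # per-location learnable re-weighting

# Stacking several such SGs gives an SGN; running SGNs on feature maps of
# several scales in parallel gives the pyramidal arrangement (PGN).
feat = np.random.rand(16, 16, 256).astype(np.float32)
w = np.ones((16, 16), dtype=np.float32)  # learnable in the real model
out = spatial_graph_layer(feat, w)       # same shape as the input map
```

In the actual HPGN these layers sit behind a ResNet-50 backbone and the re-weighting parameters are trained end-to-end with the retrieval objective.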
The Devil is in the Details: Self-Supervised Attention for Vehicle Re-Identification
In recent years, the research community has approached the problem of vehicle
re-identification (re-id) with attention-based models, specifically focusing on
regions of a vehicle containing discriminative information. These re-id methods
rely on expensive key-point labels, part annotations, and additional attributes
including vehicle make, model, and color. Given the large number of vehicle
re-id datasets with various levels of annotations, strongly-supervised methods
are unable to scale across different domains. In this paper, we present
Self-supervised Attention for Vehicle Re-identification (SAVER), a novel
approach to effectively learn vehicle-specific discriminative features. Through
extensive experimentation, we show that SAVER improves upon the
state-of-the-art on challenging VeRi, VehicleID, Vehicle-1M and VERI-Wild
datasets.
Comment: Accepted to the European Conference on Computer Vision (ECCV) 2020.
Part-Guided Attention Learning for Vehicle Instance Retrieval
Vehicle instance retrieval often requires one to recognize the fine-grained
visual differences between vehicles. Besides the holistic appearance of
vehicles which is easily affected by the viewpoint variation and distortion,
vehicle parts also provide crucial cues to differentiate near-identical
vehicles. Motivated by these observations, we introduce a Part-Guided Attention
Network (PGAN) to pinpoint the prominent part regions and effectively combine
the global and part information for discriminative feature learning. PGAN first
detects the locations of different part components and salient regions
regardless of the vehicle identity, which serve as the bottom-up attention to
narrow down the possible searching regions. To estimate the importance of
detected parts, we propose a Part Attention Module (PAM) to adaptively locate
the most discriminative regions with high-attention weights and suppress the
distraction of irrelevant parts with relatively low weights. The PAM is guided
by the instance retrieval loss, thereby providing top-down attention computed
at the level of car parts and other salient regions. Finally, we aggregate the
global appearance and part features to further improve retrieval performance.
The PGAN combines part-guided
bottom-up and top-down attention, global and part visual features in an
end-to-end framework. Extensive experiments demonstrate that the proposed
method achieves new state-of-the-art vehicle instance retrieval performance on
four large-scale benchmark datasets.
Comment: 12 pages.
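The PAM's weighting scheme described above, which scores detected parts, turns the scores into attention weights so that discriminative parts dominate and irrelevant ones are suppressed, and fuses the result with the global feature, might look like the following minimal sketch. The scoring projection, the softmax, and all names here are assumptions for illustration, not the paper's actual design.

```python
import numpy as np

def part_attention(global_feat, part_feats, proj):
    """Score each detected part with a learned projection (assumed here),
    softmax the scores into attention weights (high weight = discriminative
    part, low weight = distractor), then concatenate the attended part
    feature with the global appearance feature."""
    scores = part_feats @ proj                       # (P,) one score per part
    scores = scores - scores.max()                   # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # top-down attention
    attended = weights @ part_feats                  # weighted sum over parts
    return np.concatenate([global_feat, attended])   # global + part fusion

g = np.random.rand(256).astype(np.float32)          # holistic appearance feature
parts = np.random.rand(5, 256).astype(np.float32)   # 5 detected part regions
proj = np.random.rand(256).astype(np.float32)       # learned scoring vector
desc = part_attention(g, parts, proj)               # final retrieval descriptor
```

In the real PGAN the part regions come from a detector run regardless of vehicle identity (bottom-up attention), and the weighting is trained through the instance retrieval loss (top-down attention).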