Cross-spectral local descriptors via quadruplet network
This paper presents a novel CNN-based architecture, referred to as Q-Net, to learn local feature descriptors that are useful for matching image patches from two different spectral bands. Given correctly matched and non-matching cross-spectral image pairs, a quadruplet network is trained to map input image patches to a common Euclidean space, regardless of the input spectral band. Our approach is inspired by the recent success of triplet networks in the visible spectrum, but is adapted for cross-spectral scenarios, where, for each matching pair, there are always two possible non-matching patches: one for each spectrum. Experimental evaluations on a public cross-spectral VIS-NIR dataset show that the proposed approach improves on the state of the art. Moreover, the proposed technique can also be used in mono-spectral settings, obtaining performance similar to triplet network descriptors while requiring less training data.
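The core idea — one matching pair plus one non-matching patch per spectrum — can be sketched as a quadruplet-style hinge loss. This is an illustrative reconstruction in NumPy, not Q-Net's actual formulation; the function name, margin value, and use of plain Euclidean distance are assumptions:

```python
import numpy as np

def quadruplet_loss(anchor, positive, neg_same, neg_cross, margin=1.0):
    """Hypothetical sketch: anchor and positive are descriptors of a
    matched cross-spectral pair; neg_same / neg_cross are descriptors of
    non-matching patches, one from each spectral band."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg1 = np.linalg.norm(anchor - neg_same)     # non-match in the anchor's band
    d_neg2 = np.linalg.norm(positive - neg_cross)  # non-match in the other band
    # Hinge on both negatives so matched pairs end up closer than either non-match.
    return max(0.0, d_pos - d_neg1 + margin) + max(0.0, d_pos - d_neg2 + margin)
```

Once descriptors of true matches are closer than both per-spectrum negatives by the margin, the loss vanishes.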
Deep Adaptive Feature Embedding with Local Sample Distributions for Person Re-identification
Person re-identification (re-id) aims to match pedestrians observed by disjoint camera views. It attracts increasing attention in computer vision due to its importance to surveillance systems. To combat the major challenge of cross-view visual variations, deep embedding approaches learn a compact feature space from images such that Euclidean distances correspond to a cross-view similarity metric. However, a global Euclidean distance cannot faithfully characterize the ideal similarity in a complex visual feature space, because features of pedestrian images exhibit unknown distributions due to large variations in pose, illumination, and occlusion. Moreover, intra-personal training samples within a local range are robust guides for deep embedding against uncontrolled variations, but they cannot be captured by a global Euclidean distance. In this paper, we study person re-id by proposing a novel sampling scheme that mines suitable positives (i.e., intra-class samples) within a local range to improve the deep embedding in the context of large intra-class variations. Our method learns a deep similarity metric adaptive to the local sample structure by minimizing each sample's local distances while propagating through the relationships between samples to attain whole intra-class minimization. To this end, a novel objective function is proposed to jointly optimize similarity metric learning, local positive mining, and robust deep embedding. This yields local discrimination by selecting local-ranged positive samples, and the learned features are robust to dramatic intra-class variations. Experiments on benchmarks show state-of-the-art results achieved by our method. Comment: Published in Pattern Recognition
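The local positive mining step described above can be sketched as selecting, for each anchor, only the few same-identity samples nearest in feature space. This is a minimal illustration under assumed names (`mine_local_positives`, `k`), not the paper's exact sampler or objective:

```python
import numpy as np

def mine_local_positives(features, labels, anchor_idx, k=2):
    """Illustrative sketch: return indices of the k intra-class samples
    closest to the anchor in feature space, so the embedding is guided
    by positives within a local range rather than all positives."""
    anchor = features[anchor_idx]
    same_class = [i for i in range(len(features))
                  if labels[i] == labels[anchor_idx] and i != anchor_idx]
    ranked = sorted(same_class, key=lambda i: np.linalg.norm(features[i] - anchor))
    return ranked[:k]
```

Distant same-identity samples (e.g. under heavy occlusion or pose change) are thereby excluded from driving the metric, which is the intuition behind local-range positive selection.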
LiDAR-Based Place Recognition For Autonomous Driving: A Survey
LiDAR-based place recognition (LPR) plays a pivotal role in autonomous driving, assisting Simultaneous Localization and Mapping (SLAM) systems in reducing accumulated errors and achieving reliable localization. However, existing reviews predominantly concentrate on visual place recognition (VPR) methods. Despite the recent remarkable progress in LPR, to the best of our knowledge there is no dedicated systematic review of this area. This paper bridges the gap by providing a comprehensive review of place recognition methods employing LiDAR sensors, thus facilitating and encouraging further research. We commence by delving into the problem formulation of place recognition, exploring existing challenges, and describing relations to previous surveys. Subsequently, we conduct an in-depth review of related research, offering detailed classifications, strengths and weaknesses, and architectures. Finally, we summarize existing datasets, commonly used evaluation metrics, and comprehensive evaluation results of various methods on public datasets. This paper can serve as a valuable tutorial for newcomers entering the field of place recognition and for researchers interested in long-term robot localization. We pledge to maintain an up-to-date project on our website: https://github.com/ShiPC-AI/LPR-Survey. Comment: 26 pages, 13 figures, 5 tables
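Among the evaluation metrics such surveys summarize, Recall@N is the most common for place recognition: a query succeeds if any of its top-N retrieved map entries is a true match. A minimal sketch, with descriptors and ground-truth sets as assumed inputs:

```python
import numpy as np

def recall_at_n(query_descs, db_descs, gt_matches, n=1):
    """Sketch of Recall@N: fraction of queries whose top-n nearest
    database descriptors contain at least one ground-truth match.
    gt_matches[i] is the set of correct database indices for query i."""
    hits = 0
    for q, truths in zip(query_descs, gt_matches):
        dists = np.linalg.norm(db_descs - q, axis=1)  # distance to every map entry
        top_n = np.argsort(dists)[:n]
        hits += any(i in truths for i in top_n)
    return hits / len(query_descs)
```

In practice the database descriptors come from a prior mapping session and the ground truth from position thresholds (e.g. within a few meters), but those details vary by dataset.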
AutoMerge: A Framework for Map Assembling and Smoothing in City-scale Environments
We present AutoMerge, a LiDAR data processing framework for assembling a large number of map segments into a complete map. Traditional large-scale map merging methods are fragile to incorrect data associations and are primarily limited to working offline. AutoMerge utilizes multi-perspective fusion and adaptive loop closure detection for accurate data associations, and it uses incremental merging to assemble large maps from individual trajectory segments given in random order and with no initial estimates. Furthermore, after assembling the segments, AutoMerge performs fine matching and pose-graph optimization to globally smooth the merged map. We demonstrate AutoMerge on both city-scale merging (120 km) and campus-scale repeated merging (4.5 km x 8). The experiments show that AutoMerge (i) surpasses the second- and third-best methods by 14% and 24% recall in segment retrieval, (ii) achieves comparable 3D mapping accuracy for 120 km large-scale map assembly, and (iii) is robust to temporally-spaced revisits. To the best of our knowledge, AutoMerge is the first mapping approach that can merge hundreds of kilometers of individual segments without the aid of GPS. Comment: 18 pages, 18 figures
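The bookkeeping behind incremental merging of segments arriving in random order can be illustrated with a union-find structure: each accepted data association fuses two partial maps into one. This toy sketch covers only the grouping step; AutoMerge's actual retrieval, fine matching, and pose-graph smoothing are not modeled here, and all names are illustrative:

```python
class SegmentGroups:
    """Toy union-find over trajectory segments: merge(a, b) records that
    an accepted loop closure links segments a and b, and n_maps() counts
    the remaining disconnected partial maps."""
    def __init__(self, n_segments):
        self.parent = list(range(n_segments))

    def find(self, s):
        while self.parent[s] != s:
            self.parent[s] = self.parent[self.parent[s]]  # path halving
            s = self.parent[s]
        return s

    def merge(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

    def n_maps(self):
        return len({self.find(s) for s in range(len(self.parent))})
```

Because union-find is order-independent, the same final map emerges no matter the order in which segment associations are discovered, which mirrors the "random order, no initial estimates" setting.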
Deep Image Retrieval: A Survey
In recent years, a vast amount of visual content has been generated and shared in various fields, such as social media platforms, medical imaging, and robotics. This abundance of content creation and sharing has introduced new challenges. In particular, searching databases for similar content, i.e., content-based image retrieval (CBIR), is a long-established research area in which more efficient and accurate methods are needed for real-time retrieval. Artificial intelligence has made progress in CBIR and has significantly facilitated intelligent search. In this survey, we organize and review recent CBIR works developed on deep learning algorithms and techniques, including insights and techniques from recent papers. We identify and present the commonly used benchmarks and evaluation methods of the field, collect common challenges, and propose promising future directions. More specifically, we focus on image retrieval with deep learning and organize the state-of-the-art methods according to the types of deep network structures, deep features, feature enhancement methods, and network fine-tuning strategies. Our survey considers a wide variety of recent methods, aiming to promote a global view of the field of instance-based CBIR. Comment: 20 pages, 11 figures
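The retrieval step shared by the deep-feature CBIR methods this survey covers can be sketched in a few lines: L2-normalize precomputed embeddings and rank the gallery by cosine similarity. The feature extractor (the deep network) is assumed to run upstream, and the function names here are illustrative:

```python
import numpy as np

def retrieve(query_feat, gallery_feats, top_k=5):
    """Minimal instance-retrieval sketch over precomputed deep features:
    normalize to unit length, then rank gallery images by cosine
    similarity to the query."""
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ q                        # cosine similarity to every gallery image
    return np.argsort(-sims)[:top_k]    # indices of the most similar images
```

Feature enhancement and fine-tuning strategies, as organized in the survey, change how `query_feat` and `gallery_feats` are produced; the ranking step itself stays this simple.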