3DFeat-Net: Weakly Supervised Local 3D Features for Point Cloud Registration
In this paper, we propose 3DFeat-Net, which learns both a 3D feature
detector and descriptor for point cloud matching using weak supervision. Unlike
many existing works, we do not require manual annotation of matching point
clusters. Instead, we leverage alignment and attention mechanisms to learn
feature correspondences from GPS/INS-tagged 3D point clouds without explicitly
specifying them. We create training and benchmark outdoor Lidar datasets, and
experiments show that 3DFeat-Net obtains state-of-the-art performance on these
gravity-aligned datasets.
Comment: 17 pages, 6 figures. Accepted in ECCV 2018.
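The weak supervision described above boils down to weighting descriptor distances by a learned attention score, so that no per-point match labels are needed. A minimal sketch of such an attention-weighted distance (illustrative names, not the 3DFeat-Net code):

```python
import numpy as np

def attention_weighted_distance(desc_a, desc_b, attn_a):
    """Attention-weighted descriptor distance between two roughly
    aligned point clusters (a hedged sketch; the real network learns
    both the descriptors and the attention end to end).

    desc_a, desc_b : (N, D) descriptors of N putatively matched points
    attn_a         : (N,) attention scores from the detector branch
    """
    w = attn_a / attn_a.sum()                    # normalise attention to weights
    d = np.linalg.norm(desc_a - desc_b, axis=1)  # per-point L2 distance
    return float((w * d).sum())                  # attention-weighted mean distance
```

Points the detector deems unreliable receive low attention and contribute little to the loss, which is what lets GPS/INS-level alignment substitute for manual match annotation.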
Pose-Guided Multi-Granularity Attention Network for Text-Based Person Search
Text-based person search aims to retrieve the corresponding person images in
an image database given a sentence describing the person, and holds great
potential for applications such as video surveillance.
Extracting the visual content corresponding to the description is the key to
this cross-modal matching problem. Moreover, correlated images and descriptions
involve different granularities of semantic relevance, which is usually ignored
by previous methods. To exploit the multilevel corresponding visual content,
we propose a pose-guided multi-granularity attention network (PMA). First, we
propose a coarse alignment network (CA) that selects the image regions related
to the global description via similarity-based attention. To further capture
phrase-related visual body parts, a fine-grained alignment network (FA) is
proposed, which employs pose information to learn latent semantic alignments
between visual body parts and textual noun phrases. To verify the effectiveness
of our model, we perform extensive experiments on the CUHK Person Description
Dataset (CUHK-PEDES), which is currently the only available dataset for
text-based person search. Experimental results show that our approach
outperforms the state-of-the-art methods by 15% in terms of the top-1 metric.
Comment: Published in AAAI 2020 (oral).
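The coarse alignment step described above amounts to weighting image regions by their similarity to a global sentence embedding. A minimal sketch of such similarity-based attention (illustrative, not the PMA implementation):

```python
import numpy as np

def coarse_alignment_attention(region_feats, text_feat):
    """Similarity-based attention: weight image regions by their
    cosine similarity to a global description embedding (a hedged
    sketch of the coarse-alignment idea).

    region_feats : (R, D) features of R image regions
    text_feat    : (D,) global sentence embedding
    """
    r = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    t = text_feat / np.linalg.norm(text_feat)
    sim = r @ t                                 # cosine similarity per region
    attn = np.exp(sim) / np.exp(sim).sum()      # softmax over regions
    return attn @ region_feats                  # attended visual feature (D,)
```

Regions unrelated to the description get low softmax weight, so the aggregated visual feature is dominated by the text-relevant content.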
Comparator Networks
The objective of this work is set-based verification, e.g. to decide if two
sets of images of a face are of the same person or not. The traditional
approach to this problem is to learn to generate a feature vector per image,
aggregate them into one vector to represent the set, and then compute the
cosine similarity between sets. Instead, we design a neural network
architecture that can directly learn set-wise verification. Our contributions
are: (i) We propose a Deep Comparator Network (DCN) that can ingest a pair of
sets (each may contain a variable number of images) as inputs, and compute a
similarity between the pair--this involves attending to multiple discriminative
local regions (landmarks), and comparing local descriptors between pairs of
faces; (ii) To encourage high-quality representations for each set, internal
competition is introduced for recalibration based on the landmark score; (iii)
Inspired by image retrieval, a novel hard sample mining regime is proposed to
control the sampling process, such that the DCN is complementary to the
standard image classification models. Evaluations on the IARPA Janus face
recognition benchmarks show that the comparator networks outperform the
previous state-of-the-art results by a large margin.
Comment: To appear in ECCV 2018.
D2-Net: A Trainable CNN for Joint Detection and Description of Local Features
In this work we address the problem of finding reliable pixel-level
correspondences under difficult imaging conditions. We propose an approach
where a single convolutional neural network plays a dual role: It is
simultaneously a dense feature descriptor and a feature detector. By postponing
the detection to a later stage, the obtained keypoints are more stable than
their traditional counterparts based on early detection of low-level
structures. We show that this model can be trained using pixel correspondences
extracted from readily available large-scale SfM reconstructions, without any
further annotations. The proposed method obtains state-of-the-art performance
on both the difficult Aachen Day-Night localization dataset and the InLoc
indoor localization benchmark, as well as competitive performance on other
benchmarks for image matching and 3D reconstruction.
Comment: Accepted at CVPR 2019.
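The describe-then-detect idea above derives a soft keypoint score directly from the dense feature map, so that detection happens after description. A simplified sketch of such a scoring rule (it omits the local spatial window the paper uses, applying the spatial softmax globally instead):

```python
import numpy as np

def soft_detection_score(fmap):
    """Soft keypoint score from a dense feature map (a hedged,
    simplified sketch of describe-then-detect scoring).

    fmap : (C, H, W) non-negative dense CNN feature map
    """
    c, h, w = fmap.shape
    # channel selection: ratio of each activation to the per-location max
    beta = fmap / (fmap.max(axis=0, keepdims=True) + 1e-8)
    # spatial selection: softmax over all locations, per channel
    flat = fmap.reshape(c, -1)
    alpha = np.exp(flat) / np.exp(flat).sum(axis=1, keepdims=True)
    alpha = alpha.reshape(c, h, w)
    score = (alpha * beta).max(axis=0)          # best channel per location
    return score / (score.sum() + 1e-8)         # normalise to sum to 1
```

Because the score is a soft function of the same feature map that serves as the descriptor, keypoints are selected where the description itself is distinctive, which is what makes the detections more stable than early low-level detectors.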