Comparing Feature Detectors: A bias in the repeatability criteria, and how to correct it
Most computer vision applications rely on algorithms that find local
correspondences between different images. These algorithms detect and compare
stable local invariant descriptors centered at scale-invariant keypoints.
Because of the importance of the problem, new keypoint detectors and
descriptors are constantly being proposed, each one claiming to perform better
than (or to be complementary to) the preceding ones. This raises the question of a
fair comparison between very diverse methods. This evaluation has been mainly
based on a repeatability criterion of the keypoints under a series of image
perturbations (blur, illumination, noise, rotations, homotheties, homographies,
etc.). In this paper, we argue that the classic repeatability criterion is
biased towards algorithms producing redundant, overlapped detections. To
compensate for this bias, we propose a variant of the repeatability rate that
takes the descriptor overlap into account. We apply this variant to revisit the
popular benchmark by Mikolajczyk et al., on classic and new feature detectors.
Experimental evidence shows that the hierarchy of these feature detectors is
severely disrupted by the amended comparator.
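To make the criterion concrete, the following sketch computes a simplified repeatability rate for circular keypoint regions already mapped into a common frame. This is illustrative only: the actual benchmark uses elliptical regions and a 40% overlap-error threshold, and the function names and the 0.6 threshold here are assumptions.

```python
import numpy as np

def circle_overlap(k1, k2):
    """IoU-style overlap of two circular keypoint regions (x, y, r)."""
    (x1, y1, r1), (x2, y2, r2) = k1, k2
    d = np.hypot(x2 - x1, y2 - y1)
    if d >= r1 + r2:                      # disjoint circles
        inter = 0.0
    elif d <= abs(r1 - r2):               # one circle inside the other
        inter = np.pi * min(r1, r2) ** 2
    else:                                 # circular "lens" intersection area
        a1 = r1**2 * np.arccos((d**2 + r1**2 - r2**2) / (2 * d * r1))
        a2 = r2**2 * np.arccos((d**2 + r2**2 - r1**2) / (2 * d * r2))
        tri = 0.5 * np.sqrt((-d + r1 + r2) * (d + r1 - r2)
                            * (d - r1 + r2) * (d + r1 + r2))
        inter = a1 + a2 - tri
    union = np.pi * (r1**2 + r2**2) - inter
    return inter / union

def repeatability(kps_a, kps_b, min_overlap=0.6):
    """Classic repeatability: fraction of keypoints in A matched by some
    keypoint in B with region overlap above a threshold (keypoints are
    assumed to be expressed in the same reference frame already)."""
    repeated = sum(
        any(circle_overlap(ka, kb) >= min_overlap for kb in kps_b)
        for ka in kps_a)
    return repeated / len(kps_a)
```

Note how a detector emitting many near-duplicate keypoints gets every copy counted as repeated, which is exactly the redundancy bias the amended criterion corrects.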
Affine Subspace Representation for Feature Description
This paper proposes a novel Affine Subspace Representation (ASR) descriptor
to deal with affine distortions induced by viewpoint changes. Unlike the
traditional local descriptors such as SIFT, ASR inherently encodes local
information of multi-view patches, making it robust to affine distortions while
maintaining high discriminative ability. To this end, PCA is used to
represent affine-warped patches as compact, efficiently computed PCA-patch
vectors. Then, under the subspace assumption, namely that the PCA-patch
vectors of the various affine-warped patches of the same keypoint lie in a
low-dimensional linear subspace, the ASR descriptor is
obtained by a simple subspace-to-point mapping. Such a linear subspace
representation could accurately capture the underlying information of a
keypoint (local structure) under multiple views without sacrificing its
distinctiveness. To accelerate the computation of the ASR descriptor, a fast
approximate algorithm is proposed that moves the most expensive part (i.e.,
warping patches under various affine transformations) to an offline training stage.
Experimental results show that ASR not only outperforms state-of-the-art
descriptors under various image transformations, but also performs well without
a dedicated affine-invariant detector when dealing with viewpoint changes.
Comment: To appear in the 2014 European Conference on Computer Vision
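The subspace-to-point step can be illustrated with the standard projection-matrix construction; this is a sketch under the assumption that the paper's mapping is of this general form, and `subspace_to_point` and `k` are illustrative names, not the paper's API.

```python
import numpy as np

def subspace_to_point(vectors, k=2):
    """Map a set of PCA-patch vectors to one descriptor by
    (1) fitting a k-dimensional linear subspace via SVD and
    (2) flattening the basis-invariant projection matrix U @ U.T.

    `vectors`: (n, d) array, one PCA-patch vector per affine warp.
    """
    X = np.asarray(vectors, dtype=float)
    # Orthonormal basis of the best-fit k-dim subspace (left singular vectors).
    U, _, _ = np.linalg.svd(X.T, full_matrices=False)
    U = U[:, :k]
    P = U @ U.T                       # projection matrix: independent of basis choice
    iu = np.triu_indices(P.shape[0])  # upper triangle suffices (P is symmetric)
    desc = P[iu]
    return desc / np.linalg.norm(desc)
```

The point of using the projection matrix is that any orthonormal basis of the same subspace yields the identical descriptor, which is what makes comparing subspaces as points well defined.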
D2-Net: A Trainable CNN for Joint Detection and Description of Local Features
In this work we address the problem of finding reliable pixel-level
correspondences under difficult imaging conditions. We propose an approach
where a single convolutional neural network plays a dual role: it is
simultaneously a dense feature descriptor and a feature detector. By postponing
the detection to a later stage, the obtained keypoints are more stable than
their traditional counterparts based on early detection of low-level
structures. We show that this model can be trained using pixel correspondences
extracted from readily available large-scale SfM reconstructions, without any
further annotations. The proposed method obtains state-of-the-art performance
on both the difficult Aachen Day-Night localization dataset and the InLoc
indoor localization benchmark, as well as competitive performance on other
benchmarks for image matching and 3D reconstruction.
Comment: Accepted at CVPR 2019
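A hard (non-differentiable) version of this describe-then-detect rule can be sketched as follows, assuming a precomputed dense feature map; D2-Net itself uses a soft, trainable relaxation of this criterion, so treat the selection rule below as a simplified illustration.

```python
import numpy as np

def detect_from_features(F, border=1):
    """Describe-then-detect on a dense feature map F of shape (C, H, W):
    a pixel is kept as a keypoint if, on its channel of maximal response,
    it is also a spatial maximum in its 3x3 neighborhood. The descriptor
    is the L2-normalized feature vector at that pixel."""
    C, H, W = F.shape
    channel = F.argmax(axis=0)            # per-pixel dominant channel
    keypoints = []
    for i in range(border, H - border):
        for j in range(border, W - border):
            c = channel[i, j]
            patch = F[c, i - 1:i + 2, j - 1:j + 2]
            if F[c, i, j] >= patch.max() and F[c, i, j] > 0:
                d = F[:, i, j] / (np.linalg.norm(F[:, i, j]) + 1e-8)
                keypoints.append(((i, j), d))
    return keypoints
```

Because detection reads off the learned feature map rather than raw low-level structures, the same tensor supplies both the keypoints and their descriptors.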
Convolutional neural network architecture for geometric matching
We address the problem of determining correspondences between two images in
agreement with a geometric model such as an affine or thin-plate spline
transformation, and estimating its parameters. The contributions of this work
are three-fold. First, we propose a convolutional neural network architecture
for geometric matching. The architecture is based on three main components that
mimic the standard steps of feature extraction, matching and simultaneous
inlier detection and model parameter estimation, while being trainable
end-to-end. Second, we demonstrate that the network parameters can be trained
from synthetically generated imagery without the need for manual annotation and
that our matching layer significantly improves generalization to previously
unseen images. Finally, we show that the same model can perform both
instance-level and category-level matching giving state-of-the-art results on
the challenging Proposal Flow dataset.
Comment: In the 2017 IEEE Conference on Computer Vision and Pattern Recognition
(CVPR 2017)
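The matching component can be sketched as a correlation layer that scores every pair of locations between the two feature maps; the later stages regress transformation parameters from this volume. This is a simplified NumPy sketch with illustrative shapes and names, not the paper's exact implementation.

```python
import numpy as np

def correlation_layer(fA, fB):
    """Dense matching layer: cosine similarity between every pair of
    feature vectors from two feature maps of shape (C, H, W). Returns
    a (H, W, H*W) correlation volume, one similarity row per location
    of image A against all locations of image B."""
    C, H, W = fA.shape
    a = fA.reshape(C, -1)
    b = fB.reshape(C, -1)
    a = a / (np.linalg.norm(a, axis=0, keepdims=True) + 1e-8)
    b = b / (np.linalg.norm(b, axis=0, keepdims=True) + 1e-8)
    corr = a.T @ b                       # (H*W, H*W) pairwise similarities
    return corr.reshape(H, W, H * W)
```

Exposing only match scores, rather than raw features, to the regressor is what lets the network generalize to image content it has never seen: the transformation is estimated from the geometry of the correlations alone.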
Scale Invariant Interest Points with Shearlets
Shearlets form a relatively new directional multi-scale framework for signal
analysis that has been shown effective at enhancing signal discontinuities
such as edges and corners at multiple scales. In this work we address the
problem of detecting and describing blob-like features in the shearlet
framework. We derive a measure which is very effective for blob detection and
closely related to the Laplacian of Gaussian. We demonstrate that the measure
satisfies the perfect scale invariance property in the continuous case. In the
discrete setting, we derive algorithms for blob detection and keypoint
description. Finally, we provide qualitative justifications of our findings as
well as a quantitative evaluation on benchmark data. We also report
experimental evidence that our method is well suited to compressed and noisy
images, thanks to the sparsity property of shearlets.
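For reference, the scale-normalized Laplacian-of-Gaussian baseline that the shearlet measure is closely related to can be sketched as below; the scales and threshold are illustrative, and the paper's measure is computed in the shearlet domain instead.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_blobs(img, sigmas=(2, 4, 8), thresh=0.1):
    """Blob detection with the scale-normalized Laplacian of Gaussian:
    blobs are extrema of sigma^2 * LoG over space and scale."""
    stack = np.stack([s**2 * gaussian_laplace(img.astype(float), s)
                      for s in sigmas])
    S, H, W = stack.shape
    blobs = []
    for k in range(S):
        for i in range(1, H - 1):
            for j in range(1, W - 1):
                v = stack[k, i, j]
                # 3x3x3 neighborhood across space and adjacent scales
                nb = stack[max(k - 1, 0):k + 2, i - 1:i + 2, j - 1:j + 2]
                if abs(v) >= thresh and (v == nb.max() or v == nb.min()):
                    blobs.append((i, j, sigmas[k]))
    return blobs
```

A bright blob produces a negative extremum whose scale tracks the blob's size, which is the scale-invariance property the continuous shearlet-based measure reproduces exactly.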