Group Invariant Deep Representations for Image Instance Retrieval
Most image instance retrieval pipelines are based on the comparison of vectors, known as global image descriptors, between a query image and the database images. Due to their success in large-scale image classification, representations extracted from Convolutional Neural Networks (CNNs) are quickly gaining ground on Fisher Vectors (FVs) as state-of-the-art global descriptors for image instance retrieval. While CNN-based descriptors are generally noted for good retrieval performance at lower bitrates, they nevertheless present a number of drawbacks, including a lack of robustness to common object transformations such as rotations compared with their interest-point-based FV counterparts.
In this paper, we propose a method for computing invariant global descriptors
from CNNs. Our method implements a recently proposed mathematical theory for
invariance in a sensory cortex modeled as a feedforward neural network. The
resulting global descriptors can be made invariant to multiple arbitrary
transformation groups while retaining good discriminativeness.
Based on a thorough empirical evaluation using several publicly available
datasets, we show that our method is able to significantly and consistently
improve retrieval results every time a new type of invariance is incorporated.
We also show that our method, which has few parameters, is not prone to overfitting: improvements generalize well across datasets with different properties with regard to invariances. Finally, we show that our descriptors compare favourably to other state-of-the-art compact descriptors at similar bitrates, exceeding the highest retrieval results reported in the literature on some datasets. A dedicated dimensionality reduction step (quantization or hashing) may further improve the competitiveness of the descriptors.
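The invariant pooling idea can be illustrated compactly. The following is a minimal, self-contained sketch and not the authors' pipeline: the CNN is replaced by a deliberately orientation-sensitive stand-in descriptor, the transformation group is the four 90-degree rotations, and invariance is obtained by pooling moments over the orbit of the image under the group.

import numpy as np

def global_descriptor(image: np.ndarray) -> np.ndarray:
    # Stand-in for a CNN global descriptor (per-row mean intensities);
    # deliberately NOT rotation-invariant on its own.
    d = image.mean(axis=1).astype(np.float64)
    return d / (np.linalg.norm(d) + 1e-12)

def group_invariant_descriptor(image: np.ndarray, moments=(1, 2)) -> np.ndarray:
    # The four rotated copies sample the orbit of the image under the
    # rotation group; moments pooled over the orbit do not depend on which
    # group element was applied to the input.
    orbit = np.stack([global_descriptor(np.rot90(image, k)) for k in range(4)])
    pooled = np.concatenate([np.mean(orbit ** m, axis=0) for m in moments])
    return pooled / (np.linalg.norm(pooled) + 1e-12)

# Sanity check: an image and its rotated copy receive the same descriptor.
img = np.random.rand(32, 32)
assert np.allclose(group_invariant_descriptor(img),
                   group_invariant_descriptor(np.rot90(img)))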
Dynamic match kernel with deep convolutional features for image retrieval
For image retrieval methods based on bag of visual words, much attention has been paid to enhancing the discriminative power of the local features. Although retrieved images are usually similar to a query in minutiae, they may be significantly different from a semantic perspective, which can be effectively distinguished by convolutional neural networks (CNNs). Such images should not be considered relevant pairs. To tackle this problem, we propose to construct a dynamic match kernel by adaptively calculating the matching thresholds between query and candidate images based on the pairwise distance among deep CNN features. In contrast to the typical static match kernel, which is independent of the global appearance of retrieved images, the dynamic one leverages semantic similarity as a constraint for determining the matches. Accordingly, we propose a semantic-constrained retrieval framework by incorporating the dynamic match kernel, which focuses on matched patches between relevant images and filters out those between irrelevant pairs. Furthermore, we demonstrate that the proposed kernel complements recent methods, such as Hamming embedding, multiple assignment, local descriptor aggregation, and graph-based re-ranking, while it outperforms the static one under various settings on off-the-shelf evaluation metrics. We also propose to evaluate the matched patches both quantitatively and qualitatively. Extensive experiments on five benchmark data sets and large-scale distractors validate the merits of the proposed method against the state-of-the-art methods for image retrieval.
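As a rough illustration of the dynamic match kernel idea, the sketch below derives a per-pair Hamming threshold from the distance between global CNN descriptors and uses it to accept or reject local binary matches. The exponential mapping and the parameter names are assumptions made for this example, not the paper's formulation.

import numpy as np

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    # Hamming distance between two packed binary descriptors (uint8 arrays).
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def dynamic_threshold(cnn_q: np.ndarray, cnn_d: np.ndarray, base: float = 24.0) -> float:
    # Illustrative monotone mapping: semantically close pairs (small global
    # CNN distance) get a permissive threshold, distant pairs a strict one.
    return base * np.exp(-np.linalg.norm(cnn_q - cnn_d))

def count_matches(local_q, local_d, cnn_q, cnn_d) -> int:
    # Count local binary matches that pass the pair-specific threshold.
    t = dynamic_threshold(cnn_q, cnn_d)
    return sum(1 for q in local_q for d in local_d if hamming(q, d) <= t)

rng = np.random.default_rng(0)
cnn_q, cnn_d = rng.normal(size=128), rng.normal(size=128)
cnn_q, cnn_d = cnn_q / np.linalg.norm(cnn_q), cnn_d / np.linalg.norm(cnn_d)
local_q = rng.integers(0, 256, size=(5, 8), dtype=np.uint8)  # five 64-bit descriptors
local_d = rng.integers(0, 256, size=(7, 8), dtype=np.uint8)
print(count_matches(local_q, local_d, cnn_q, cnn_d))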
HPatches: A benchmark and evaluation of handcrafted and learned local descriptors
In this paper, we propose a novel benchmark for evaluating local image
descriptors. We demonstrate that the existing datasets and evaluation protocols
do not specify unambiguously all aspects of evaluation, leading to ambiguities
and inconsistencies in results reported in the literature. Furthermore, these
datasets are nearly saturated due to the recent improvements in local
descriptors obtained by learning them from large annotated datasets. Therefore,
we introduce a new large dataset suitable for training and testing modern
descriptors, together with strictly defined evaluation protocols in several
tasks such as matching, retrieval and classification. This allows for more realistic, and thus more reliable, comparisons in different application scenarios. We evaluate the performance of several state-of-the-art descriptors and analyse their properties. We show that a simple normalisation of traditional hand-crafted descriptors can boost their performance to the level of deep learning based descriptors within a realistic benchmark evaluation.
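The "simple normalisation" finding is in the spirit of RootSIFT-style post-processing. The sketch below is an assumption about the kind of normalisation meant, not necessarily the exact scheme evaluated in the paper; it applies L1 normalisation, an element-wise square root, and a final L2 normalisation to a non-negative SIFT-like descriptor.

import numpy as np

def rootsift(desc: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    # Assumes a non-negative descriptor such as SIFT.
    desc = desc / (desc.sum() + eps)            # L1 normalisation
    desc = np.sqrt(desc)                        # power-law (square-root) step
    return desc / (np.linalg.norm(desc) + eps)  # final L2 normalisation

sift_like = np.random.rand(128)                 # random SIFT-like 128-D descriptor
print(rootsift(sift_like)[:5])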
HBST: A Hamming Distance embedding Binary Search Tree for Visual Place Recognition
Reliable and efficient Visual Place Recognition is a major building block of modern SLAM systems. Building on our prior work, in this paper we present a Hamming Distance embedding Binary Search Tree (HBST) approach for binary descriptor matching and image retrieval. HBST allows for descriptor search and insertion in logarithmic time by exploiting particular properties of binary feature descriptors. We support the idea behind our search structure with a thorough analysis of the exploited descriptor properties and their effects on the completeness and complexity of search and insertion. To validate our claims we conducted comparative experiments for HBST and several state-of-the-art methods on a broad range of publicly available datasets. HBST is available as a compact open-source C++ header-only library.
Comment: Submitted to IEEE Robotics and Automation Letters (RA-L) 2018 with International Conference on Intelligent Robots and Systems (IROS) 2018 option, 8 pages, 10 figures
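To make the tree idea concrete, here is a minimal sketch of a bit-splitting search tree over binary descriptors. It is illustrative only: descriptors are assumed to be unpacked 0/1 bit vectors, the split bit is chosen naively by depth, and the actual HBST library selects split bits adaptively and exposes a C++ API rather than anything resembling the code below.

import numpy as np

class Node:
    def __init__(self):
        self.bit_index = None   # bit used to split at this node (None = leaf)
        self.left = None        # subtree for descriptors with bit == 0
        self.right = None       # subtree for descriptors with bit == 1
        self.items = []         # (descriptor, image_id) pairs stored at a leaf

class HammingBST:
    def __init__(self, n_bits: int = 256, max_leaf_size: int = 100):
        self.n_bits = n_bits
        self.max_leaf_size = max_leaf_size
        self.root = Node()

    def insert(self, desc: np.ndarray, image_id: int, node=None, depth: int = 0):
        if node is None:
            node = self.root
        if node.bit_index is None:                    # reached a leaf
            node.items.append((desc, image_id))
            if len(node.items) > self.max_leaf_size:  # split the leaf
                node.bit_index = depth % self.n_bits  # naive bit choice
                node.left, node.right = Node(), Node()
                items, node.items = node.items, []
                for d, i in items:
                    self.insert(d, i, node, depth)
            return
        child = node.right if desc[node.bit_index] else node.left
        self.insert(desc, image_id, child, depth + 1)

    def search(self, desc: np.ndarray):
        # Follow the query's bit path without backtracking and return the
        # candidate set stored at the reached leaf (approximate matching).
        node = self.root
        while node.bit_index is not None:
            node = node.right if desc[node.bit_index] else node.left
        return node.items

rng = np.random.default_rng(1)
tree = HammingBST(n_bits=64, max_leaf_size=4)
for i in range(200):
    tree.insert(rng.integers(0, 2, size=64, dtype=np.uint8), i)
query = rng.integers(0, 2, size=64, dtype=np.uint8)
print(len(tree.search(query)))   # number of candidates reached for the query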
Deep Shape Matching
We cast shape matching as metric learning with convolutional networks. We
break the end-to-end process of image representation into two parts. Firstly,
well-established efficient methods are chosen to turn the images into edge
maps. Secondly, the network is trained with edge maps of landmark images, which
are automatically obtained by a structure-from-motion pipeline. The learned
representation is evaluated on a range of different tasks, providing
improvements on challenging cases of domain generalization, generic sketch-based image retrieval, and its fine-grained counterpart. In contrast to other methods that learn a different model per task, object category, or domain, we use the same network throughout all our experiments, achieving state-of-the-art results on multiple benchmarks.
Comment: ECCV 2018
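The two-stage pipeline (edge extraction, then a metric-learned embedding) can be sketched structurally as follows. The components below are simple stand-ins chosen for the example, gradient magnitude for the edge detector and coarse grid pooling for the embedding, not the off-the-shelf detector or the trained CNN used in the paper.

import numpy as np

def edge_map(image: np.ndarray) -> np.ndarray:
    # Stand-in edge extractor: gradient magnitude of the grey-level image.
    gy, gx = np.gradient(image.astype(np.float64))
    return np.hypot(gx, gy)

def embed(edges: np.ndarray, grid: int = 8) -> np.ndarray:
    # Stand-in for the trained CNN embedding: pool the edge map on a coarse
    # grid and L2-normalise so comparison reduces to a dot product.
    h, w = edges.shape
    cells = edges[:h - h % grid, :w - w % grid].reshape(grid, h // grid, grid, w // grid)
    desc = cells.mean(axis=(1, 3)).ravel()
    return desc / (np.linalg.norm(desc) + 1e-12)

def shape_similarity(image_a: np.ndarray, image_b: np.ndarray) -> float:
    # Both inputs (photo or sketch) pass through the same edge-map + embedding
    # pipeline, which is what enables cross-domain matching.
    return float(np.dot(embed(edge_map(image_a)), embed(edge_map(image_b))))

img = np.random.rand(64, 64)
print(shape_similarity(img, img))   # ~1.0 for identical inputs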
Asymmetric Feature Maps with Application to Sketch Based Retrieval
We propose a novel concept of asymmetric feature maps (AFM), which allows evaluating multiple kernels between a query and database entries without increasing the memory requirements. To demonstrate the advantages of the AFM method, we derive a short vector image representation that, due to asymmetric feature maps, supports efficient scale and translation invariant sketch-based image retrieval. Unlike most short-code based retrieval systems, the proposed method provides the query localization in the retrieved image. The efficiency of the search is boosted by approximating the 2D translation search with a trigonometric polynomial of scores obtained from 1D projections. The projections are a special case of AFM. An order of magnitude speed-up is achieved compared to traditional trigonometric polynomials. The results are boosted by an image-based average query expansion, significantly exceeding the state of the art on standard benchmarks.
Comment: CVPR 2017
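The asymmetric-feature-map principle can be illustrated with a one-dimensional toy construction; this is an assumption-laden sketch, not the paper's actual image representation. A family of shift-invariant kernels written as trigonometric polynomials is evaluated as a dot product in which the database embedding is kernel-independent while the query embedding absorbs the kernel coefficients, so several kernels can be evaluated against the same stored database vectors.

import numpy as np

def database_map(y: float, n_terms: int) -> np.ndarray:
    # Kernel-independent embedding stored once per database entry.
    n = np.arange(1, n_terms + 1)
    return np.concatenate([np.cos(n * y), np.sin(n * y)])

def query_map(x: float, coeffs: np.ndarray) -> np.ndarray:
    # Query-side embedding carrying the coefficients of one kernel in the family.
    n = np.arange(1, len(coeffs) + 1)
    return np.concatenate([coeffs * np.cos(n * x), coeffs * np.sin(n * x)])

# Since cos(n(x - y)) = cos(nx)cos(ny) + sin(nx)sin(ny), the dot product of the
# two maps equals k(x, y) = sum_n c_n cos(n (x - y)) for any coefficient vector c.
x, y = 0.3, 1.1
coeffs = np.array([0.5, 0.3, 0.2])
lhs = float(np.dot(query_map(x, coeffs), database_map(y, 3)))
rhs = float(np.sum(coeffs * np.cos(np.arange(1, 4) * (x - y))))
assert np.isclose(lhs, rhs)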