Stochastic Attraction-Repulsion Embedding for Large Scale Image Localization
This paper tackles the problem of large-scale image-based localization (IBL),
where the spatial location of a query image is determined by retrieving the
most similar reference images from a large database. A critical task in
solving this problem is to learn a discriminative image representation that
captures the information relevant to localization. We propose a novel
representation-learning method with higher location-discriminating power. It
provides the following contributions: 1) we represent a place (location) as a
set of exemplar images depicting the same landmarks and aim to maximize
similarities among intra-place images while minimizing similarities among
inter-place images; 2) we model a similarity measure as a probability
distribution on L_2-metric distances between intra-place and inter-place image
representations; 3) we propose a new Stochastic Attraction and Repulsion
Embedding (SARE) loss function minimizing the KL divergence between the learned
and the actual probability distributions; 4) we give theoretical comparisons
between the SARE, triplet-ranking, and contrastive losses, analyzing their
gradients to explain why SARE performs better. Our SARE loss is easy to
implement and can be plugged into any CNN. Experiments show that our proposed method improves
the localization performance on standard benchmarks by a large margin.
Demonstrating the broad applicability of our method, we obtained the third
place out of 209 teams in the 2018 Google Landmark Retrieval Challenge. Our
code and model are available at https://github.com/Liumouliu/deepIBL.
Comment: ICC
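With a single positive and a set of negatives, the SARE objective reduces to a softmax cross-entropy over negative squared L2 distances, because the "actual" distribution puts all probability mass on the matching pair, so the KL divergence collapses to the negative log-probability of the positive. The following NumPy sketch shows this reduced form; the function name and the multi-negative generalization are our own assumptions, and the paper's exact kernel may differ:

```python
import numpy as np

def sare_loss(q, p, negs):
    """Sketch of a SARE-style loss for one query q, positive p, negatives negs."""
    # Squared L2 distances between the query and the positive / each negative.
    d_pos = np.sum((q - p) ** 2)
    d_negs = np.sum((q - negs) ** 2, axis=1)
    # Learned distribution: softmax over negative squared distances.
    logits = -np.concatenate(([d_pos], d_negs))
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # The "actual" distribution is one-hot on the positive, so the KL
    # divergence reduces to the negative log-probability of the positive.
    return -np.log(probs[0])
```

Under this sketch, pulling the positive closer than the negatives drives the loss toward zero (attraction), while a nearby negative inflates it (repulsion).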
Learning Adaptive Representations for Image Retrieval and Recognition
Content-based image retrieval is a core problem in computer vision. It has a wide range of applications, such as object and place recognition, digital library search, organizing image collections, and 3D reconstruction. However, robust and accurate image retrieval from a large-scale image collection remains an open problem. For particular-instance retrieval, challenges come not only from photometric and geometric changes between the query and the database images, but also from severe visual overlap with irrelevant images. On the other hand, large intra-class variation and inter-class similarity between semantic categories represent a major obstacle in semantic image retrieval and recognition. This dissertation explores learning image representations that adaptively focus on specific image content to tackle these challenges. For this purpose, three kinds of image context for discriminating relevant and irrelevant image content are exploited: (1) local image context, (2) semi-global image context, and (3) global image context. Novel models for learning adaptive image representations based on each context are introduced. Moreover, as a byproduct of training the proposed models, the underlying task-relevant contexts are automatically revealed from the data in a self-supervised manner. These include a data-driven notion of good local mid-level features, task-relevant semi-global contexts with rich high-level information, and the hierarchy of images. Experimental evaluation illustrates the superiority of the proposed methods in the applications of place recognition, scene categorization, and particular-object retrieval.
Doctor of Philosophy
Component-based Attention for Large-scale Trademark Retrieval
The demand for large-scale trademark retrieval (TR) systems has significantly
increased to combat the rise in international trademark infringement.
Unfortunately, the ranking accuracy of current approaches using either
hand-crafted or pre-trained deep convolutional neural network (DCNN) features is
inadequate for large-scale deployments. We show in this paper that the ranking
accuracy of TR systems can be significantly improved by incorporating hard and
soft attention mechanisms, which direct attention to critical information such
as figurative elements and reduce attention given to distracting and
uninformative elements such as text and background. Our proposed approach
achieves state-of-the-art results on a challenging large-scale trademark
dataset.
Comment: Fix typos related to authors' information
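The soft-attention mechanism described above, which reweights convolutional features so that figurative elements dominate while text and background are suppressed, can be sketched as softmax-weighted spatial pooling. This is an illustrative sketch rather than the paper's architecture; the learned projection vector `w` and the function name are hypothetical:

```python
import numpy as np

def soft_attention_pool(feature_map, w):
    """Pool a (H, W, C) conv feature map into a (C,) descriptor via soft attention.

    feature_map: (H, W, C) activations from a DCNN backbone.
    w: (C,) hypothetical learned attention projection.
    """
    scores = feature_map @ w                   # (H, W) relevance per location
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over spatial positions
    # Weighted sum over the spatial grid: informative locations dominate.
    return np.tensordot(weights, feature_map, axes=([0, 1], [0, 1]))
```

A hard-attention variant would instead zero out the weights of low-scoring locations (e.g. detected text regions) before renormalizing.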
Mapping, Localization and Path Planning for Image-based Navigation using Visual Features and Map
Building on progress in feature representations for image retrieval,
image-based localization has seen a surge of research interest. Image-based
localization has the advantage of being inexpensive and efficient, often
avoiding the use of 3D metric maps altogether. Nonetheless, the need to
maintain a large number of reference images to support localization in a scene
calls for organizing them in a map structure of some kind.
The problem of localization often arises as part of a navigation process. We
are, therefore, interested in summarizing the reference images as a set of
landmarks, which meet the requirements for image-based navigation. A
contribution of this paper is to formulate such a set of requirements for the
two sub-tasks involved: map construction and self-localization. These
requirements are then exploited for compact map representation and accurate
self-localization, using the framework of a network flow problem. During this
process, we formulate the map construction and self-localization problems as
convex quadratic and second-order cone programs, respectively. We evaluate our
methods on publicly available indoor and outdoor datasets, where they
outperform existing methods significantly.
Comment: CVPR 2019, for implementation see https://github.com/janinethom
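To illustrate the network-flow framing, a toy landmark-selection problem can be posed as a minimum-cost flow linear program: route one unit of "coverage" from a source through candidate landmark images to a sink, with edge costs standing in for map compactness. This is a hypothetical toy, not the paper's convex quadratic or second-order cone programs; the node layout and costs are invented for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Node indices (hypothetical): 0 = source, 1..3 = candidate landmarks, 4 = sink.
edges = [(0, 1), (0, 2), (0, 3), (1, 4), (2, 4), (3, 4)]
cost = np.array([1.0, 2.0, 3.0, 0.0, 0.0, 0.0])  # cheaper landmarks preferred

n_nodes, n_edges = 5, len(edges)
# Flow-conservation constraints: net outflow per node equals b_eq.
A_eq = np.zeros((n_nodes, n_edges))
for j, (u, v) in enumerate(edges):
    A_eq[u, j] += 1.0   # flow leaves node u
    A_eq[v, j] -= 1.0   # flow enters node v
b_eq = np.array([1.0, 0.0, 0.0, 0.0, -1.0])  # source emits 1 unit, sink absorbs it

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * n_edges)
```

Here the optimizer routes the unit of flow through the cheapest landmark (node 1), mirroring how a flow formulation selects a compact subset of reference images.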