Hotels-50K: A Global Hotel Recognition Dataset
Recognizing a hotel from an image of a hotel room is important for human
trafficking investigations. Images directly link victims to places and can help
verify where victims have been trafficked, and where their traffickers might
move them or others in the future. Recognizing the hotel from images is
challenging because of low image quality, uncommon camera perspectives, large
occlusions (often the victim), and the similarity of objects (e.g., furniture,
art, bedding) across different hotel rooms.
To support efforts towards this hotel recognition task, we have curated a
dataset of over 1 million annotated hotel room images from 50,000 hotels. These
images include professionally captured photographs from travel websites and
crowd-sourced images from a mobile application, which are more similar to the
types of images analyzed in real-world investigations. We present a baseline
approach based on a standard network architecture and a collection of
data-augmentation approaches tuned to this problem domain.
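Since room images in investigations are frequently partially blocked by people, a natural augmentation for this domain is to train with synthetic occlusions. The sketch below is a minimal, assumed illustration of that idea (random rectangular blanking); the region shape, size range, and fill value are illustrative choices, not the dataset authors' exact recipe.

```python
import numpy as np

def random_occlusion(image, max_fraction=0.5, rng=None):
    """Blank out a random rectangular region of an H x W (x C) uint8 image.

    A stand-in for occlusion-style augmentation; parameters are illustrative.
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w = image.shape[:2]
    occ_h = rng.integers(1, int(h * max_fraction) + 1)
    occ_w = rng.integers(1, int(w * max_fraction) + 1)
    top = rng.integers(0, h - occ_h + 1)
    left = rng.integers(0, w - occ_w + 1)
    out = image.copy()
    out[top:top + occ_h, left:left + occ_w] = 0  # fill the occluded region with black
    return out
```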
Cross-View Image Matching for Geo-localization in Urban Environments
In this paper, we address the problem of cross-view image geo-localization.
Specifically, we aim to estimate the GPS location of a query street view image
by finding the matching images in a reference database of geo-tagged bird's eye
view images, or vice versa. To this end, we present a new framework for
cross-view image geo-localization by taking advantage of the tremendous success
of deep convolutional neural networks (CNNs) in image classification and object
detection. First, we employ the Faster R-CNN to detect buildings in the query
and reference images. Next, for each building in the query image, we retrieve
the nearest neighbors from the reference buildings using a Siamese network
trained on both positive matching image pairs and negative pairs. To find the
correct nearest neighbor for each query building, we develop an efficient multiple nearest
neighbors matching method based on dominant sets. We evaluate the proposed
framework on a new dataset that consists of pairs of street view and bird's eye
view images. Experimental results show that the proposed method achieves better
geo-localization accuracy than other approaches and is able to generalize to
images at unseen locations.
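The retrieval step in this pipeline amounts to embedding each detected building with the Siamese network and looking up its nearest neighbors among the reference buildings. The sketch below shows only that lookup with cosine similarity over precomputed embeddings; the Siamese network itself and the dominant-sets matching step are assumed to exist elsewhere and are not shown.

```python
import numpy as np

def nearest_reference_buildings(query_embs, ref_embs, k=5):
    """Return, for each query building embedding, the indices of the k most
    similar reference building embeddings under cosine similarity.

    Illustrative only: embeddings are assumed to come from a trained
    Siamese network.
    """
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    r = ref_embs / np.linalg.norm(ref_embs, axis=1, keepdims=True)
    sims = q @ r.T                          # cosine similarity matrix
    return np.argsort(-sims, axis=1)[:, :k]  # top-k neighbors per query
```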
Improving Image Classification with Location Context
With the widespread availability of cellphones and cameras that have GPS
capabilities, it is common for images being uploaded to the Internet today to
have GPS coordinates associated with them. In addition to research that tries
to predict GPS coordinates from visual features, this also opens up the door to
problems that are conditioned on the availability of GPS coordinates. In this
work, we tackle the problem of performing image classification with location
context, in which we are given the GPS coordinates for images in both the train
and test phases. We explore different ways of encoding and extracting features
from the GPS coordinates, and show how to naturally incorporate these features
into a Convolutional Neural Network (CNN), the current state-of-the-art for
most image classification and recognition problems. We also show how it is
possible to simultaneously learn the optimal pooling radii for a subset of our
features within the CNN framework. To evaluate our model and to help promote
research in this area, we identify a set of location-sensitive concepts and
annotate a subset of the Yahoo Flickr Creative Commons 100M dataset that has
GPS coordinates with these concepts, which we make publicly available. By
leveraging location context, we are able to achieve a gain of almost 7% in mean
average precision.
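One simple way to condition a classifier on GPS coordinates is to encode the latitude and longitude, pass them through a small learned layer, and concatenate the result with the image features before the final classifier. The module below is a minimal sketch of that fusion idea, assuming PyTorch; the feature dimensions, the raw lat/lon encoding, and the two-layer head are assumptions, not the authors' architecture or learned pooling radii.

```python
import torch
import torch.nn as nn

class ImageWithLocation(nn.Module):
    """Fuse CNN image features with a simple GPS encoding by concatenation."""

    def __init__(self, image_dim=2048, loc_dim=2, num_classes=100):
        super().__init__()
        self.loc_encoder = nn.Sequential(nn.Linear(loc_dim, 64), nn.ReLU())
        self.classifier = nn.Linear(image_dim + 64, num_classes)

    def forward(self, image_feats, latlon):
        loc_feats = self.loc_encoder(latlon)          # encode GPS coordinates
        fused = torch.cat([image_feats, loc_feats], dim=1)
        return self.classifier(fused)                 # class scores
```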
PlaNet - Photo Geolocation with Convolutional Neural Networks
Is it possible to build a system to determine the location where a photo was
taken using just its pixels? In general, the problem seems exceptionally
difficult: it is trivial to construct situations where no location can be
inferred. Yet images often contain informative cues such as landmarks, weather
patterns, vegetation, road markings, and architectural details, which in
combination may allow one to determine an approximate location and occasionally
an exact location. Websites such as GeoGuessr and View from your Window suggest
that humans are relatively good at integrating these cues to geolocate images,
especially en masse. In computer vision, the photo geolocation problem is
usually approached using image retrieval methods. In contrast, we pose the
problem as one of classification by subdividing the surface of the earth into
thousands of multi-scale geographic cells, and train a deep network using
millions of geotagged images. While previous approaches only recognize
landmarks or perform approximate matching using global image descriptors, our
model is able to use and integrate multiple visible cues. We show that the
resulting model, called PlaNet, outperforms previous approaches and even
attains superhuman levels of accuracy in some cases. Moreover, we extend our
model to photo albums by combining it with a long short-term memory (LSTM)
architecture. By learning to exploit temporal coherence to geolocate uncertain
photos, we demonstrate that this model achieves a 50% performance improvement
over the single-image model.
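The core reformulation here is to treat geolocation as classification over discrete geographic cells: each training photo's coordinates are mapped to a cell index, and the network predicts that index. PlaNet uses adaptive multi-scale cells; the sketch below substitutes a uniform latitude/longitude grid purely to illustrate the label construction, so the binning scheme is an assumption, not the paper's partitioning.

```python
import numpy as np

def latlon_to_cell(lat, lon, lat_bins=90, lon_bins=180):
    """Map a latitude/longitude pair to a class index on a uniform grid.

    Simplified stand-in for multi-scale geographic cells.
    """
    row = min(int((lat + 90.0) / 180.0 * lat_bins), lat_bins - 1)
    col = min(int((lon + 180.0) / 360.0 * lon_bins), lon_bins - 1)
    return row * lon_bins + col

# A network trained on geotagged photos predicts this cell index, and the
# predicted cell's center then serves as the estimated location.
```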
- …