Efficient On-the-fly Category Retrieval using ConvNets and GPUs
We investigate the gains in precision and speed that can be obtained by
using Convolutional Networks (ConvNets) for on-the-fly retrieval, where
classifiers are learnt at run time for a textual query from downloaded images
and used to rank large image or video datasets.
We make three contributions: (i) we present an evaluation of state-of-the-art
image representations for object category retrieval over standard benchmark
datasets containing 1M+ images; (ii) we show that ConvNets can be used to
obtain features that are highly performant yet much lower dimensional than
previous state-of-the-art image representations, and that their
dimensionality can be reduced further, without loss of performance, by
compression using product quantization or binarization. Consequently, features
with state-of-the-art performance on large-scale datasets of millions of
images can fit in the memory of even a commodity GPU card; (iii) we show that
an SVM classifier can be learnt within a ConvNet framework on a GPU in parallel
with downloading the new training images, allowing for a continuous refinement
of the model as more images become available, and simultaneous training and
ranking. The outcome is an on-the-fly system that significantly outperforms its
predecessors in terms of retrieval precision, memory requirements, and speed,
enabling accurate on-the-fly learning and ranking in under a second on a single GPU.
Comment: Published in the proceedings of ACCV 2014
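As a concrete illustration of the compression step in contribution (ii), the sketch below applies standard product quantization to ConvNet descriptors: the descriptor is split into subvectors, each quantized against its own k-means codebook, so a float vector collapses to a few bytes of codebook indices. This is a generic PQ baseline under assumed dimensions, not the authors' implementation.

```python
# Minimal product-quantization sketch (illustrative; dimensions are assumptions).
import numpy as np
from sklearn.cluster import KMeans

def pq_train(features, num_subvectors=8, num_centroids=256):
    """Learn one k-means codebook per subvector of the descriptor."""
    n, d = features.shape
    assert d % num_subvectors == 0
    sub_dim = d // num_subvectors
    codebooks = []
    for i in range(num_subvectors):
        sub = features[:, i * sub_dim:(i + 1) * sub_dim]
        km = KMeans(n_clusters=num_centroids, n_init=4).fit(sub)
        codebooks.append(km.cluster_centers_)
    return codebooks

def pq_encode(features, codebooks):
    """Replace each subvector by the index of its nearest centroid,
    so a float32 descriptor becomes num_subvectors bytes."""
    n, d = features.shape
    sub_dim = d // len(codebooks)
    codes = np.empty((n, len(codebooks)), dtype=np.uint8)
    for i, cb in enumerate(codebooks):
        sub = features[:, i * sub_dim:(i + 1) * sub_dim]
        dists = ((sub[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        codes[:, i] = dists.argmin(1)
    return codes

# Example: 128-D features -> 8 bytes each (64x smaller than float32).
feats = np.random.randn(2000, 128).astype(np.float32)
codes = pq_encode(feats, pq_train(feats))
```

With 8 codes of one byte each, a million descriptors occupy only 8 MB, which is consistent with the claim that compressed features for million-image datasets fit in commodity GPU memory.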
OnionNet: Sharing Features in Cascaded Deep Classifiers
The focus of our work is speeding up evaluation of deep neural networks in
retrieval scenarios, where conventional architectures may spend too much time
on negative examples. We propose to replace a monolithic network with our novel
cascade of feature-sharing deep classifiers, called OnionNet, where subsequent
stages may add both new layers as well as new feature channels to the previous
ones. Importantly, intermediate feature maps are shared among classifiers,
so they do not need to be recomputed. To accomplish this, the
model is trained end-to-end in a principled way under a joint loss. We validate
our approach in theory and on a synthetic benchmark. As demonstrated
in three applications (patch matching, object detection, and image retrieval),
our cascade operates significantly faster than both monolithic networks and
traditional cascades without feature sharing, at the cost of only a marginal decrease in
precision.
Comment: Accepted to BMVC 2017
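To make the feature-sharing idea concrete, here is a minimal two-stage cascade in PyTorch: the first stage's features are computed once and reused by the second stage, which adds new layers on top, and both heads are trained under a joint loss. The layer sizes, the early-exit threshold, and the per-stage heads are assumptions for illustration, not the OnionNet architecture itself.

```python
import torch
import torch.nn as nn

class TwoStageCascade(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(            # stage-1 features, computed once
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head1 = nn.Linear(32, 1)          # cheap early classifier
        self.extra = nn.Sequential(            # stage 2: new layers on shared features
            nn.Linear(32, 64), nn.ReLU())
        self.head2 = nn.Linear(64, 1)          # refined classifier

    def forward(self, x):
        f1 = self.trunk(x)                     # shared, never recomputed
        return self.head1(f1), self.head2(self.extra(f1))

def joint_loss(s1, s2, y):
    """Train all stages end-to-end under one joint loss."""
    bce = nn.functional.binary_cross_entropy_with_logits
    return bce(s1.squeeze(1), y) + bce(s2.squeeze(1), y)

@torch.no_grad()
def cascade_predict(model, x, threshold=0.0):
    """Early-reject likely negatives after stage 1; only survivors
    pay for stage 2, reusing the already-computed features."""
    f1 = model.trunk(x)
    s1 = model.head1(f1).squeeze(1)
    scores = torch.full_like(s1, float('-inf'))
    keep = s1 > threshold
    if keep.any():
        scores[keep] = model.head2(model.extra(f1[keep])).squeeze(1)
    return scores
```

In a retrieval setting dominated by negatives, most inputs stop after the cheap first head, which is where the speedup over a monolithic network comes from.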
SUBIC: A supervised, structured binary code for image search
For large-scale visual search, highly compressed yet meaningful
representations of images are essential. Structured vector quantizers based on
product quantization and its variants are usually employed to achieve such
compression while minimizing the loss of accuracy. Yet, unlike binary hashing
schemes, these unsupervised methods have not yet benefited from the
supervision, end-to-end learning and novel architectures ushered in by the deep
learning revolution. We hence propose a novel method to make deep
convolutional neural networks produce supervised, compact, structured binary
codes for visual search. Our method makes use of a novel block-softmax
non-linearity and of batch-based entropy losses that together induce structure
in the learned encodings. We show that our method outperforms state-of-the-art
compact representations based on deep hashing or structured quantization in
single and cross-domain category retrieval, instance retrieval and
classification. We make our code and models publicly available online.
Comment: Accepted at ICCV 2017 (Spotlight)
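A rough sketch of the block-softmax idea follows: the feature vector is split into M blocks of K entries, each block is softmax-normalized during training and one-hot quantized at test time, yielding a structured binary code with exactly one active bit per block. The entropy terms below are generic stand-ins for the paper's batch-based losses; dimensions and weightings are assumptions.

```python
import torch
import torch.nn.functional as F

def block_softmax(z, num_blocks):
    """Train-time relaxation: softmax over each block independently."""
    b, d = z.shape
    z = z.view(b, num_blocks, d // num_blocks)
    return F.softmax(z, dim=-1).view(b, d)

def binarize(z, num_blocks):
    """Test-time code: one-hot per block -> M active bits out of d."""
    b, d = z.shape
    z = z.view(b, num_blocks, d // num_blocks)
    code = F.one_hot(z.argmax(-1), d // num_blocks)
    return code.view(b, d).to(torch.uint8)

def entropy_losses(p, num_blocks, eps=1e-8):
    """Push each sample's block distribution toward one-hot (low entropy)
    while spreading codes across the batch (high entropy of the mean)."""
    b, d = p.shape
    p = p.view(b, num_blocks, d // num_blocks)
    per_sample = -(p * (p + eps).log()).sum(-1).mean()
    mean_p = p.mean(0)
    batch = -(mean_p * (mean_p + eps).log()).sum(-1).mean()
    return per_sample - batch
```

The one-hot structure is what makes the code both compact (one index per block, as in product quantization) and directly usable for fast lookup-based search.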
Compact Deep Aggregation for Set Retrieval
The objective of this work is to learn a compact embedding of a set of
descriptors that is suitable for efficient retrieval and ranking, whilst
maintaining discriminability of the individual descriptors. We focus on a
specific example of this general problem -- that of retrieving images
containing multiple faces from a large scale dataset of images. Here the set
consists of the face descriptors in each image, and given a query for multiple
identities, the goal is then to retrieve, in order, images which contain all
the identities, all but one, etc.
To this end, we make the following contributions: first, we propose a CNN
architecture, SetNet, to achieve the objective: it learns face
descriptors and their aggregation over a set to produce a compact fixed length
descriptor designed for set retrieval, and the score of an image is a count of
the number of identities that match the query; second, we show that this
compact descriptor has minimal loss of discriminability up to two faces per
image, and degrades slowly after that, far exceeding a number of baselines;
third, we explore the speed vs. retrieval quality trade-off for set retrieval
using this compact descriptor; and, finally, we collect and annotate a large
dataset of images containing various numbers of celebrities, which we use for
evaluation and which is publicly released.
Comment: 20 pages
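The scoring rule can be sketched as follows: the face descriptors in an image are aggregated into one compact set descriptor, and a multi-identity query scores an image by counting how many of its identities match, so images containing all query identities rank above those containing all but one, and so on. The sum-pooling aggregation and the similarity threshold below are simplifying assumptions, not SetNet itself.

```python
import numpy as np

def aggregate(face_descriptors):
    """Sum-pool L2-normalized face descriptors into one fixed-length
    set descriptor, then re-normalize (a common baseline aggregation)."""
    d = face_descriptors / np.linalg.norm(face_descriptors, axis=1, keepdims=True)
    agg = d.sum(0)
    return agg / np.linalg.norm(agg)

def score(query_descriptors, set_descriptor, threshold=0.3):
    """Count how many query identities match the set descriptor;
    images are ranked by this count."""
    q = query_descriptors / np.linalg.norm(query_descriptors, axis=1, keepdims=True)
    sims = q @ set_descriptor
    return int((sims > threshold).sum())
```

Because each image is reduced to a single fixed-length vector, ranking a large dataset amounts to one matrix-vector product per query identity, which is the efficiency the compact embedding is designed to buy.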