Learning Cross-Modal Deep Embeddings for Multi-Object Image Retrieval using Text and Sketch
In this work we introduce a cross modal image retrieval system that allows
both text and sketch as input modalities for the query. A cross-modal deep
network architecture is formulated to jointly model the sketch and text input
modalities as well as the image output modality, learning a common
embedding between text and images and between sketches and images. In addition,
an attention model is used to selectively focus the attention on the different
objects of the image, allowing for retrieval with multiple objects in the
query. Experiments show that the proposed method achieves the best performance
on both single- and multiple-object image retrieval on standard datasets.
Comment: Accepted at ICPR 2018
Doodle to Search: Practical Zero-Shot Sketch-based Image Retrieval
In this paper, we investigate the problem of zero-shot sketch-based image
retrieval (ZS-SBIR), where human sketches are used as queries to conduct
retrieval of photos from unseen categories. We advance prior art
by proposing a novel ZS-SBIR scenario that represents a firm step forward in
its practical application. The new setting uniquely recognizes two important
yet often neglected challenges of practical ZS-SBIR: (i) the large domain gap
between amateur sketch and photo, and (ii) the necessity for moving towards
large-scale retrieval. We first contribute to the community a novel ZS-SBIR
dataset, QuickDraw-Extended, that consists of 330,000 sketches and 204,000
photos spanning 110 categories. Highly abstract amateur human sketches are
purposefully sourced to maximize the domain gap, unlike the often
semi-photorealistic sketches included in existing datasets. We then formulate a
ZS-SBIR framework to jointly model sketches and photos into a common embedding
space. A novel strategy to mine the mutual information among domains is
specifically engineered to alleviate the domain gap. External semantic
knowledge is further embedded to aid semantic transfer. We show that, rather
surprisingly, retrieval performance that significantly outperforms the state of
the art on existing datasets can already be achieved using a reduced version of
our model. We further demonstrate the superior performance
of our full model by comparing with a number of alternatives on the newly
proposed dataset. The new dataset, plus all training and testing code of our
model, will be publicly released to facilitate future research.
Comment: Oral paper at CVPR 2019
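The two zero-shot ingredients named above can be made concrete with a hedged sketch: a ranking loss across the sketch-photo gap plus a regression toward external word vectors for semantic transfer. The loss forms, shapes, and weighting below are illustrative assumptions, not the paper's exact objectives.

```python
# Illustrative ZS-SBIR-style loss; all design choices here are assumptions.
import torch
import torch.nn.functional as F

def zs_sbir_loss(sketch_emb, photo_emb, word_vec, margin=0.2):
    """sketch_emb, photo_emb: (B, D) embeddings of matching sketch-photo pairs.
    word_vec: (B, D) word embedding of each pair's class label."""
    sim = sketch_emb @ photo_emb.t()            # (B, B) pairwise similarities
    pos = sim.diag().unsqueeze(1)               # similarity of the true pairs
    hinge = F.relu(margin + sim - pos)          # violated ranking constraints
    mask = 1.0 - torch.eye(sim.size(0), device=sim.device)
    rank = (hinge * mask).sum() / mask.sum()    # average over negatives only
    # Regressing both domains toward shared word vectors is what lets the
    # embedding generalize to categories never seen during training.
    sem = F.mse_loss(sketch_emb, word_vec) + F.mse_loss(photo_emb, word_vec)
    return rank + sem

loss = zs_sbir_loss(torch.randn(8, 300), torch.randn(8, 300), torch.randn(8, 300))
```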
Asymmetric Feature Maps with Application to Sketch Based Retrieval
We propose the novel concept of asymmetric feature maps (AFM), which allows
multiple kernels between a query and database entries to be evaluated without
increasing the memory requirements. To demonstrate the advantages of the AFM
method, we derive a short vector image representation that, due to asymmetric
feature maps, supports efficient scale and translation invariant sketch-based
image retrieval. Unlike most short-code based retrieval systems, the proposed
method provides localization of the query within the retrieved image. The
efficiency of the search is boosted by approximating the 2D translation search,
performed via a trigonometric polynomial of scores, with 1D projections; these
projections are a special case of AFM. An order-of-magnitude speed-up is
achieved compared to traditional trigonometric polynomials. The results are
further boosted by an image-based average query expansion, significantly
exceeding the state of the art on standard benchmarks.
Comment: CVPR 2017
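To illustrate the asymmetry, consider a toy reconstruction of the idea for a 1D shift-invariant kernel: a trigonometric polynomial gives feature maps with k(x - y) ~ psi(x) . phi(y), where the kernel-specific coefficients live only on the query side. The construction below is my own illustration under that assumption, not the paper's implementation.

```python
# Toy asymmetric feature maps for k(x - y) = a0 + sum_m a_m cos(m (x - y)).
import numpy as np

freqs = np.arange(1, 6)                      # harmonics of the polynomial

def db_map(y):
    """Kernel-independent database features (stored once per entry)."""
    return np.concatenate([[1.0], np.cos(freqs * y), np.sin(freqs * y)])

def query_map(x, coef):
    """Query features carrying the kernel's Fourier coefficients `coef`."""
    return np.concatenate([[coef[0]],
                           coef[1:] * np.cos(freqs * x),
                           coef[1:] * np.sin(freqs * x)])

m = np.concatenate([[0.0], freqs])
wide = np.exp(-0.5 * m ** 2 * 0.5)           # coefficients of a wide kernel
narrow = np.exp(-0.5 * m ** 2 * 0.1)         # ... and of a narrower one

x, y = 0.3, 0.5
# One stored db_map(y) supports both kernels; only the query side changes,
# which is exactly the memory saving the asymmetry buys.
print(query_map(x, wide) @ db_map(y), query_map(x, narrow) @ db_map(y))
```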
Deep Sketch Hashing: Fast Free-hand Sketch-Based Image Retrieval
Free-hand sketch-based image retrieval (SBIR) is a specific cross-view
retrieval task, in which queries are abstract and ambiguous sketches while the
retrieval database is formed with natural images. Work in this area mainly
focuses on extracting representative and shared features for sketches and
natural images. However, such features can neither cope well with the geometric
distortion between sketches and images nor scale to large SBIR databases, owing
to the heavy continuous-valued distance computation. In this paper, we speed up
SBIR by introducing a novel binary coding method, named \textbf{Deep Sketch
Hashing} (DSH), where a semi-heterogeneous deep architecture is proposed and
incorporated into an end-to-end binary coding framework. Specifically, three
convolutional neural networks are utilized to encode free-hand sketches,
natural images and, especially, the auxiliary sketch-tokens which are adopted
as bridges to mitigate the sketch-image geometric distortion. The learned DSH
codes can effectively capture the cross-view similarities as well as the
intrinsic semantic correlations between different categories. To the best of
our knowledge, DSH is the first hashing work specifically designed for
category-level SBIR with an end-to-end deep architecture. The proposed DSH is
comprehensively evaluated on two large-scale datasets of TU-Berlin Extension
and Sketchy, and the experiments consistently show DSH's superior SBIR
accuracies over several state-of-the-art methods, while achieving significantly
reduced retrieval time and memory footprint.
Comment: Spotlight paper at CVPR 2017
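The speed and memory gains come from replacing continuous-valued distances with Hamming matching over binary codes. A minimal numpy sketch of that retrieval step (not of DSH itself; the code length and sizes are illustrative):

```python
# Hamming-distance retrieval over bit-packed binary codes.
import numpy as np

rng = np.random.default_rng(0)
db = rng.integers(0, 2, size=(100000, 64), dtype=np.uint8)  # database codes
q = rng.integers(0, 2, size=64, dtype=np.uint8)             # query code

db_packed = np.packbits(db, axis=1)      # 64 bits -> 8 bytes per entry
q_packed = np.packbits(q)

# XOR marks disagreeing bits; counting them gives the Hamming distance.
dist = np.unpackbits(db_packed ^ q_packed, axis=1).sum(axis=1)
top10 = np.argsort(dist)[:10]            # nearest codes in Hamming space
```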
Deep Shape Matching
We cast shape matching as metric learning with convolutional networks. We
break the end-to-end process of image representation into two parts. Firstly,
well-established, efficient methods are chosen to turn the images into edge
maps. Secondly, the network is trained with edge maps of landmark images, which
are automatically obtained by a structure-from-motion pipeline. The learned
representation is evaluated on a range of different tasks, providing
improvements on challenging cases of domain generalization, generic
sketch-based image retrieval or its fine-grained counterpart. In contrast to
other methods that learn a different model per task, object category, or
domain, we use the same network throughout all our experiments, achieving
state-of-the-art results on multiple benchmarks.
Comment: ECCV 2018
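The two-stage pipeline reads as: reduce every input to an edge map, then embed edge maps with a single network. A hedged sketch of that structure follows (assuming OpenCV for the edge detector; the encoder is a placeholder, not the paper's architecture):

```python
# Edge maps as the common domain, then one embedding network for everything.
import cv2
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

def to_edge_map(img_gray):
    """Off-the-shelf edge detector maps any photo into the common domain."""
    return cv2.Canny(img_gray, 100, 200).astype(np.float32) / 255.0

embed = nn.Sequential(                      # placeholder edge-map encoder
    nn.Conv2d(1, 16, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())

def describe(img_gray):
    e = torch.from_numpy(to_edge_map(img_gray))[None, None]  # (1, 1, H, W)
    return F.normalize(embed(e), dim=-1)

photo = (np.random.rand(224, 224) * 255).astype(np.uint8)
sketch = (np.random.rand(224, 224) * 255).astype(np.uint8)
score = describe(photo) @ describe(sketch).t()   # cosine similarity
```

Sketches can skip the edge-detection step, since they are already close to edge maps; that is what makes a single shared network plausible across domains.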
Zero-Shot Sketch-Image Hashing
Recent studies show that large-scale sketch-based image retrieval (SBIR) can
be efficiently tackled by cross-modal binary representation learning methods,
where Hamming distance matching significantly speeds up the process of
similarity search. Provided that training and test data are subject to a fixed
set of pre-defined categories, the cutting-edge SBIR and cross-modal hashing
methods obtain acceptable retrieval performance. However, most of the existing methods
fail when the categories of query sketches have never been seen during
training. In this paper, the above problem is formalized as a novel but
realistic zero-shot SBIR hashing task. We elaborate on the challenges of this
special task
and accordingly propose a zero-shot sketch-image hashing (ZSIH) model. An
end-to-end three-network architecture is built, two of which are treated as the
binary encoders. The third network mitigates the sketch-image heterogeneity and
enhances the semantic relations among data by utilizing the Kronecker fusion
layer and graph convolution, respectively. As an important part of ZSIH, we
formulate a generative hashing scheme in reconstructing semantic knowledge
representations for zero-shot retrieval. To the best of our knowledge, ZSIH is
the first zero-shot hashing work suitable for SBIR and cross-modal search.
Comprehensive experiments are conducted on two extended datasets, i.e., Sketchy
and TU-Berlin, with a novel zero-shot train-test split. The proposed model
remarkably outperforms related works.
Comment: Accepted as a spotlight at CVPR 2018
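Of the components named above, the Kronecker fusion layer is easy to sketch: the outer (Kronecker) product of the two modality features, followed by a projection. The dimensions and projection below are illustrative assumptions, not ZSIH's exact layer.

```python
# Hypothetical Kronecker fusion of sketch and image features.
import torch
import torch.nn as nn

class KroneckerFusion(nn.Module):
    def __init__(self, d_sketch, d_image, d_out):
        super().__init__()
        self.proj = nn.Linear(d_sketch * d_image, d_out)

    def forward(self, s, x):
        # Batched Kronecker product: every sketch dimension interacts with
        # every image dimension, capturing cross-modal correlations.
        k = torch.einsum('bi,bj->bij', s, x).flatten(1)
        return self.proj(k)

fuse = KroneckerFusion(64, 64, 128)
out = fuse(torch.randn(8, 64), torch.randn(8, 64))   # (8, 128)
```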
Cross-Paced Representation Learning with Partial Curricula for Sketch-based Image Retrieval
In this paper we address the problem of learning robust cross-domain
representations for sketch-based image retrieval (SBIR). While most SBIR
approaches focus on extracting low- and mid-level descriptors for direct
feature matching, recent works have shown the benefit of learning coupled
feature representations to describe data from two related sources. However,
cross-domain representation learning methods are typically cast into non-convex
minimization problems that are difficult to optimize, leading to unsatisfactory
performance. Inspired by self-paced learning, a learning methodology designed
to overcome convergence issues related to local optima by processing samples
in a meaningful order (i.e. from easy to hard), we introduce the cross-paced
partial curriculum learning (CPPCL) framework. Compared with existing
self-paced learning methods which only consider a single modality and cannot
deal with prior knowledge, CPPCL is specifically designed to assess the
learning pace by jointly handling data from dual sources and modality-specific
prior information provided in the form of partial curricula. Additionally,
thanks to the learned dictionaries, we demonstrate that the proposed CPPCL
embeds robust coupled representations for SBIR. Our approach is extensively
evaluated on four publicly available datasets (i.e. CUFS, Flickr15K, QueenMary
SBIR and TU-Berlin Extension datasets), showing superior performance over
competing SBIR methods.
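The self-paced scheme the framework builds on can be shown in a few lines: binary sample weights admit only "easy enough" samples, with the pace parameter annealed so harder samples enter over time. This is the classic hard-threshold variant; the paper's cross-paced, curriculum-aware rule is richer.

```python
# Classic self-paced learning weights (easy-to-hard sample selection).
import numpy as np

def self_paced_weights(losses, lam):
    """Binary weights: only samples with loss below the pace `lam` train."""
    return (losses < lam).astype(np.float32)

losses = np.array([0.1, 0.8, 0.3, 1.5, 0.05])
lam = 0.2                                  # initial pace parameter
for epoch in range(3):
    w = self_paced_weights(losses, lam)
    print(f"lam={lam:.2f} uses samples {np.flatnonzero(w)}")
    lam *= 2.0                             # anneal: admit harder samples over time
```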
Open Cross-Domain Visual Search
This paper addresses cross-domain visual search, where visual queries
retrieve category samples from a different domain. For example, we may want to
sketch an airplane and retrieve photographs of airplanes. Despite considerable
progress, the search occurs in a closed setting between two pre-defined
domains. In this paper, we make the step towards an open setting where multiple
visual domains are available. This notably translates into a search between any
pair of domains, from a combination of domains or within multiple domains. We
introduce a simple -- yet effective -- approach. We formulate the search as a
mapping from every visual domain to a common semantic space, where categories
are represented by hyperspherical prototypes. Open cross-domain visual search
is then performed by searching in the common semantic space, regardless of
which domains are used as source or target. Domains are combined in the common
space to search from or within multiple domains simultaneously. A separate
training of every domain-specific mapping function enables an efficient scaling
to any number of domains without affecting the search performance. We
empirically illustrate our capability to perform open cross-domain visual
search in three different scenarios. Our approach is competitive with respect
to existing closed settings, where we obtain state-of-the-art results on
several benchmarks for three sketch-based search tasks.
Comment: Accepted at Computer Vision and Image Understanding (CVIU)
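The prototype formulation admits a short sketch: fixed unit-norm class prototypes on a hypersphere, with each domain mapped onto the same sphere by its own, separately trained function. Everything below (random prototypes, linear mappings, sizes) is an illustrative assumption, not the paper's construction.

```python
# Toy open cross-domain search via hyperspherical class prototypes.
import numpy as np

rng = np.random.default_rng(0)
C, D = 10, 128
prototypes = rng.normal(size=(C, D))
prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)  # one per class

def embed(features, W):
    """Placeholder domain-specific mapping, projected onto the unit sphere."""
    z = features @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

W_sketch = rng.normal(size=(64, D))        # trained independently per domain,
W_photo = rng.normal(size=(256, D))        # so adding a domain scales for free

query = embed(rng.normal(size=(1, 64)), W_sketch)      # sketch query
gallery = embed(rng.normal(size=(500, 256)), W_photo)  # photo gallery

q_class = (query @ prototypes.T).argmax()              # nearest prototype
ranking = (query @ gallery.T).ravel().argsort()[::-1]  # cross-domain retrieval
```

Because every domain lands on the same sphere, any source/target pairing, or a combination of domains, reduces to the same cosine-similarity search.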