1,486 research outputs found
Open Cross-Domain Visual Search
This paper addresses cross-domain visual search, where visual queries
retrieve category samples from a different domain. For example, we may want to
sketch an airplane and retrieve photographs of airplanes. Despite considerable
progress, the search occurs in a closed setting between two pre-defined
domains. In this paper, we take a step towards an open setting where multiple
visual domains are available. This notably translates into a search between any
pair of domains, from a combination of domains or within multiple domains. We
introduce a simple -- yet effective -- approach. We formulate the search as a
mapping from every visual domain to a common semantic space, where categories
are represented by hyperspherical prototypes. Open cross-domain visual search
is then performed by searching in the common semantic space, regardless of
which domains are used as source or target. Domains are combined in the common
space to search from or within multiple domains simultaneously. A separate
training of every domain-specific mapping function enables an efficient scaling
to any number of domains without affecting the search performance. We
empirically illustrate our capability to perform open cross-domain visual
search in three different scenarios. Our approach is competitive with respect
to existing closed settings, where we obtain state-of-the-art results on
several benchmarks for three sketch-based search tasks. Comment: Accepted at Computer Vision and Image Understanding (CVIU).
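For intuition, a minimal sketch of the hyperspherical-prototype search idea, assuming PyTorch, one encoder per domain, and fixed unit-norm class prototypes; the function names and the cosine-based objective are illustrative assumptions, not the paper's exact formulation:

```python
import torch.nn.functional as F

def embed(encoder, x):
    """Map inputs from one domain into the common semantic space (unit hypersphere)."""
    return F.normalize(encoder(x), dim=-1)

def prototype_loss(z, labels, prototypes):
    """Assumed training objective: pull each embedding towards its fixed
    hyperspherical class prototype (cosine distance)."""
    return (1.0 - (z * prototypes[labels]).sum(dim=-1)).mean()

def cross_domain_search(query_z, gallery_z, k=10):
    """Rank gallery embeddings (from any domain) by cosine similarity to the query."""
    scores = query_z @ gallery_z.t()   # both sides are already L2-normalised
    return scores.topk(k, dim=-1).indices
    # e.g. cross_domain_search(embed(sketch_enc, sketches), embed(photo_enc, photos))
```

Because every encoder is trained against the same fixed prototypes, adding a new domain only requires training its own mapping function, which is what allows scaling to more domains without affecting the search performance.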
Sketch-an-Anchor: Sub-epoch Fast Model Adaptation for Zero-shot Sketch-based Image Retrieval
Sketch-an-Anchor is a novel method to train state-of-the-art Zero-shot
Sketch-based Image Retrieval (ZSSBIR) models in under an epoch. Most studies
break down the problem of ZSSBIR into two parts: domain alignment between
images and sketches, inherited from SBIR, and generalization to unseen data,
inherent to the zero-shot protocol. We argue one of these problems can be
considerably simplified and re-frame the ZSSBIR problem around the
already-stellar yet underexplored Zero-shot Image-based Retrieval performance
of off-the-shelf models. Our fast-converging model keeps the single-domain
performance while learning to extract similar representations from sketches. To
this end we introduce our Semantic Anchors -- guiding embeddings learned from
word-based semantic spaces and features from off-the-shelf models -- and
combine them with our novel Anchored Contrastive Loss. Empirical evidence shows
we can achieve state-of-the-art performance on all benchmark datasets while
training for 100x fewer iterations than other methods.
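A minimal sketch of how an anchored contrastive objective of this kind could look, assuming fixed per-class anchor embeddings built offline; the function name, temperature, and cross-entropy form are illustrative assumptions rather than the authors' exact loss:

```python
import torch.nn.functional as F

def anchored_contrastive_loss(sketch_emb, labels, anchors, temperature=0.07):
    """sketch_emb: (B, D) sketch features, labels: (B,), anchors: (C, D) fixed class anchors."""
    z = F.normalize(sketch_emb, dim=-1)
    a = F.normalize(anchors, dim=-1)
    logits = z @ a.t() / temperature   # similarity of each sketch to every class anchor
    # The anchor of the true class acts as the positive; all other anchors are negatives.
    return F.cross_entropy(logits, labels)
```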
Progressive Domain-Independent Feature Decomposition Network for Zero-Shot Sketch-Based Image Retrieval
Zero-shot sketch-based image retrieval (ZS-SBIR) is a specific cross-modal
retrieval task for searching natural images given free-hand sketches under the
zero-shot scenario. Most existing methods solve this problem by simultaneously
projecting visual features and semantic supervision into a low-dimensional
common space for efficient retrieval. However, such a low-dimensional
projection destroys the completeness of the semantic knowledge in the original
semantic space, making it unable to transfer useful knowledge well when
learning semantics from different modalities. Moreover, domain information and
semantic information are entangled in visual features, which is not conducive
to cross-modal matching since it hinders the reduction of the domain gap
between sketches and images. In this paper, we propose a Progressive
Domain-independent Feature
Decomposition (PDFD) network for ZS-SBIR. Specifically, with the supervision of
original semantic knowledge, PDFD decomposes visual features into domain
features and semantic ones, and then the semantic features are projected into
common space as retrieval features for ZS-SBIR. The progressive projection
strategy maintains strong semantic supervision. Besides, to guarantee that the
retrieval features capture clean and complete semantic information, a
cross-reconstruction loss is introduced to encourage any combination of
retrieval features and domain features to reconstruct the visual features.
Extensive experiments demonstrate the superiority of our PDFD over
state-of-the-art competitors.
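The decomposition and cross-reconstruction idea can be sketched roughly as follows, assuming two same-category features from different samples and simple linear heads; the module names and the pairing convention are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class Decomposer(nn.Module):
    def __init__(self, dim, sem_dim, dom_dim):
        super().__init__()
        self.sem_head = nn.Linear(dim, sem_dim)   # semantic (retrieval) features
        self.dom_head = nn.Linear(dim, dom_dim)   # domain features
        self.decoder = nn.Linear(sem_dim + dom_dim, dim)

    def forward(self, feat):
        return self.sem_head(feat), self.dom_head(feat)

def cross_reconstruction_loss(model, feat_a, feat_b):
    """feat_a, feat_b: visual features of two same-category samples (assumption)."""
    sem_a, dom_a = model(feat_a)
    sem_b, dom_b = model(feat_b)
    mse = nn.functional.mse_loss
    # Straight and crossed (semantic, domain) pairings should both rebuild the originals.
    return (mse(model.decoder(torch.cat([sem_a, dom_a], -1)), feat_a) +
            mse(model.decoder(torch.cat([sem_b, dom_a], -1)), feat_a) +
            mse(model.decoder(torch.cat([sem_a, dom_b], -1)), feat_b) +
            mse(model.decoder(torch.cat([sem_b, dom_b], -1)), feat_b))
```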
Cycle-Consistent Deep Generative Hashing for Cross-Modal Retrieval
In this paper, we propose a novel deep generative approach to cross-modal
retrieval to learn hash functions in the absence of paired training samples
through a cycle consistency loss. Our proposed approach employs an adversarial
training scheme to learn a pair of hash functions enabling translation between
modalities while assuming the underlying semantic relationship. To induce
semantics into the hash codes of each input-output pair, a cycle consistency
loss is further imposed on top of the adversarial training to strengthen the
correlations between inputs and corresponding outputs. Our approach learns
hash functions generatively, such that the learned hash codes maximally
correlate each input-output correspondence while also regenerating the inputs
so as to minimize the information loss. Learning to hash is thus performed by
jointly optimizing the parameters of the hash functions across modalities as
well as the associated generative models. Extensive experiments on a variety of
large-scale cross-modal data sets demonstrate that our proposed method achieves
better retrieval results than state-of-the-art methods. Comment: To appear in
IEEE Trans. Image Processing. arXiv admin note: text overlap with
arXiv:1703.10593 by other authors.
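The cycle-consistency idea can be sketched roughly as follows for two modalities, assuming relaxed (tanh) codes and simple generator networks that translate codes across modalities; all names and the L1 form of the cycle term are illustrative assumptions:

```python
import torch.nn as nn

class HashEncoder(nn.Module):
    """Map modality-specific features to relaxed binary codes in [-1, 1]."""
    def __init__(self, in_dim, n_bits):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, n_bits), nn.Tanh())

    def forward(self, x):
        return self.net(x)

def cycle_loss(G_img2txt, G_txt2img, img_code, txt_code):
    """Translate codes to the other modality and back; penalise the round trip."""
    l1 = nn.functional.l1_loss
    return (l1(G_txt2img(G_img2txt(img_code)), img_code) +
            l1(G_img2txt(G_txt2img(txt_code)), txt_code))
```

At retrieval time the relaxed codes would be binarised (e.g. by taking their sign), which is the usual convention for learning-to-hash models and an assumption here.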
ACNet: Approaching-and-Centralizing Network for Zero-Shot Sketch-Based Image Retrieval
The huge domain gap between sketches and photos and the highly abstract
sketch representations pose challenges for sketch-based image retrieval
(SBIR). Zero-shot sketch-based image retrieval (ZS-SBIR) is more generic and
practical but poses an even greater challenge because of the additional
knowledge gap between the seen and unseen categories. To simultaneously
mitigate both gaps, we propose an Approaching-and-Centralizing Network (termed
"ACNet") to jointly optimize sketch-to-photo synthesis and image retrieval.
The retrieval module guides the synthesis module to generate large
amounts of diverse photo-like images which gradually approach the photo domain,
and thus better serve the retrieval module than ever to learn domain-agnostic
representations and category-agnostic common knowledge for generalizing to
unseen categories. These diverse images generated with retrieval guidance can
effectively alleviate overfitting to concrete category-specific training
samples with high gradients. We also find that the proxy-based NormSoftmax
loss is effective in the zero-shot setting
because its centralizing effect can stabilize our joint training and promote
the generalization ability to unseen categories. Our approach is simple yet
effective, which achieves state-of-the-art performance on two widely used
ZS-SBIR datasets and surpasses previous methods by a large margin. Comment:
The paper is under consideration at IEEE Transactions on Circuits and Systems
for Video Technology.
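For reference, a minimal sketch of a proxy-based NormSoftmax loss of the kind mentioned above, with learnable per-class proxies and temperature-scaled cosine logits; the temperature value and class/variable names are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormSoftmaxLoss(nn.Module):
    def __init__(self, num_classes, dim, temperature=0.05):
        super().__init__()
        self.proxies = nn.Parameter(torch.randn(num_classes, dim))  # one proxy per class
        self.temperature = temperature

    def forward(self, embeddings, labels):
        z = F.normalize(embeddings, dim=-1)
        p = F.normalize(self.proxies, dim=-1)
        logits = z @ p.t() / self.temperature      # (B, num_classes) cosine similarities
        # Softmax over proxies pulls each embedding towards its class proxy (centralizing effect).
        return F.cross_entropy(logits, labels)
```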
CLIP for All Things Zero-Shot Sketch-Based Image Retrieval, Fine-Grained or Not
In this paper, we leverage CLIP for zero-shot sketch-based image retrieval
(ZS-SBIR). We are largely inspired by recent advances in foundation models and
the unparalleled generalisation ability they seem to offer, but for the first
time tailor this ability to benefit the sketch community. We put forward novel designs on
how best to achieve this synergy, for both the category setting and the
fine-grained setting ("all"). At the very core of our solution is a prompt
learning setup. First, we show that just by factoring in sketch-specific
prompts, we already have a category-level ZS-SBIR system that surpasses all
prior art by a large margin (24.8%), a strong testimony to the CLIP and
ZS-SBIR synergy. Moving on to the fine-grained setup is, however, trickier and
requires a
deeper dive into this synergy. For that, we come up with two specific designs
to tackle the fine-grained matching nature of the problem: (i) an additional
regularisation loss to ensure the relative separation between sketches and
photos is uniform across categories, which is not the case for the gold
standard standalone triplet loss, and (ii) a clever patch shuffling technique
to help establish instance-level structural correspondences between
sketch-photo pairs. With these designs, we again observe significant
performance gains in the region of 26.9% over the previous state of the art.
The take-home message, if any, is that the proposed CLIP and prompt learning
paradigm carries great promise in tackling other sketch-related tasks (not
limited to ZS-SBIR) where data scarcity remains a great challenge. Project
page: https://aneeshan95.github.io/Sketch_LVM/ Comment: Accepted in CVPR 2023.
Project page available at https://aneeshan95.github.io/Sketch_LVM
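The patch shuffling design can be sketched roughly as follows, assuming both images of a sketch-photo pair are cut into the same grid and reordered with a shared random permutation so that matching the shuffled pair forces patch-wise, instance-level correspondences; the grid size and tensor layout are illustrative assumptions:

```python
import torch

def shuffle_pair(sketch, photo, grid=3):
    """sketch, photo: (C, H, W) tensors with H and W divisible by `grid`."""
    C, H, W = sketch.shape
    ph, pw = H // grid, W // grid
    perm = torch.randperm(grid * grid)          # one permutation shared by both images

    def shuffle(img):
        # Split into grid*grid patches, reorder them, and stitch back together.
        patches = img.unfold(1, ph, ph).unfold(2, pw, pw)            # (C, g, g, ph, pw)
        patches = patches.reshape(C, grid * grid, ph, pw)[:, perm]   # permute patches
        patches = patches.reshape(C, grid, grid, ph, pw)
        return patches.permute(0, 1, 3, 2, 4).reshape(C, H, W)

    return shuffle(sketch), shuffle(photo)
```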