LiveSketch: Query Perturbations for Guided Sketch-based Visual Search
LiveSketch is a novel algorithm for searching large image collections using
hand-sketched queries. LiveSketch tackles the inherent ambiguity of sketch
search by creating visual suggestions that augment the query as it is drawn,
making query specification an iterative rather than one-shot process that helps
disambiguate users' search intent. Our technical contributions are: a triplet
convnet architecture that incorporates an RNN based variational autoencoder to
search for images using vector (stroke-based) queries; real-time clustering to
identify likely search intents (and so, targets within the search embedding);
and the use of backpropagation from those targets to perturb the input stroke
sequence, so suggesting alterations to the query in order to guide the search.
We show improvements in accuracy and time-to-task over contemporary baselines
using a 67M image corpus. Comment: Accepted to CVPR 201
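The triplet objective that underpins search embeddings like the one above can be sketched in a few lines. This is a generic hinge-style triplet loss in plain NumPy, not the paper's exact architecture; the margin value and embedding shapes are illustrative assumptions:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss: pull the anchor toward the positive
    and push it away from the negative by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2)  # squared distance to positive
    d_neg = np.sum((anchor - negative) ** 2)  # squared distance to negative
    return max(0.0, d_pos - d_neg + margin)
```

In a full system this scalar would be minimized over a network's embeddings, so that matching sketch/image pairs land closer together than non-matching ones.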
Deep Shape Matching
We cast shape matching as metric learning with convolutional networks. We
break the end-to-end process of image representation into two parts. Firstly,
well established efficient methods are chosen to turn the images into edge
maps. Secondly, the network is trained with edge maps of landmark images, which
are automatically obtained by a structure-from-motion pipeline. The learned
representation is evaluated on a range of different tasks, providing
improvements on challenging cases of domain generalization, generic
sketch-based image retrieval or its fine-grained counterpart. In contrast to
other methods that learn a different model per task, object category, or
domain, we use the same network throughout all our experiments, achieving
state-of-the-art results in multiple benchmarks. Comment: ECCV 201
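The first stage above, turning an image into an edge map, can be approximated with any standard gradient operator. A minimal NumPy sketch using the classic 3x3 Sobel kernels (a stand-in for illustration, not necessarily the method the paper chose):

```python
import numpy as np

def sobel_edge_map(img):
    """Gradient-magnitude edge map of a 2-D grayscale image via Sobel filters."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx = np.sum(patch * kx)  # horizontal gradient
            gy = np.sum(patch * ky)  # vertical gradient
            out[i, j] = np.hypot(gx, gy)
    return out
```

Flat regions produce zero response while intensity steps produce strong edges, which is what makes the edge-map domain a plausible common ground for photos and sketches.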
End-to-End Photo-Sketch Generation via Fully Convolutional Representation Learning
Sketch-based face recognition is an interesting task in vision and multimedia
research, yet it is quite challenging due to the great difference between face
photos and sketches. In this paper, we propose a novel approach for
photo-sketch generation, aiming to automatically transform face photos into
detail-preserving personal sketches. Unlike the traditional models synthesizing
sketches based on a dictionary of exemplars, we develop a fully convolutional
network to learn the end-to-end photo-sketch mapping. Our approach takes whole
face photos as inputs and directly generates the corresponding sketch images
with efficient inference and learning; the architecture is built by stacking
only convolutional kernels of very small size. To preserve the person
identity during the photo-sketch transformation, we define our optimization
objective in the form of joint generative-discriminative minimization. In
particular, a discriminative regularization term is incorporated into the
photo-sketch generation, enhancing the discriminability of the generated person
sketches against other individuals. Extensive experiments on several standard
benchmarks suggest that our approach outperforms other state-of-the-art methods
in both photo-sketch generation and face sketch verification. Comment: 8 pages, 6 figures. Proceedings of the ACM International Conference on
Multimedia Retrieval (ICMR), 201
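A toy rendering of the joint generative-discriminative objective described above. All names, weights, and the exact form of the regularizer are assumptions for illustration; the idea is only that a reconstruction term is combined with a penalty on similarity to other identities:

```python
import numpy as np

def joint_loss(gen_sketch, target_sketch, gen_feat, other_id_feats, lam=0.1):
    """Generative term plus a discriminative regularizer (illustrative form)."""
    # Generative term: reconstruction error against the ground-truth sketch.
    recon = np.mean((gen_sketch - target_sketch) ** 2)
    # Discriminative regularizer (assumed form): penalize similarity between
    # the generated sketch's feature and the features of other identities.
    sims = [1.0 / (1.0 + np.sum((gen_feat - f) ** 2)) for f in other_id_feats]
    disc = np.mean(sims)
    return recon + lam * disc
```

Minimizing the second term pushes the generated sketch's feature away from other individuals, which is what enhances discriminability for sketch verification.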
Exploiting Unlabelled Photos for Stronger Fine-Grained SBIR
This paper advances the fine-grained sketch-based image retrieval (FG-SBIR)
literature by putting forward a strong baseline that surpasses the prior
state of the art by ~11%. This is achieved not via complicated design, but by
addressing two critical issues facing the community (i) the gold standard
triplet loss does not enforce holistic latent space geometry, and (ii) there
are never enough sketches to train a high accuracy model. For the former, we
propose a simple modification to the standard triplet loss, that explicitly
enforces separation amongst photos/sketch instances. For the latter, we put
forward a novel knowledge distillation module that can leverage photo data for model
training. Both modules are then combined in a plug-and-play training
paradigm that allows for more stable training. More specifically, for (i) we
employ an intra-modal triplet loss amongst sketches to pull sketches of the
same instance together while pushing others away, and another amongst photos to push away
different photo instances while bringing closer a structurally augmented
version of the same photo (offering a gain of ~4-6%). To tackle (ii), we first
pre-train a teacher on the large set of unlabelled photos over the
aforementioned intra-modal photo triplet loss. Then we distill the contextual
similarity present amongst the instances in the teacher's embedding space to
that in the student's embedding space, by matching the distribution over
inter-feature distances of respective samples in both embedding spaces
(delivering a further gain of ~4-5%). Apart from outperforming prior arts
significantly, our model also yields satisfactory results on generalising to
new classes. Project page: https://aneeshan95.github.io/Sketch_PVT/
Comment: Accepted in CVPR 2023.
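The contextual-similarity distillation described above can be caricatured as matching softmax distributions over pairwise embedding distances in the teacher's and student's spaces. This NumPy sketch assumes squared Euclidean distances and a KL-divergence objective; the paper's exact distance measure and matching criterion may differ:

```python
import numpy as np

def pairwise_dists(feats):
    """Squared Euclidean distances between all pairs of row embeddings."""
    sq = np.sum(feats ** 2, axis=1)
    return sq[:, None] + sq[None, :] - 2 * feats @ feats.T

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def distill_loss(teacher_feats, student_feats):
    """KL divergence between distance distributions (assumed objective):
    match the student's inter-feature distance distribution to the teacher's."""
    n = len(teacher_feats)
    iu = np.triu_indices(n, k=1)  # unique pairs only
    p = softmax(-pairwise_dists(teacher_feats)[iu])
    q = softmax(-pairwise_dists(student_feats)[iu])
    return np.sum(p * np.log(p / q))
```

When the student's embedding space reproduces the teacher's relative distances exactly, the loss vanishes; any mismatch in the distance distribution yields a positive penalty.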