Recent Advances in Transfer Learning for Cross-Dataset Visual Recognition: A Problem-Oriented Perspective
This paper takes a problem-oriented perspective and presents a comprehensive
review of transfer learning methods, both shallow and deep, for cross-dataset
visual recognition. Specifically, it categorises cross-dataset recognition into
seventeen problems based on a set of carefully chosen data and label
attributes. Such a problem-oriented taxonomy has allowed us to examine how
different transfer learning approaches tackle each problem and how well each
problem has been researched to date. This problem-oriented review of the
advances in transfer learning has not only revealed the challenges in transfer
learning for visual recognition, but also identified the problems (eight of the
seventeen) that have been scarcely studied. The survey thus presents an
up-to-date technical review for researchers, as well as a systematic approach
and a reference for machine learning practitioners to categorise a real problem
and look up a possible solution accordingly.
LiveSketch: Query Perturbations for Guided Sketch-based Visual Search
LiveSketch is a novel algorithm for searching large image collections using
hand-sketched queries. LiveSketch tackles the inherent ambiguity of sketch
search by creating visual suggestions that augment the query as it is drawn,
making query specification an iterative rather than one-shot process that helps
disambiguate users' search intent. Our technical contributions are: a triplet
convnet architecture that incorporates an RNN based variational autoencoder to
search for images using vector (stroke-based) queries; real-time clustering to
identify likely search intents (and so, targets within the search embedding);
and the use of backpropagation from those targets to perturb the input stroke
sequence, so suggesting alterations to the query in order to guide the search.
We show improvements in accuracy and time-to-task over contemporary baselines
using a 67M image corpus. (Comment: Accepted to CVPR 2019)
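
As a hedged illustration of the backpropagation-driven query perturbation described above, the following minimal PyTorch sketch descends on the input stroke sequence rather than on model weights. The encoder, tensor shapes, and hyperparameters are illustrative assumptions, not the authors' released code.

import torch
import torch.nn as nn

class SketchEncoder(nn.Module):
    # Toy stand-in for an RNN-based encoder over (dx, dy, pen) stroke triples.
    def __init__(self, in_dim=3, hidden=128, embed=64):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, embed)

    def forward(self, strokes):                # strokes: (B, T, 3)
        _, h = self.rnn(strokes)
        return self.fc(h[-1])                  # (B, embed)

def perturb_query(encoder, strokes, target_embed, steps=20, lr=0.05):
    # Gradient descent on the *input* strokes so their embedding moves toward
    # a clustered search-intent target, yielding a suggested query alteration.
    strokes = strokes.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([strokes], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (encoder(strokes) - target_embed).pow(2).sum()
        loss.backward()
        opt.step()
    return strokes.detach()

In the paper this mechanism operates inside the triplet/VAE search embedding; the sketch only conveys the core idea of optimising the query itself.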
Align before Search: Aligning Ads Image to Text for Accurate Cross-Modal Sponsored Search
Cross-Modal sponsored search displays multi-modal advertisements (ads) when
consumers look for desired products by natural language queries in search
engines. Since multi-modal ads bring complementary details for query-ads
matching, the ability to align ads-specific information in both images and
texts is crucial for accurate and flexible sponsored search. Conventional
research mainly studies from the view of modeling the implicit correlations
between images and texts for query-ads matching, ignoring the alignment of
detailed product information and resulting in suboptimal search performance. In
this work, we propose a simple alignment network for explicitly mapping
fine-grained visual parts in ads images to the corresponding text, which
leverages the co-occurrence structure consistency between vision and language
spaces without requiring expensive labeled training data. Moreover, we propose
a novel model for cross-modal sponsored search that effectively conducts the
cross-modal alignment and query-ads matching in two separate processes. In this
way, the model matches the multi-modal input in the same language space,
resulting in superior performance with merely half of the training data. Our
model outperforms state-of-the-art models by 2.57% on a large commercial
dataset. Beyond sponsored search, our alignment method is applicable to general
cross-modal search: on a typical cross-modal retrieval task on the MSCOCO
dataset it achieves consistent performance improvements, demonstrating the
generalization ability of our method. Our code is available at
https://github.com/Pter61/AlignCMSS
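
As a rough illustration of the explicit alignment step, the following sketch soft-attends each visual part over the ad's text tokens so that matching happens in a single (text) feature space. The function names, pooling, and temperature are assumptions for illustration, not the released AlignCMSS implementation.

import torch
import torch.nn.functional as F

def align_regions_to_text(region_feats, token_feats, tau=0.07):
    # region_feats: (R, D) visual parts; token_feats: (T, D) text tokens.
    # Each region is re-expressed as an attention-weighted mix of tokens,
    # exploiting co-occurrence structure between the two spaces.
    sim = region_feats @ token_feats.t()        # (R, T) similarities
    attn = F.softmax(sim / tau, dim=-1)         # soft alignment per region
    return attn @ token_feats                   # (R, D) regions in text space

def match_score(query_feats, aligned_ad_feats):
    # Cosine similarity between mean-pooled query and aligned-ad features.
    q = F.normalize(query_feats.mean(0), dim=-1)
    a = F.normalize(aligned_ad_feats.mean(0), dim=-1)
    return (q * a).sum()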
Cross-Paced Representation Learning with Partial Curricula for Sketch-based Image Retrieval
In this paper we address the problem of learning robust cross-domain
representations for sketch-based image retrieval (SBIR). While most SBIR
approaches focus on extracting low- and mid-level descriptors for direct
feature matching, recent works have shown the benefit of learning coupled
feature representations to describe data from two related sources. However,
cross-domain representation learning methods are typically cast into non-convex
minimization problems that are difficult to optimize, leading to unsatisfactory
performance. Inspired by self-paced learning, a learning methodology designed
to overcome convergence issues related to local optima by exploiting the
samples in a meaningful order (i.e. easy to hard), we introduce the cross-paced
partial curriculum learning (CPPCL) framework. Compared with existing
self-paced learning methods which only consider a single modality and cannot
deal with prior knowledge, CPPCL is specifically designed to assess the
learning pace by jointly handling data from dual sources and modality-specific
prior information provided in the form of partial curricula. Additionally,
thanks to the learned dictionaries, we demonstrate that the proposed CPPCL
embeds robust coupled representations for SBIR. Our approach is extensively
evaluated on four publicly available datasets (i.e. CUFS, Flickr15K, QueenMary
SBIR and TU-Berlin Extension datasets), showing superior performance over
competing SBIR methods.
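
For intuition about the self-paced mechanism CPPCL builds on, the sketch below implements the classic binary admission rule: a sample joins training only when its loss falls below a pace threshold that grows over rounds (easy to hard), and a partial-curriculum prior can admit flagged samples earlier. This is a generic self-paced learning sketch under our own assumptions, not the paper's exact cross-paced formulation.

import numpy as np

def self_paced_weights(losses, lam, curriculum=None):
    # Binary self-paced weights: include sample i iff its (curriculum-
    # discounted) loss is below the pace threshold lam. `curriculum` holds
    # prior easiness scores in [0, 1]; higher means admitted earlier.
    effective = losses if curriculum is None else losses * (1.0 - 0.5 * curriculum)
    return (effective < lam).astype(float)

# Toy pacing loop: relax lam so harder samples join over time.
losses = np.array([0.2, 0.9, 0.5, 1.4])
prior = np.array([0.0, 1.0, 0.0, 1.0])          # samples 1 and 3 flagged easy
lam = 0.4
for step in range(3):
    v = self_paced_weights(losses, lam, prior)
    print(f"step {step}: lambda={lam:.2f}, weights={v}")
    lam *= 1.8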