Low-Shot Learning with Imprinted Weights
Human vision is able to immediately recognize novel visual categories after
seeing just one or a few training examples. We describe how to add a similar
capability to ConvNet classifiers by directly setting the final layer weights
from novel training examples during low-shot learning. We call this process
weight imprinting as it directly sets weights for a new category based on an
appropriately scaled copy of the embedding layer activations for that training
example. The imprinting process is a valuable complement to training with
stochastic gradient descent, as it immediately yields good classification
performance and provides an initialization for any further fine-tuning. We
show how this imprinting process is related to proxy-based embeddings. However,
it differs in that only a single imprinted weight vector is learned for each
novel category, rather than relying on a nearest-neighbor distance to training
instances as typically used with embedding methods. Our experiments show that
averaging imprinted weights provides better generalization than using
nearest-neighbor instance embeddings.
Comment: CVPR 2018
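A minimal sketch of this imprinting step, assuming a cosine-similarity classifier whose final-layer weight rows are unit-normalized; the function and tensor names below are illustrative, not taken from the paper's code:

```python
import torch
import torch.nn.functional as F

def imprint_weights(weight, embeddings, labels):
    """Overwrite classifier rows for novel classes with imprinted weights.

    weight:     (num_classes, d) final-layer weight matrix of a cosine classifier
    embeddings: (n, d) embedding-layer activations for the novel training examples
    labels:     (n,) novel-class indices whose rows of `weight` get imprinted
    """
    emb = F.normalize(embeddings, dim=1)        # project activations onto the unit sphere
    for c in labels.unique():
        proto = emb[labels == c].mean(dim=0)    # average multiple shots of the same class
        weight[c] = F.normalize(proto, dim=0)   # re-normalize the averaged vector
    return weight
```

Averaging the normalized embeddings and then re-normalizing the mean corresponds to the multi-shot averaging that the abstract reports generalizes better than nearest-neighbor instance embeddings.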
Ranked List Loss for Deep Metric Learning
The objective of deep metric learning (DML) is to learn embeddings that can
capture semantic similarity and dissimilarity information among data points.
Existing pairwise or tripletwise loss functions used in DML are known to suffer
from slow convergence due to a large proportion of trivial pairs or triplets as
the model improves. To address this, ranking-motivated structured losses have
recently been proposed to incorporate multiple examples and exploit the structured
information among them. They converge faster and achieve state-of-the-art
performance. In this work, we unveil two limitations of existing
ranking-motivated structured losses and propose a novel ranked list loss to
solve both of them. First, given a query, only a fraction of data points is
incorporated to build the similarity structure. Consequently, some useful
examples are ignored and the structure is less informative. To address this, we
propose to build a set-based similarity structure by exploiting all instances
in the gallery. The learning setting can be interpreted as few-shot retrieval:
given a mini-batch, every example is used in turn as a query, and the remaining
examples compose the gallery to search, i.e., the support set in the few-shot
setting. These remaining examples are split into a positive set and a negative
set. For every
mini-batch, the learning objective of ranked list loss is to make the query
closer to the positive set than to the negative set by a margin. Second,
previous methods aim to pull positive pairs as close as possible in the
embedding space. As a result, the intraclass data distribution tends to be
extremely compressed. In contrast, we propose to learn a hypersphere for each
class in order to preserve useful similarity structure inside it, which
functions as regularisation. Extensive experiments demonstrate the superiority
of our proposal in comparison with state-of-the-art methods.
Comment: Accepted to T-PAMI. To read the official version, please go to
IEEE Xplore. Fine-grained image retrieval task. Our source code is
available online: https://github.com/XinshaoAmosWang/Ranked-List-Loss-for-DM
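A minimal sketch of the set-based objective described above, assuming L2-normalized embeddings; the hyperparameter values are placeholders, and the paper's additional weighting of margin-violating negatives is omitted for brevity:

```python
import torch
import torch.nn.functional as F

def ranked_list_loss(embeddings, labels, alpha=1.2, margin=0.4):
    """Set-based margin loss over a mini-batch, in the spirit of ranked list loss.

    embeddings: (n, d), assumed L2-normalized; labels: (n,) class indices.
    Positives are pulled inside a per-class hypersphere with boundary
    alpha - margin; negatives are pushed beyond the boundary alpha.
    """
    dist = torch.cdist(embeddings, embeddings)              # pairwise Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    diag = torch.eye(len(labels), dtype=torch.bool, device=embeddings.device)
    pos_mask = same & ~diag                                 # gallery positives per query
    neg_mask = ~same                                        # gallery negatives per query
    # non-trivial positives: outside the hypersphere of radius alpha - margin
    pos_loss = (F.relu(dist - (alpha - margin)) * pos_mask).sum() / pos_mask.sum().clamp(min=1)
    # non-trivial negatives: inside the boundary alpha
    neg_loss = (F.relu(alpha - dist) * neg_mask).sum() / neg_mask.sum().clamp(min=1)
    return pos_loss + neg_loss
```

Here every example in the batch acts once as the query: its positives incur loss only if they fall outside the boundary alpha - margin, and its negatives only if they intrude inside alpha, so trivial pairs contribute nothing to the gradient while the hypersphere preserves some intraclass structure.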