Energy Confused Adversarial Metric Learning for Zero-Shot Image Retrieval and Clustering
Deep metric learning has been widely applied in many computer vision tasks,
and recently it has attracted particular attention in \emph{zero-shot image
retrieval and clustering} (ZSRC), where a good embedding is required so that
unseen classes can be distinguished well. Most existing works equate this
'good' embedding with a discriminative one and thus race to devise powerful
metric objectives or hard-sample mining strategies for learning discriminative
embeddings. In this paper, however, we first emphasize that generalization
ability is an equally core ingredient of this 'good' embedding, and in fact it
largely determines metric performance in zero-shot settings.
We then propose the Energy Confused Adversarial Metric Learning (ECAML)
framework to explicitly optimize a robust metric. It is mainly achieved by
introducing an Energy Confusion regularization term, which breaks away from
the traditional metric-learning practice of devising ever more discriminative
objectives and instead seeks to 'confuse' the learned model, encouraging
generalization by reducing overfitting on the seen classes. We train this
confusion term together with the conventional metric objective in an
adversarial manner.
Although it may seem counterintuitive to 'confuse' the network, we show that
ECAML serves as an effective regularization technique for metric learning and
is applicable to various conventional metric methods. Our experiments
demonstrate the importance of learning embeddings with good generalization,
achieving state-of-the-art performance on the popular CUB, CARS, Stanford
Online Products and In-Shop datasets for ZSRC tasks.
Code available at http://www.bhchen.cn/. Comment: AAAI 2019, Spotlight
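The abstract's core idea, a confusion regularizer trained adversarially against a conventional metric objective, can be illustrated with a minimal sketch. This is not the paper's actual loss; the `energy_confusion` term and the combination weight `lam` are illustrative stand-ins for the Energy Confusion regularization it describes.

```python
import numpy as np

def triplet_loss(anchor, pos, neg, margin=0.2):
    # Conventional discriminative metric objective: pull the positive
    # close to the anchor, push the negative beyond a margin.
    d_pos = np.sum((anchor - pos) ** 2)
    d_neg = np.sum((anchor - neg) ** 2)
    return max(0.0, d_pos - d_neg + margin)

def energy_confusion(emb_a, emb_b):
    # Hypothetical confusion term: the "energy" (squared distance)
    # between embeddings from different seen classes. Minimizing it
    # deliberately blurs seen-class boundaries, acting against
    # overfitting on seen classes.
    return np.sum((emb_a - emb_b) ** 2)

def ecaml_style_loss(anchor, pos, neg, lam=0.1):
    # Adversarial combination (sketch): the metric term sharpens the
    # embedding while the confusion term blurs it; lam trades them off.
    return triplet_loss(anchor, pos, neg) + lam * energy_confusion(anchor, neg)
```

The tension between the two terms is the point: the triplet term wants anchor-negative distances large, while the confusion term penalizes exactly that, so training settles on an embedding that is discriminative but not overly specialized to seen classes.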
Relative Comparison Kernel Learning with Auxiliary Kernels
In this work we consider the problem of learning a positive semidefinite
kernel matrix from relative comparisons of the form: "object A is more similar
to object B than it is to C", where comparisons are given by humans. Existing
solutions to this problem assume many comparisons are provided to learn a high
quality kernel. However, this can be considered unrealistic for many real-world
tasks since relative assessments require human input, which is often costly or
difficult to obtain. Because of this, only a limited number of these
comparisons may be provided. In this work, we explore methods for aiding the
process of learning a kernel with the help of auxiliary kernels built from more
easily extractable information regarding the relationships among objects. We
propose a new kernel learning approach in which the target kernel is defined as
a conic combination of auxiliary kernels and a kernel whose elements are
learned directly. We formulate a convex optimization to solve for this target
kernel that adds only minor overhead to methods that use no auxiliary
information. Empirical results show that, given few training relative
comparisons, our method learns kernels that generalize to more out-of-sample
comparisons than methods that use no auxiliary information, as well as similar
methods that learn metrics over objects.
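The target-kernel construction the abstract describes, a conic combination of auxiliary kernels plus a directly learned component constrained to be positive semidefinite, can be sketched as follows. The function names and the eigenvalue-clipping PSD projection are illustrative assumptions, not the paper's optimization procedure (which is formulated as a convex program).

```python
import numpy as np

def project_psd(K):
    # Project a symmetric matrix onto the PSD cone by clipping
    # negative eigenvalues to zero.
    K = 0.5 * (K + K.T)
    w, V = np.linalg.eigh(K)
    return (V * np.clip(w, 0.0, None)) @ V.T

def conic_target_kernel(aux_kernels, alphas, K_direct):
    # Target kernel: a conic (nonnegative-weighted) combination of
    # auxiliary kernels plus a directly learned component, projected
    # back to the PSD cone.
    assert all(a >= 0 for a in alphas), "conic weights must be nonnegative"
    K = sum(a * Ka for a, Ka in zip(alphas, aux_kernels)) + K_direct
    return project_psd(K)

def satisfies_comparison(K, a, b, c):
    # A relative comparison "object a is more similar to b than to c"
    # holds under K if the kernel-induced distance d(a,b) < d(a,c),
    # where d(i,j)^2 = K_ii + K_jj - 2*K_ij.
    d_ab = K[a, a] + K[b, b] - 2 * K[a, b]
    d_ac = K[a, a] + K[c, c] - 2 * K[a, c]
    return bool(d_ab < d_ac)
```

In the actual method the conic weights and the direct component would be decision variables of a convex program fit to the human-provided comparisons; the sketch only shows the structure of the target kernel and how a comparison constraint is checked.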
LiveSketch: Query Perturbations for Guided Sketch-based Visual Search
LiveSketch is a novel algorithm for searching large image collections using
hand-sketched queries. LiveSketch tackles the inherent ambiguity of sketch
search by creating visual suggestions that augment the query as it is drawn,
making query specification an iterative rather than one-shot process that helps
disambiguate users' search intent. Our technical contributions are: a triplet
convnet architecture that incorporates an RNN based variational autoencoder to
search for images using vector (stroke-based) queries; real-time clustering to
identify likely search intents (and so, targets within the search embedding);
and the use of backpropagation from those targets to perturb the input stroke
sequence, so suggesting alterations to the query in order to guide the search.
We show improvements in accuracy and time-to-task over contemporary baselines
using a 67M image corpus. Comment: Accepted to CVPR 2019
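The query-perturbation idea, backpropagating from inferred search targets through the encoder to nudge the input stroke sequence itself, can be sketched in miniature. The linear `embed` map below is a toy stand-in for the paper's triplet convnet / RNN-VAE encoder, assumed purely for illustration.

```python
import numpy as np

def embed(strokes, W):
    # Toy stand-in for the sketch encoder: a linear map from a
    # flattened stroke sequence to the search embedding space.
    return W @ strokes.ravel()

def perturb_query(strokes, W, target, lr=0.1, steps=50):
    # Move the input stroke sequence so its embedding approaches an
    # inferred search target, mirroring LiveSketch's use of
    # backpropagation from cluster targets to the query itself.
    s = strokes.ravel().copy()
    for _ in range(steps):
        # Gradient of ||W s - target||^2 with respect to the strokes;
        # for a real encoder this would come from autodiff.
        grad = 2.0 * W.T @ (W @ s - target)
        s -= lr * grad
    return s.reshape(strokes.shape)
```

In the full system these perturbed strokes are rendered back to the user as visual suggestions, making query specification iterative rather than one-shot.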