Low-Shot Learning with Imprinted Weights
Human vision is able to immediately recognize novel visual categories after
seeing just one or a few training examples. We describe how to add a similar
capability to ConvNet classifiers by directly setting the final layer weights
from novel training examples during low-shot learning. We call this process
weight imprinting as it directly sets weights for a new category based on an
appropriately scaled copy of the embedding layer activations for that training
example. The imprinting process provides a valuable complement to training with
stochastic gradient descent, as it provides immediate good classification
performance and an initialization for any further fine-tuning in the future. We
show how this imprinting process is related to proxy-based embeddings. However,
it differs in that only a single imprinted weight vector is learned for each
novel category, rather than relying on a nearest-neighbor distance to training
instances as typically used with embedding methods. Our experiments show that
averaging imprinted weights provides better generalization than using
nearest-neighbor instance embeddings.
Comment: CVPR 2018
LDMNet: Low Dimensional Manifold Regularized Neural Networks
Deep neural networks have proved very successful on archetypal tasks for
which large training sets are available, but when the training data are scarce,
their performance suffers from overfitting. Many existing methods of reducing
overfitting are data-independent, and their efficacy is often limited when the
training set is very small. Data-dependent regularizations are mostly motivated
by the observation that data of interest lie close to a manifold, which is
typically hard to parametrize explicitly and often requires human input of
tangent vectors. These methods typically only focus on the geometry of the
input data, and do not necessarily encourage the networks to produce
geometrically meaningful features. To resolve this, we propose a new framework,
the Low-Dimensional-Manifold-regularized neural Network (LDMNet), which
incorporates a feature regularization method that focuses on the geometry of
both the input data and the output features. In LDMNet, we regularize the
network by encouraging the combination of the input data and the output
features to sample a collection of low dimensional manifolds, which are
searched efficiently without explicit parametrization. To achieve this, we
directly use the manifold dimension as a regularization term in a variational
functional. The resulting Euler-Lagrange equation is a Laplace-Beltrami
equation over a point cloud, which is solved by the point integral method
without increasing the computational complexity. We demonstrate two benefits of
LDMNet in the experiments. First, we show that LDMNet significantly outperforms
widely used network regularizers such as weight decay and Dropout. Second, we
show that LDMNet can be designed to extract common features of an object imaged
via different modalities, which proves to be very useful in real-world
applications such as cross-spectral face recognition.
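The manifold regularizer is the technical core here. The paper's actual solver alternates SGD on the network weights with a Laplace-Beltrami equation solved by the point integral method; as a rough, hypothetical stand-in, the sketch below (PyTorch; all names are ours) penalizes a kNN graph-Laplacian Dirichlet energy over the concatenated point cloud (x_i, f(x_i)), which captures the same idea of keeping inputs and output features jointly on a smooth, low-dimensional structure:

```python
import torch

def manifold_smoothness_penalty(x, feats, k=8, sigma=1.0):
    """Graph-Laplacian proxy for a manifold regularizer (illustrative).

    Builds a Gaussian kNN affinity over the concatenated points
    (x_i, f(x_i)) and returns their Dirichlet energy; a simplified
    substitute for LDMNet's point integral method, not the paper's code.
    """
    # Point cloud formed by concatenating inputs and output features
    p = torch.cat([x.flatten(1), feats.flatten(1)], dim=1)
    d2 = torch.cdist(p, p).pow(2)              # pairwise squared distances
    w = torch.exp(-d2 / (2 * sigma ** 2))      # Gaussian affinities
    idx = w.topk(k + 1, dim=1).indices         # k nearest neighbors (+ self)
    mask = torch.zeros_like(w).scatter_(1, idx, 1.0)
    w = w * mask                               # sparsify to the kNN graph
    # Dirichlet energy: 0.5 * sum_ij w_ij * ||p_i - p_j||^2
    return 0.5 * (w * d2).sum()

# Hypothetical usage inside a training step:
#   loss = task_loss + lam * manifold_smoothness_penalty(x, feats)
```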
An Efficient Dual Approach to Distance Metric Learning
Distance metric learning is of fundamental interest in machine learning
because the distance metric employed can significantly affect the performance
of many learning methods. Quadratic Mahalanobis metric learning is a popular
approach to the problem, but typically requires solving a semidefinite
programming (SDP) problem, which is computationally expensive. Standard
interior-point SDP solvers typically have a complexity of $O(D^{6.5})$ (with
$D$ the dimension of the input data), and can thus only practically solve
problems with fewer than a few thousand variables. Since the number of
variables is $D(D+1)/2$, this implies a limit of around a few hundred
dimensions on the size of problem that can practically be solved. The
complexity of the
popular quadratic Mahalanobis metric learning approach thus limits the size of
problem to which metric learning can be applied. Here we propose a
significantly more efficient approach to the metric learning problem based on
the Lagrange dual formulation of the problem. The proposed formulation is much
simpler to implement, and therefore allows much larger Mahalanobis metric
learning problems to be solved. The time complexity of the proposed method is
$O(D^3)$, which is significantly lower than that of the SDP approach.
Experiments on a variety of datasets demonstrate that the proposed method
achieves an accuracy comparable to the state-of-the-art, but is applicable to
significantly larger problems. We also show that the proposed method can be
applied to solve more general Frobenius-norm regularized SDP problems
approximately.
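To make the complexity claims concrete, here is a toy NumPy sketch of projected-gradient Mahalanobis metric learning; it is not the paper's Lagrange dual algorithm (which we do not reproduce), but its dominant per-iteration cost, the $O(D^3)$ eigendecomposition used to stay in the PSD cone, illustrates the complexity quoted above. All names are hypothetical:

```python
import numpy as np

def project_psd(M):
    """Project a symmetric matrix onto the PSD cone (O(D^3) eigendecomposition)."""
    vals, vecs = np.linalg.eigh((M + M.T) / 2)
    vals = np.clip(vals, 0.0, None)        # clamp negative eigenvalues to zero
    return (vecs * vals) @ vecs.T

def learn_mahalanobis(sim_pairs, dis_pairs, d, lr=0.1, margin=1.0, steps=100):
    """Toy projected-gradient learner for d_M(x, y) = (x - y)^T M (x - y)."""
    M = np.eye(d)
    for _ in range(steps):
        grad = np.zeros((d, d))
        for x, y in sim_pairs:             # pull similar pairs together
            diff = x - y
            grad += np.outer(diff, diff)
        for x, y in dis_pairs:             # push dissimilar pairs past the margin
            diff = x - y
            if diff @ M @ diff < margin:
                grad -= np.outer(diff, diff)
        M = project_psd(M - lr * grad)     # gradient step, then PSD projection
    return M
```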