216,120 research outputs found
Deep Metric Learning and Image Classification with Nearest Neighbour Gaussian Kernels
We present a Gaussian kernel loss function and training algorithm for
convolutional neural networks that can be directly applied to both distance
metric learning and image classification problems. Our method treats all
training features from a deep neural network as Gaussian kernel centres and
computes loss by summing the influence of a feature's nearby centres in the
feature embedding space. Our approach is made scalable by treating it as an
approximate nearest neighbour search problem. We show how to make end-to-end
learning feasible, resulting in a well-formed embedding space, in which
semantically related instances are likely to be located near one another,
regardless of whether or not the network was trained on those classes. Our
approach outperforms state-of-the-art deep metric learning approaches on
embedding learning challenges, as well as conventional softmax classification
on several datasets.
Comment: Accepted at the International Conference on Image Processing (ICIP) 2018. Formerly titled "Nearest Neighbour Radial Basis Function Solvers for Deep Neural Network".
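The core idea above, scoring a feature by the summed Gaussian influence of nearby kernel centres of each class, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the bandwidth `sigma`, the function names, and the use of all centres rather than an approximate nearest neighbour subset are assumptions for clarity.

```python
import numpy as np

def gaussian_kernel_loss(feature, centres, centre_labels, target_label, sigma=1.0):
    """Negative log probability of the target class, where class probability
    is the fraction of total Gaussian kernel influence contributed by
    centres of that class (a simplified stand-in for the paper's loss)."""
    # squared Euclidean distances from the feature to every kernel centre
    d2 = np.sum((centres - feature) ** 2, axis=1)
    # Gaussian influence of each centre on this feature
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    # probability mass assigned to the target class
    p = w[centre_labels == target_label].sum() / (w.sum() + 1e-12)
    return -np.log(p + 1e-12)

# toy example: two centres of different classes
centres = np.array([[0.0, 0.0], [5.0, 5.0]])
labels = np.array([0, 1])
feature = np.array([0.1, 0.0])  # close to the class-0 centre
loss_correct = gaussian_kernel_loss(feature, centres, labels, target_label=0)
loss_wrong = gaussian_kernel_loss(feature, centres, labels, target_label=1)
```

In the full method the sum would run only over approximate nearest neighbour centres, which is what makes training scalable.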
Imitation Learning with Sinkhorn Distances
Imitation learning algorithms have been interpreted as variants of divergence
minimization problems. The ability to compare occupancy measures between
experts and learners is crucial to their effectiveness in learning from
demonstrations. In this paper, we present tractable solutions by formulating
imitation learning as minimization of the Sinkhorn distance between occupancy
measures. The formulation combines the valuable properties of optimal transport
metrics in comparing non-overlapping distributions with a cosine distance cost
defined in an adversarially learned feature space. This leads to a highly
discriminative critic network and optimal transport plan that subsequently
guide imitation learning. We evaluate the proposed approach using both the
reward metric and the Sinkhorn distance metric on a number of MuJoCo
experiments.
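The Sinkhorn distance named above is the entropy-regularized optimal transport cost between two empirical distributions. A minimal NumPy sketch with a cosine distance cost follows; the fixed feature space, the regularization strength `eps`, and the iteration count are illustrative assumptions (the paper learns the cost's feature space adversarially):

```python
import numpy as np

def sinkhorn_distance(X, Y, eps=0.1, n_iter=200):
    """Entropy-regularized OT cost between uniform empirical measures on
    rows of X and Y, with cosine distance cost (a simplified sketch)."""
    # cosine distance cost matrix: C[i, j] = 1 - cos(X[i], Y[j])
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    C = 1.0 - Xn @ Yn.T
    # Gibbs kernel and uniform marginals
    K = np.exp(-C / eps)
    a = np.full(len(X), 1.0 / len(X))
    b = np.full(len(Y), 1.0 / len(Y))
    # Sinkhorn iterations: alternate scaling to match the marginals
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]  # transport plan
    return float(np.sum(P * C)), P

# identical point sets should be much closer than opposed ones
X = np.array([[1.0, 0.0], [0.0, 1.0]])
d_same, _ = sinkhorn_distance(X, X)
d_diff, _ = sinkhorn_distance(X, -X)
```

In the imitation-learning setting, X and Y would hold expert and learner state-action features, and the transport plan P provides the signal that guides the policy update.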
Active Learning of Ordinal Embeddings: A User Study on Football Data
Humans innately measure distance between instances in an unlabeled dataset
using an unknown similarity function. Distance metrics can only serve as a
proxy for similarity in information retrieval of similar instances. Learning a good
similarity function from human annotations improves the quality of retrievals.
This work uses deep metric learning to learn these user-defined similarity
functions from few annotations for a large football trajectory dataset. We
adapt an entropy-based active learning method with recent work from triplet
mining to collect easy-to-answer but still informative annotations from human
participants and use them to train a deep convolutional network that
generalizes to unseen samples. Our user study shows that our approach improves
the quality of the information retrieval compared to a previous deep metric
learning approach that relies on a Siamese network. Specifically, we shed light
on the strengths and weaknesses of passive sampling heuristics and active
learners alike by analyzing the participants' response efficacy. To this end,
we collect accuracy, algorithmic time complexity, the participants' fatigue and
time-to-response, qualitative self-assessment and statements, as well as the
effects of mixed-expertise annotators and their consistency on model
performance and transfer learning.
Comment: 23 pages, 17 figures.