Semi-Supervised Learning with Scarce Annotations
While semi-supervised learning (SSL) algorithms provide an efficient way to
make use of both labelled and unlabelled data, they generally struggle when the
number of annotated samples is very small. In this work, we consider the
problem of SSL multi-class classification with very few labelled instances. We
introduce two key ideas. The first is a simple but effective one: we leverage
the power of transfer learning among different tasks and self-supervision to
initialize a good representation of the data without making use of any label.
The second idea is a new SSL algorithm that can effectively exploit such a
pre-trained representation.
The algorithm works by alternating two phases, one fitting the labelled
points and one fitting the unlabelled ones, with carefully-controlled
information flow between them. This greatly reduces overfitting of the
labelled data and avoids the issue of balancing labelled and unlabelled
losses during training. We show empirically that this method can successfully
train competitive models with as few as 10 labelled data points per class. More
generally, we show that the idea of bootstrapping features using
self-supervised learning consistently improves SSL on standard benchmarks. We show
that our algorithm works increasingly well compared to other methods when
refining from other tasks or datasets.
Comment: Workshop on Deep Vision, CVPR 202
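The abstract does not spell out the alternating procedure, but the general idea of switching between a labelled-fitting phase and an unlabelled-fitting phase, with information passing only through the model's predictions, can be sketched with a toy self-training-style loop. Everything here (the nearest-centroid classifier, the synthetic blobs, the number of iterations) is an illustrative assumption, not the paper's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data: two well-separated Gaussian blobs in 2D.
# Only 2 points per class are labelled, mimicking the scarce-annotation setting.
n_per_class = 100
X = np.vstack([rng.normal(-2.0, 1.0, size=(n_per_class, 2)),
               rng.normal(+2.0, 1.0, size=(n_per_class, 2))])
y = np.repeat([0, 1], n_per_class)
labelled = np.zeros(len(X), dtype=bool)
labelled[[0, 1, n_per_class, n_per_class + 1]] = True

def fit_centroids(X, y, classes):
    """Per-class mean: a stand-in for fitting a classifier head."""
    return np.stack([X[y == c].mean(axis=0) for c in classes])

def predict(X, centroids):
    """Assign each point to its nearest class centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
    return d.argmin(axis=1)

classes = np.unique(y[labelled])

# Phase 1: fit only the few labelled points.
centroids = fit_centroids(X[labelled], y[labelled], classes)

for _ in range(10):
    # Phase 2: fit all points using pseudo-labels from the current model.
    # Information flows to the unlabelled set only through predictions,
    # while labelled points keep their true labels.
    pseudo = predict(X, centroids)
    pseudo[labelled] = y[labelled]
    centroids = fit_centroids(X, pseudo, classes)

acc = (predict(X, centroids) == y).mean()
```

Because the unlabelled phase never sees the raw labels directly, the few labelled points cannot be overfit to the same degree as in a single joint loss, which is the intuition the abstract appeals to.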
Score Function Features for Discriminative Learning: Matrix and Tensor Framework
Feature learning forms the cornerstone for tackling challenging learning
problems in domains such as speech, computer vision and natural language
processing. In this paper, we consider a novel class of matrix and
tensor-valued features, which can be pre-trained using unlabeled samples. We
present efficient algorithms for extracting discriminative information, given
these pre-trained features and labeled samples for any related task. Our class
of features is based on higher-order score functions, which capture local
variations in the probability density function of the input. We establish a
theoretical framework to characterize the nature of discriminative information
that can be extracted from score-function features, when used in conjunction
with labeled samples. We employ efficient spectral decomposition algorithms (on
matrices and tensors) for extracting discriminative components. The advantage
of employing tensor-valued features is that we can extract richer
discriminative information in the form of an overcomplete representation.
Thus, we present a novel framework for employing generative models of the input
for discriminative learning.
Comment: 29 page
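For the matrix (second-order) case, the pipeline the abstract describes can be sketched under the simplifying assumption of standard Gaussian input, where the score functions have closed forms: S1(x) = x and S2(x) = xx^T - I. The label model y = (w.x)^2 and all variable names below are illustrative assumptions; the paper's framework covers general densities and higher-order tensors:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 200_000

# Hidden discriminative direction (unknown to the learner).
w = np.array([1.0, 2.0, 0.0, 0.0, 0.0])
w /= np.linalg.norm(w)

# Standard-Gaussian input: second-order score S2(x) = x x^T - I.
X = rng.standard_normal((n, d))
y = (X @ w) ** 2  # labels depend on the input only through w.x

# Matrix score-function feature: M = E[y * S2(x)].
# By Stein's identity, M = E[grad^2_x y] = 2 w w^T for this label model,
# so its spectrum encodes the discriminative subspace.
S2 = np.einsum('ni,nj->nij', X, X) - np.eye(d)
M = np.einsum('n,nij->ij', y, S2) / n

# Spectral decomposition extracts the discriminative component.
eigvals, eigvecs = np.linalg.eigh(M)
w_hat = eigvecs[:, np.argmax(np.abs(eigvals))]
alignment = abs(w_hat @ w)  # near 1: recovered w up to sign
```

The key point the sketch shows is that the score-function feature S2(x) can be pre-computed from unlabeled samples alone (it depends only on the input density), while the labels enter only through the cheap averaging step that forms M.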