Unsupervised feature learning with discriminative encoder
In recent years, deep discriminative models have achieved extraordinary
performance on supervised learning tasks, significantly outperforming their
generative counterparts. However, their success relies on the presence of a
large amount of labeled data. How can one use the same discriminative models
for learning useful features in the absence of labels? We address this question
in this paper, by jointly modeling the distribution of data and latent features
in a manner that explicitly assigns zero probability to unobserved data. Rather
than maximizing the marginal probability of observed data, we maximize the
joint probability of the data and the latent features using a two-step EM-like
procedure. To prevent the model from overfitting to our initial selection of
latent features, we use adversarial regularization. Depending on the task, we
allow the latent features to be one-hot or real-valued vectors and define a
suitable prior on the features. For instance, one-hot features correspond to
class labels and are directly used for the unsupervised and semi-supervised
classification task, whereas real-valued feature vectors are fed as input to
simple classifiers for auxiliary supervised discrimination tasks. The proposed
model, which we dub discriminative encoder (or DisCoder), is flexible in the
type of latent features that it can capture. The proposed model achieves
state-of-the-art performance on several challenging tasks.
Comment: 10 pages, 4 figures, International Conference on Data Mining, 201
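The two-step EM-like procedure described in the abstract might be sketched as follows. This is an illustrative toy reconstruction on synthetic data, not the authors' implementation: the encoder is a plain linear softmax, the adversarial regularizer is omitted, and all names (`softmax`, `codes`, the hyperparameters) are our own.

```python
# Hypothetical sketch of the two-step EM-like procedure: alternately
# assign one-hot latent features, then maximize the joint probability
# of data and assigned features. Adversarial regularization is omitted.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated 2-D blobs.
X = np.vstack([rng.normal(-2.0, 0.3, size=(50, 2)),
               rng.normal(+2.0, 0.3, size=(50, 2))])

n_classes, lr = 2, 0.5
W = rng.normal(0.0, 0.1, size=(2, n_classes))  # linear encoder weights

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for _ in range(50):
    # Step 1 (E-like): pick the most likely one-hot latent feature per sample.
    probs = softmax(X @ W)
    Y = np.eye(n_classes)[probs.argmax(axis=1)]
    # Step 2 (M-like): gradient ascent on the joint log-probability of the
    # data and the assigned one-hot codes (cross-entropy on own assignments).
    grad = X.T @ (Y - probs) / len(X)
    W += lr * grad

# One-hot features play the role of class labels, as in the abstract.
codes = softmax(X @ W).argmax(axis=1)
print(codes.shape)
```

The self-labeling loop illustrates why some regularization (adversarial, in the paper) is needed: the model otherwise simply reinforces whatever assignment it starts from.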
SEVEN: Deep Semi-supervised Verification Networks
Verification determines whether two samples belong to the same class or not.
It has important applications such as face and fingerprint verification, where
thousands or millions of categories are present but each category has scarce
labeled examples. These two characteristics pose major challenges for existing deep learning
models. We propose a deep semi-supervised model named SEmi-supervised
VErification Network (SEVEN) to address these challenges. The model consists of
two complementary components. The generative component addresses the lack of
supervision within each category by learning general salient structures from a
large amount of data across categories. The discriminative component exploits
the learned general features to mitigate the lack of supervision within
categories, and also directs the generative component to find more informative
structures of the whole data manifold. The two components are tied together in
SEVEN to allow an end-to-end training of the two components. Extensive
experiments on four verification tasks demonstrate that SEVEN significantly
outperforms other state-of-the-art deep semi-supervised techniques when labeled
data are in short supply. Furthermore, SEVEN is competitive with fully
supervised baselines trained with a larger amount of labeled data, which
indicates the importance of the generative component in SEVEN.
Comment: 7 pages, 2 figures, accepted to the 2017 International Joint Conference on Artificial Intelligence (IJCAI-17)
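The tying of SEVEN's two components can be pictured as a single joint loss over a shared encoder. The sketch below is our reading of the abstract, not the paper's code: a reconstruction term stands in for the generative component, a contrastive verification term for the discriminative one, and `lam`, `margin`, and the toy linear encoder/decoder are all illustrative assumptions.

```python
# Illustrative joint loss: generative (reconstruction) + discriminative
# (contrastive verification) terms tied through one shared encoder.
import numpy as np

rng = np.random.default_rng(1)

def seven_style_loss(x1, x2, same, encode, decode, lam=0.5, margin=1.0):
    """Sketch of a SEVEN-style objective for one pair of samples."""
    z1, z2 = encode(x1), encode(x2)
    # Generative term: how well the shared encoder preserves the inputs.
    recon = np.mean((decode(z1) - x1) ** 2) + np.mean((decode(z2) - x2) ** 2)
    # Discriminative term: pull same-class pairs together, push
    # different-class pairs at least `margin` apart.
    d = np.linalg.norm(z1 - z2)
    contrastive = same * d ** 2 + (1 - same) * max(0.0, margin - d) ** 2
    return lam * recon + contrastive

# Toy linear encoder/decoder with tied weights, for demonstration only.
W = rng.normal(size=(4, 2))
encode = lambda x: x @ W
decode = lambda z: z @ W.T

x = rng.normal(size=4)
loss_same = seven_style_loss(x, x + 0.01, 1, encode, decode)
loss_diff = seven_style_loss(x, -x, 0, encode, decode)
print(loss_same, loss_diff)
```

Because both terms share the encoder, gradients from the verification term also shape the generative features, matching the abstract's claim that the discriminative component "directs" the generative one.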
Adversarial Variational Embedding for Robust Semi-supervised Learning
Semi-supervised learning leverages unlabelled data when labelled data are
difficult or expensive to acquire. Deep generative models (e.g., the
Variational Autoencoder (VAE)) and semi-supervised Generative Adversarial
Networks (GANs) have recently shown promising performance in semi-supervised
classification owing to their excellent discriminative representation ability.
However, the latent code learned by a traditional VAE is not exclusive
(repeatable) for a specific input sample, which prevents it from achieving
excellent classification performance. In particular, the learned latent
representation depends on a non-exclusive component that is stochastically
sampled from the prior distribution. Moreover, semi-supervised GAN models
generate data from a pre-defined distribution (e.g., Gaussian noise) that is
independent of the input data distribution; this may obstruct convergence and
makes the distribution of the generated data difficult to control. To address the aforementioned
issues, we propose a novel Adversarial Variational Embedding (AVAE) framework
for robust and effective semi-supervised learning that leverages both the
advantage of the GAN as a high-quality generative model and that of the VAE as
a posterior distribution learner. The proposed approach first produces an
exclusive latent code with a model we call VAE++ and, at the same time,
provides a meaningful prior distribution for the generator of the GAN. The
proposed approach is evaluated on four different real-world applications, and
we show that our method outperforms state-of-the-art models, confirming that
the combination of VAE++ and GAN provides significant improvements in
semi-supervised classification.
Comment: 9 pages, Accepted by Research Track in KDD 201
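The "exclusive" versus "non-exclusive" latent code distinction in the abstract can be made concrete. The sketch below is our interpretation, not the paper's VAE++ implementation: a toy Gaussian posterior where the traditional reparameterized code changes between encodings of the same input, while a deterministic code (here, simply the posterior mean) repeats. All weights and function names are hypothetical.

```python
# Illustrative contrast: stochastic VAE code vs. a repeatable
# ("exclusive") code that could serve as a prior for a GAN generator.
import numpy as np

rng = np.random.default_rng(2)

# Toy encoder producing a Gaussian posterior q(z|x) = N(mu(x), sigma(x)^2).
W_mu, W_sig = rng.normal(size=(3, 2)), rng.normal(size=(3, 2))

def posterior(x):
    return x @ W_mu, np.exp(0.5 * (x @ W_sig))  # mu, sigma

def vae_code(x):
    # Traditional VAE: the code includes stochastically sampled noise,
    # so two encodings of the same x generally differ.
    mu, sigma = posterior(x)
    return mu + sigma * rng.normal(size=mu.shape)

def exclusive_code(x):
    # VAE++-style "exclusive" code, as we read the abstract: drop the
    # stochastic component so each input maps to one repeatable code.
    mu, _ = posterior(x)
    return mu

x = rng.normal(size=3)
print(np.allclose(vae_code(x), vae_code(x)))              # almost surely False
print(np.allclose(exclusive_code(x), exclusive_code(x)))  # True
```

A repeatable code per input is what allows it to double as a meaningful, data-dependent prior for the GAN generator, rather than the input-independent Gaussian noise the abstract criticizes.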