Deep Neural Networks - A Brief History
Introduction to deep neural networks and their history.
Comment: 14 pages, 14 figures
Distributionally Robust Semi-Supervised Learning for People-Centric Sensing
Semi-supervised learning is crucial for alleviating labeling burdens in
people-centric sensing. However, human-generated data inherently suffer from
distribution shift in semi-supervised learning due to the diverse biological
conditions and behavior patterns of humans. To address this problem, we propose
a generic distributionally robust model for semi-supervised learning on
distributionally shifted data. Considering both the discrepancy and the
consistency between the labeled data and the unlabeled data, we learn
latent features that reduce person-specific discrepancy and preserve
task-specific consistency. We evaluate our model on a variety of people-centric
recognition tasks on real-world datasets, including intention recognition,
activity recognition, muscular movement recognition, and gesture recognition.
The experimental results demonstrate that the proposed model outperforms
state-of-the-art methods.
Comment: 8 pages, accepted by AAAI201
Adversarial Variational Embedding for Robust Semi-supervised Learning
Semi-supervised learning is sought for leveraging unlabelled data when
labelled data are difficult or expensive to acquire. Deep generative models
(e.g., the Variational Autoencoder (VAE)) and semi-supervised Generative
Adversarial Networks (GANs) have recently shown promising performance in
semi-supervised classification owing to their excellent discriminative
representation ability. However, the latent code learned by a traditional VAE
is not exclusive (repeatable) for a specific input sample, which prevents it
from achieving excellent classification performance. In particular, the learned
latent representation depends on a non-exclusive component that is
stochastically sampled from the prior distribution. Moreover, semi-supervised
GAN models generate data from a pre-defined distribution (e.g., Gaussian noise)
that is independent of the input data distribution, which may obstruct
convergence and makes it difficult to control the distribution of the generated
data. To address these issues, we propose a novel Adversarial Variational
Embedding (AVAE) framework for robust and effective semi-supervised learning,
which leverages both the advantage of a GAN as a high-quality generative model
and that of a VAE as a posterior distribution learner. The proposed approach
first produces an exclusive latent code via a model we call VAE++ and,
meanwhile, provides a meaningful prior distribution for the generator of the
GAN. The proposed approach is evaluated on four different real-world
applications, and we show that our method outperforms state-of-the-art models,
which confirms that the combination of VAE++ and GAN can provide significant
improvements in semi-supervised classification.
Comment: 9 pages, accepted by the Research Track in KDD 201