Truncated Variational EM for Semi-Supervised Neural Simpletrons
Inference and learning for probabilistic generative networks are often very
challenging, which typically prevents scaling to networks as large as those used
in deep discriminative approaches. To obtain efficiently trainable, large-scale,
and well-performing generative networks for semi-supervised learning, we here
combine two recent developments: a neural network reformulation of hierarchical
Poisson mixtures (Neural Simpletrons), and a novel truncated variational EM
approach (TV-EM). TV-EM provides theoretical guarantees for learning in
generative networks, and its application to Neural Simpletrons results in
particularly compact, yet approximately optimal, modifications of learning
equations. When applied to standard benchmarks, we empirically find that
learning converges in fewer EM iterations, that the complexity per EM iteration
is reduced, and that final likelihood values are higher on average. For the
task of classification on data sets with few labels, these learning improvements
result in consistently lower error rates compared to applications without
truncation. Experiments on the MNIST data set allow for comparison to
standard and state-of-the-art models in the semi-supervised setting. Further
experiments on the NIST SD19 data set show the scalability of the approach when
a large amount of additional unlabeled data is available.
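As a rough illustration of the truncation idea, the sketch below shows a truncated variational E-step for a plain Poisson mixture: each data point keeps responsibilities only for its K' best-scoring components and renormalizes over that subset. This is a generic sketch under simplifying assumptions, not the Neural Simpletron or TV-EM update equations from the paper; names such as truncated_e_step and K_prime are hypothetical.

```python
import numpy as np

def truncated_e_step(X, W, pi, K_prime):
    """One truncated variational E-step for a Poisson mixture (illustrative sketch).

    Instead of the full posterior over all C components, each data point keeps
    only its K_prime best components; all other responsibilities are set to zero.
    X: (N, D) non-negative count data, W: (C, D) Poisson rates, pi: (C,) mixture weights.
    """
    # log joint log p(x_n, c) = log pi_c + sum_d [x_nd * log W_cd - W_cd]
    # (the x! term is dropped because it is constant in c)
    log_joint = (X @ np.log(W).T) - W.sum(axis=1) + np.log(pi)  # (N, C)

    # Truncation: keep only the K_prime highest-scoring components per data point.
    idx = np.argpartition(-log_joint, K_prime - 1, axis=1)[:, :K_prime]  # (N, K')
    mask = np.zeros_like(log_joint, dtype=bool)
    np.put_along_axis(mask, idx, True, axis=1)

    # Responsibilities normalized only over the retained components.
    log_joint_trunc = np.where(mask, log_joint, -np.inf)
    log_joint_trunc -= log_joint_trunc.max(axis=1, keepdims=True)
    resp = np.exp(log_joint_trunc)
    resp /= resp.sum(axis=1, keepdims=True)
    return resp  # (N, C), sparse: at most K_prime non-zeros per row
```

The sparsity of the responsibilities is what reduces the per-iteration cost: the subsequent M-step only needs to touch the retained components for each data point.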
Lightweight Probabilistic Deep Networks
Even though probabilistic treatments of neural networks have a long history,
they have not found widespread use in practice. Sampling approaches are often
too slow even for simple networks. The size of the inputs and the depth of
typical CNN architectures in computer vision only compound this problem.
Uncertainty in neural networks has thus been largely ignored in practice,
despite the fact that it may provide important information about the
reliability of predictions and the inner workings of the network. In this
paper, we introduce two lightweight approaches to making supervised learning
with probabilistic deep networks practical: First, we suggest probabilistic
output layers for classification and regression that require only minimal
changes to existing networks. Second, we employ assumed density filtering and
show that activation uncertainties can be propagated in a practical fashion
through the entire network, again with minor changes. Both probabilistic
networks retain the predictive power of the deterministic counterpart, but
yield uncertainties that correlate well with the empirical error induced by
their predictions. Moreover, the robustness to adversarial examples is
significantly increased.
Comment: To appear at CVPR 201
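As a generic illustration of propagating activation uncertainty in the spirit of assumed density filtering (not the paper's exact implementation), the sketch below pushes an independent-Gaussian approximation of the activations through a linear layer and a ReLU via moment matching; the function names adf_linear and adf_relu are hypothetical.

```python
import torch
from torch.distributions import Normal

def adf_linear(mean, var, weight, bias):
    """Propagate an independent-Gaussian activation (mean, var) through a linear layer."""
    out_mean = mean @ weight.t() + bias
    out_var = var @ (weight ** 2).t()          # assumes uncorrelated input dimensions
    return out_mean, out_var

def adf_relu(mean, var, eps=1e-8):
    """Moment-matched Gaussian approximation of ReLU(x) for x ~ N(mean, var)."""
    std = torch.sqrt(var + eps)
    z = mean / std
    standard_normal = Normal(0.0, 1.0)
    cdf = standard_normal.cdf(z)
    pdf = torch.exp(standard_normal.log_prob(z))
    out_mean = mean * cdf + std * pdf                       # E[ReLU(x)]
    out_sq = (mean ** 2 + var) * cdf + mean * std * pdf     # E[ReLU(x)^2]
    out_var = torch.clamp(out_sq - out_mean ** 2, min=eps)
    return out_mean, out_var
```

Chaining such layer-wise updates passes a mean and a variance for every activation through the whole network, so the output layer receives an uncertainty estimate alongside the prediction.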
Variational Deep Semantic Hashing for Text Documents
As the amount of textual data has been rapidly increasing over the past
decade, efficient similarity search methods have become a crucial component of
large-scale information retrieval systems. A popular strategy is to represent
original data samples by compact binary codes through hashing. A spectrum of
machine learning methods have been utilized, but they often lack expressiveness
and flexibility in modeling to learn effective representations. The recent
advances of deep learning in a wide range of applications have demonstrated its
capability to learn robust and powerful feature representations for complex
data. In particular, deep generative models naturally combine the expressiveness
of probabilistic generative models with the high capacity of deep neural
networks, which is very suitable for text modeling. However, little work has
leveraged the recent progress in deep learning for text hashing.
In this paper, we propose a series of novel deep document generative models
for text hashing. The first proposed model is unsupervised while the second one
is supervised by utilizing document labels/tags for hashing. The third model
further considers document-specific factors that affect the generation of
words. The probabilistic generative formulation of the proposed models provides
a principled framework for model extension, uncertainty estimation, simulation,
and interpretability. Based on variational inference and reparameterization,
the proposed models can be interpreted as encoder-decoder deep neural networks
and thus they are capable of learning complex nonlinear distributed
representations of the original documents. We conduct a comprehensive set of
experiments on four public testbeds. The experimental results have demonstrated
the effectiveness of the proposed supervised learning models for text hashing.
Comment: 11 pages, 4 figures
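For readers unfamiliar with the encoder-decoder view, the sketch below shows a minimal VAE-style document hasher using the reparameterization trick, with hash codes obtained by thresholding the latent means. This is an illustrative toy that assumes bag-of-words inputs, not the architecture proposed in the paper; the class name TextVAEHasher and all layer sizes are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextVAEHasher(nn.Module):
    """Minimal VAE-style document hasher (illustrative sketch, not the paper's model)."""
    def __init__(self, vocab_size, latent_dim=32):
        super().__init__()
        self.enc = nn.Linear(vocab_size, 256)
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Linear(latent_dim, vocab_size)

    def forward(self, bow):                       # bow: (N, vocab_size) word counts
        h = F.relu(self.enc(bow))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        logits = self.dec(z)                      # softmax over vocabulary reconstructs words
        recon = -(bow * F.log_softmax(logits, dim=-1)).sum(-1).mean()
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return recon + kl

    @torch.no_grad()
    def hash_codes(self, bow):
        """Binarize the latent mean, e.g. by thresholding at its per-dimension median."""
        mu = self.mu(F.relu(self.enc(bow)))
        return (mu > mu.median(dim=0).values).to(torch.uint8)
```

The encoder plays the role of the variational posterior and the decoder the generative model; binarizing the posterior means yields the compact codes used for similarity search.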
Multi-annotator Deep Learning: A Probabilistic Framework for Classification
Solving complex classification tasks using deep neural networks typically
requires large amounts of annotated data. However, corresponding class labels
are noisy when provided by error-prone annotators, e.g., crowd workers.
Training standard deep neural networks leads to subpar performance in such
multi-annotator supervised learning settings. We address this issue by
presenting a probabilistic training framework named multi-annotator deep
learning (MaDL). A ground truth and an annotator performance model are jointly
trained in an end-to-end learning approach. The ground truth model learns to
predict instances' true class labels, while the annotator performance model
infers probabilistic estimates of annotators' performances. A modular network
architecture enables us to make varying assumptions regarding annotators'
performances, e.g., an optional class or instance dependency. Further, we learn
annotator embeddings to estimate annotators' densities within a latent space as
proxies of their potentially correlated annotations. Together with a weighted
loss function, these embeddings improve learning from correlated annotation patterns. In
a comprehensive evaluation, we examine three research questions about
multi-annotator supervised learning. Our findings indicate MaDL's
state-of-the-art performance and robustness against many correlated, spamming
annotators.
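As a generic sketch of jointly training a ground-truth classifier with per-annotator noise models (not the MaDL architecture itself, which additionally uses annotator embeddings and a weighted loss), the example below marginalizes the predicted true class through per-annotator confusion matrices to score the observed noisy labels; all names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadAnnotatorModel(nn.Module):
    """Illustrative joint model: a ground-truth classifier plus per-annotator confusion matrices."""
    def __init__(self, feat_dim, n_classes, n_annotators):
        super().__init__()
        self.gt = nn.Linear(feat_dim, n_classes)                          # ground truth model
        self.confusion = nn.Parameter(torch.stack(                        # annotator performance model
            [torch.eye(n_classes) * 4.0 for _ in range(n_annotators)]))   # initialized near identity

    def loss(self, x, noisy_labels, annot_ids):
        """x: (B, feat_dim), noisy_labels: (B,) labels given by the annotators in annot_ids: (B,)."""
        p_true = F.softmax(self.gt(x), dim=-1)                            # (B, C)  p(y | x)
        conf = F.softmax(self.confusion[annot_ids], dim=-1)               # (B, C, C)  p(y_noisy | y, annotator)
        p_noisy = torch.einsum('bc,bck->bk', p_true, conf)                # marginalize over the true class
        return F.nll_loss(torch.log(p_noisy + 1e-8), noisy_labels)
```

Training end to end on the noisy labels lets the classifier and the confusion matrices explain the annotations jointly, so the classifier can be queried alone at test time for the true class.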