Generalization Error in Deep Learning
Deep learning models have recently achieved strong performance in fields such
as computer vision, speech recognition, speech translation, and natural
language processing. Despite this state-of-the-art performance, however, the
source of their generalization ability remains largely unclear. An important
question is therefore what enables deep neural networks to generalize well
from the training set to new data. In this article, we provide an overview of
the existing theory and bounds for characterizing the generalization error of
deep neural networks, combining both classical and more recent theoretical and
empirical results.
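The generalization error discussed above is usually measured empirically as the gap between held-out error and training error. A toy illustration (not taken from the survey; the sinusoidal target, noise level, and polynomial degree are all illustrative assumptions) shows how a high-capacity model can make this gap large:

```python
# Toy illustration of the empirical generalization gap: the difference
# between a model's error on held-out data and on its training set.
# A high-degree polynomial fit to few noisy points typically overfits,
# producing low training error but higher test error.
import numpy as np

rng = np.random.default_rng(1)
x_train = rng.uniform(-1, 1, 20)
y_train = np.sin(3 * x_train) + rng.normal(0, 0.1, 20)
x_test = rng.uniform(-1, 1, 200)
y_test = np.sin(3 * x_test) + rng.normal(0, 0.1, 200)

# High-capacity model: degree-15 polynomial fit to only 20 points.
coeffs = np.polyfit(x_train, y_train, deg=15)
train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
gap = test_mse - train_mse  # empirical generalization gap
```

The theoretical bounds surveyed in the article aim to predict or upper-bound this gap from properties of the model and the training procedure, rather than from a held-out set.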
Discriminative Nonparametric Latent Feature Relational Models with Data Augmentation
We present a discriminative nonparametric latent feature relational model
(LFRM) for link prediction to automatically infer the dimensionality of latent
features. Under the generic RegBayes (regularized Bayesian inference)
framework, we handily incorporate the prediction loss with probabilistic
inference of a Bayesian model; set distinct regularization parameters for
different types of links to handle the imbalance issue in real networks; and
unify the analysis of both the smooth logistic log-loss and the piecewise
linear hinge loss. For the nonconjugate posterior inference, we present a
simple Gibbs sampler via data augmentation, without making the restrictive
assumptions used in variational methods. We further develop an approximate
sampler using stochastic gradient Langevin dynamics to handle large networks
with hundreds of thousands of entities and millions of links, orders of
magnitude larger than what existing LFRM models can process. Extensive studies
on various real networks show promising performance.
Comment: Accepted by AAAI 201
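The stochastic gradient Langevin dynamics (SGLD) sampler mentioned above updates parameters with a minibatch gradient of the log-posterior plus injected Gaussian noise. A minimal sketch, assuming a toy 1-D Gaussian-mean model rather than the paper's LFRM (the prior, step-size schedule, and batch size are illustrative assumptions):

```python
# Minimal SGLD sketch: sample the mean `theta` of a 1-D Gaussian with
# known unit variance under a N(0, 10) prior, using minibatch gradients.
# Update: theta += (eps/2) * (grad log prior + (N/n) * grad minibatch
# log-likelihood) + N(0, eps) noise, with a decaying step size eps.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=1000)  # synthetic observations
N, batch = len(data), 50

def grad_log_prior(theta):
    return -theta / 10.0  # d/dtheta of log N(theta; 0, 10)

def grad_log_lik(theta, x):
    return np.sum(x - theta)  # d/dtheta of sum_i log N(x_i; theta, 1)

theta, samples = 0.0, []
for t in range(2000):
    eps = 0.001 / (1 + t) ** 0.55  # decaying step size
    idx = rng.choice(N, batch, replace=False)
    grad = grad_log_prior(theta) + (N / batch) * grad_log_lik(theta, data[idx])
    theta += 0.5 * eps * grad + rng.normal(0.0, np.sqrt(eps))
    samples.append(theta)

post_mean = np.mean(samples[500:])  # posterior mean estimate after burn-in
```

The key property exploited in the paper is that each update touches only a minibatch of links rather than the whole network, which is what lets the sampler scale to millions of links.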