In All Likelihood, Deep Belief Is Not Enough
Statistical models of natural stimuli provide an important tool for
researchers in the fields of machine learning and computational neuroscience. A
canonical way to quantitatively assess and compare the performance of
statistical models is the likelihood. One class of statistical models that has
recently gained popularity and has been applied to a variety of complex data is
the deep belief network. Analyses of these models, however, have typically been
limited to qualitative assessments based on samples, because the model
likelihood is computationally intractable.
Motivated by these circumstances, the present article provides a consistent
estimator for the likelihood that is both computationally tractable and simple
to apply in practice. Using this estimator, a deep belief network which has
been suggested for the modeling of natural image patches is quantitatively
investigated and compared to other models of natural image patches. Contrary to
earlier claims based on qualitative results, the results presented in this
article provide evidence that the model under investigation is not a
particularly good model of natural image patches.
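The paper's estimator itself is not reproduced above; as a minimal illustration of the underlying idea, the sketch below estimates the likelihood of a toy sigmoid belief network by Monte Carlo, using the identity p(v) = E_{h~p(h)}[p(v|h)]. All network sizes and weights are hypothetical, and the toy model is small enough that the estimate can be checked against the exact likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sigmoid belief network (hypothetical parameters): binary hiddens
# h ~ Bernoulli(0.5)^H, binary visibles with p(v_i = 1 | h) = sigmoid(W h + b)_i.
H, V = 5, 8
W = rng.normal(scale=0.5, size=(V, H))
b = rng.normal(scale=0.1, size=V)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cond_likelihood(v, h):
    """p(v | h) under the factorial Bernoulli output distribution."""
    p = sigmoid(W @ h + b)
    return np.prod(np.where(v == 1, p, 1.0 - p))

def estimate_likelihood(v, n_samples=20000):
    """Consistent (and here unbiased) Monte Carlo estimate of
    p(v) = E_{h ~ p(h)}[p(v | h)], sampling hiddens from the prior."""
    hs = rng.integers(0, 2, size=(n_samples, H))
    p = sigmoid(hs @ W.T + b)                       # (n_samples, V)
    lik = np.prod(np.where(v == 1, p, 1.0 - p), axis=1)
    return lik.mean()

def exact_likelihood(v):
    """Exact p(v) by brute-force summation over all 2^H hidden states
    (only feasible for this toy size)."""
    total = 0.0
    for bits in range(2 ** H):
        h = np.array([(bits >> i) & 1 for i in range(H)])
        total += 0.5 ** H * cond_likelihood(v, h)
    return total

v = rng.integers(0, 2, size=V)
est, exact = estimate_likelihood(v), exact_likelihood(v)
```

The estimator converges to the exact value as the sample count grows; in deep models the same marginalization is what becomes intractable and must be replaced by a more careful scheme such as the one the paper proposes.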
Generative Image Modeling Using Spatial LSTMs
Modeling the distribution of natural images is challenging, partly because of
strong statistical dependencies which can extend over hundreds of pixels.
Recurrent neural networks have been successful in capturing long-range
dependencies in a number of problems but only recently have found their way
into generative image models. We here introduce a recurrent image model based
on multi-dimensional long short-term memory units which are particularly suited
for image modeling due to their spatial structure. Our model scales to images
of arbitrary size and its likelihood is computationally tractable. We find that
it outperforms the state of the art in quantitative comparisons on several
image datasets and produces promising results when used for texture synthesis
and inpainting.
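The paper's spatial LSTM is not spelled out above; the following is only a generic sketch, with hypothetical weights and a plain tanh recurrence instead of LSTM gates, of the two-dimensional recurrence involved: the state at each pixel is computed from its left and top neighbours, so every prediction depends only on pixels above and to the left, giving a causal image model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters for a minimal 2-D recurrence in the spirit of
# multi-dimensional RNNs; a real spatial LSTM would add gating units.
D = 4                                    # hidden state size
Wl = rng.normal(scale=0.3, size=(D, D))  # left-neighbour weights
Wt = rng.normal(scale=0.3, size=(D, D))  # top-neighbour weights
wx = rng.normal(scale=0.3, size=D)       # input (pixel) weights
wo = rng.normal(scale=0.3, size=D)       # readout weights

def sweep(image):
    """One top-left-to-bottom-right sweep; returns a prediction for each
    pixel that uses only pixels above and to the left of it."""
    H, W = image.shape
    h = np.zeros((H + 1, W + 1, D))      # zero-padded state grid
    pred = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            # state of pixel (i, j-1) lives at h[i+1, j],
            # state of pixel (i-1, j) lives at h[i, j+1]
            context = Wl @ h[i + 1, j] + Wt @ h[i, j + 1]
            pred[i, j] = wo @ np.tanh(context)   # causal: excludes image[i, j]
            h[i + 1, j + 1] = np.tanh(context + wx * image[i, j])
    return pred

img = rng.random((6, 6))
p = sweep(img)
```

Because the recurrence only looks up and left, the same sweep works for images of arbitrary size, which is one reason the likelihood of such models factorizes tractably over pixels.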
Layer-wise learning of deep generative models
When using deep, multi-layered architectures to build generative models of
data, it is difficult to train all layers at once. We propose a layer-wise
training procedure admitting a performance guarantee compared to the global
optimum. It is based on an optimistic proxy of future performance, the best
latent marginal. We interpret auto-encoders in this setting as generative
models, by showing that they train a lower bound of this criterion. We test the
new learning procedure against a state-of-the-art method (stacked RBMs), and
find it to improve performance. Both theory and experiments highlight the
importance, when training deep architectures, of using an inference model (from
data to hidden variables) richer than the generative model (from hidden
variables to data).
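The best-latent-marginal criterion itself is not shown above; as a purely structural sketch of greedy layer-wise training, the code below fits one linear layer at a time (PCA standing in for per-layer training, since it is the optimum of a linear autoencoder), freezes it, and maps the data through before fitting the next layer. Layer sizes and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_layer(X, k):
    """Fit one linear 'layer' as the top-k principal directions of X;
    returns encoder weights of shape (k, d)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k]

def greedy_stack(X, sizes):
    """Layer-wise procedure: fit layer 1 on the data, freeze it, map the
    data through, fit layer 2 on those codes, and so on."""
    layers, H = [], X
    for k in sizes:
        W = fit_layer(H, k)
        layers.append(W)
        H = H @ W.T                      # representation fed to next layer
    return layers, H

X = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 10))
layers, codes = greedy_stack(X, sizes=[6, 3])
```

The paper's point is about what objective each per-layer fit should optimize; this sketch only shows the train-freeze-transform loop that any greedy layer-wise scheme shares.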
Representation Learning: A Review and New Perspectives
The success of machine learning algorithms generally depends on data
representation, and we hypothesize that this is because different
representations can entangle and hide more or less the different explanatory
factors of variation behind the data. Although specific domain knowledge can be
used to help design representations, learning with generic priors can also be
used, and the quest for AI is motivating the design of more powerful
representation-learning algorithms implementing such priors. This paper reviews
recent work in the area of unsupervised feature learning and deep learning,
covering advances in probabilistic models, auto-encoders, manifold learning,
and deep networks. This motivates longer-term unanswered questions about the
appropriate objectives for learning good representations, for computing
representations (i.e., inference), and the geometrical connections between
representation learning, density estimation, and manifold learning.
Lightweight Probabilistic Deep Networks
Even though probabilistic treatments of neural networks have a long history,
they have not found widespread use in practice. Sampling approaches are often
too slow even for simple networks, and the size of the inputs and the depth of
typical CNN architectures in computer vision only compound the problem.
Uncertainty in neural networks has thus been largely ignored in practice,
despite the fact that it may provide important information about the
reliability of predictions and the inner workings of the network. In this
paper, we introduce two lightweight approaches to making supervised learning
with probabilistic deep networks practical: First, we suggest probabilistic
output layers for classification and regression that require only minimal
changes to existing networks. Second, we employ assumed density filtering and
show that activation uncertainties can be propagated in a practical fashion
through the entire network, again with minor changes. Both probabilistic
networks retain the predictive power of the deterministic counterpart, but
yield uncertainties that correlate well with the empirical error induced by
their predictions. Moreover, robustness to adversarial examples is
significantly increased. Comment: To appear at CVPR 201
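Assumed density filtering, as described above, matches the distribution after each layer with a Gaussian by computing the output mean and variance in closed form. For a ReLU applied to a Gaussian input these moments are well known; the sketch below checks them against Monte Carlo. This is only a single-activation illustration, not the paper's full network propagation.

```python
import math
import numpy as np

def relu_moments(mu, var):
    """Mean and variance of max(0, x) for x ~ N(mu, var), i.e. the
    moments an ADF step would match with its next Gaussian."""
    sigma = math.sqrt(var)
    a = mu / sigma
    cdf = 0.5 * (1.0 + math.erf(a / math.sqrt(2.0)))       # Phi(a)
    pdf = math.exp(-0.5 * a * a) / math.sqrt(2.0 * math.pi)  # phi(a)
    mean = mu * cdf + sigma * pdf
    second = (mu * mu + var) * cdf + mu * sigma * pdf       # E[relu(x)^2]
    return mean, max(second - mean * mean, 0.0)

# Monte Carlo sanity check of the closed form
rng = np.random.default_rng(3)
x = rng.normal(0.5, 1.0, size=1_000_000)
m, v = relu_moments(0.5, 1.0)
```

Linear layers are handled similarly (means transform linearly, variances through squared weights under an independence assumption), which is what lets uncertainty flow through the whole network with only minor changes.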