Relaxations for inference in restricted Boltzmann machines
We propose a relaxation-based approximate inference algorithm that samples
near-MAP configurations of a binary pairwise Markov random field. We experiment
on MAP inference tasks in several restricted Boltzmann machines. We also use
our underlying sampler to estimate the log-partition function of restricted
Boltzmann machines and compare against other sampling-based methods.
Comment: ICLR 2014 workshop track submission
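The relaxation itself is not detailed in this abstract. As a point of reference, here is a minimal NumPy sketch of a standard baseline for the same task, annealed block Gibbs sampling toward a near-MAP RBM configuration; the annealing schedule and the final greedy rounding are illustrative assumptions, not the authors' method.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def annealed_gibbs_map(W, b, c, n_steps=2000, seed=0):
    """Baseline near-MAP search for an RBM with energy
    E(v, h) = -b @ v - c @ h - v @ W @ h, using block Gibbs
    sampling at an increasing inverse temperature."""
    rng = np.random.default_rng(seed)
    n_vis, n_hid = W.shape
    v = (rng.random(n_vis) < 0.5).astype(float)        # random start
    for beta in np.linspace(0.1, 5.0, n_steps):        # assumed schedule
        h = (rng.random(n_hid) < sigmoid(beta * (c + v @ W))).astype(float)
        v = (rng.random(n_vis) < sigmoid(beta * (b + W @ h))).astype(float)
    h = (c + v @ W > 0).astype(float)                  # greedy rounding
    v = (b + W @ h > 0).astype(float)
    return v, h
```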
Inferring Sparsity: Compressed Sensing using Generalized Restricted Boltzmann Machines
In this work, we consider compressed sensing reconstruction from M
measurements of K-sparse structured signals which do not possess a writable
correlation model. Assuming that a generative statistical model, such as a
Boltzmann machine, can be trained in an unsupervised manner on example signals,
we demonstrate how this signal model can be used within a Bayesian framework of
signal reconstruction. By deriving a message-passing inference for general
distribution restricted Boltzmann machines, we are able to integrate these
inferred signal models into approximate message passing for compressed sensing
reconstruction. Finally, we show for the MNIST dataset that this approach can
be very effective, even for M < K.
Comment: IEEE Information Theory Workshop, 2016
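The message-passing derivation is not reproduced here. For orientation, the sketch below shows generic approximate message passing (AMP) with a separable soft-threshold denoiser; the paper's step of replacing such a denoiser with marginals inferred from a trained generalized RBM is not implemented in this sketch, and all names are illustrative.

```python
import numpy as np

def soft_threshold(r, lam):
    return np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)

def amp(A, y, lam=0.1, n_iter=30):
    """Generic AMP for y = A x + noise with a separable denoiser.
    An RBM-informed prior would replace soft_threshold below."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        r = x + A.T @ z                              # pseudo-data estimate
        x_new = soft_threshold(r, lam)               # separable denoiser
        onsager = z * (np.count_nonzero(x_new) / m)  # Onsager correction
        z = y - A @ x_new + onsager                  # corrected residual
        x = x_new
    return x
```

The Onsager correction term is what distinguishes AMP from plain iterative thresholding: it decouples the effective noise across iterations, which is also what makes plugging in a structured, non-separable denoiser tractable.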
Monotone deep Boltzmann machines
Deep Boltzmann machines (DBMs), one of the first "deep" learning methods
ever studied, are multi-layered probabilistic models governed by a pairwise
energy function that describes the likelihood of all variables/nodes in the
network. In practice, DBMs are often constrained, i.e., via the
restricted Boltzmann machine (RBM) architecture (which does not permit
intra-layer connections), in order to allow for more efficient inference. In
this work, we revisit the generic DBM approach, and ask the question: are there
other possible restrictions to their design that would enable efficient
(approximate) inference? In particular, we develop a new class of restricted
model, the monotone DBM, which allows for arbitrary self-connection in each
layer, but restricts the weights in a manner that guarantees the
existence and global uniqueness of a mean-field fixed point. To do this, we
leverage tools from the recently-proposed monotone Deep Equilibrium model and
show that a particular choice of activation results in a fixed-point iteration
that gives a variational mean-field solution. While this approach is still
largely conceptual, it is the first architecture that allows for efficient
approximate inference in fully-general weight structures for DBMs. We apply
this approach to simple deep convolutional Boltzmann architectures and
demonstrate that it allows for tasks such as the joint completion and
classification of images, within a single deep probabilistic setting, while
avoiding the pitfalls of mean-field inference in traditional RBMs.
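To make the fixed-point idea concrete, here is a hedged sketch of a damped mean-field iteration for a generic binary pairwise model. The monotone-DBM contribution is a parameterization of the weights under which such an iteration provably has a unique global fixed point; that parameterization, built on the monotone Deep Equilibrium machinery, is not reproduced in this generic sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_field(W, b, n_iter=500, damping=0.5, tol=1e-8):
    """Damped mean-field iteration mu <- sigmoid(W @ mu + b) for a
    binary pairwise model (W symmetric, zero diagonal). Without a
    monotonicity constraint on W, convergence and uniqueness of the
    fixed point are not guaranteed."""
    mu = np.full(W.shape[0], 0.5)                    # uniform start
    for _ in range(n_iter):
        mu_new = (1 - damping) * mu + damping * sigmoid(W @ mu + b)
        if np.max(np.abs(mu_new - mu)) < tol:        # converged
            return mu_new
        mu = mu_new
    return mu
```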
Learning Generative Models with Visual Attention
Attention has long been proposed by psychologists as important for
effectively dealing with the enormous sensory stimulus available in the
neocortex. Inspired by the visual attention models in computational
neuroscience and the need for object-centric data in generative models, we
describe a generative learning framework that uses attentional mechanisms.
Attentional mechanisms can propagate signals from a region of interest in a scene
to an aligned canonical representation, where generative modeling takes place.
By ignoring background clutter, generative models can concentrate their
resources on the object of interest. Our model is a proper graphical model
where the 2D similarity transformation is a part of the top-down process. A
ConvNet is employed to provide good initializations during posterior inference
which is based on Hamiltonian Monte Carlo. Upon learning images of faces, our
model can robustly attend to face regions of novel test subjects. More
importantly, our model can learn generative models of new faces from a novel
dataset of large images where the face locations are not known.
Comment: In the proceedings of Neural Information Processing Systems, 2014
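The attention step maps a region of interest to an aligned canonical frame via a 2D similarity transform. Below is a minimal sketch of that resampling operation; the parameter names and the nearest-neighbor interpolation are assumptions made for brevity, not the paper's exact formulation.

```python
import numpy as np

def similarity_crop(image, center, scale, theta, out_hw=(64, 64)):
    """Resample a patch of `image` under a 2D similarity transform
    (translation `center`, isotropic `scale`, rotation `theta`) into
    a canonical window, so a generative model can operate on an
    object-centric view free of background clutter."""
    H, W = out_hw
    ys, xs = np.meshgrid(np.arange(H) - H / 2.0,
                         np.arange(W) - W / 2.0, indexing="ij")
    c, s = np.cos(theta), np.sin(theta)
    src_y = center[0] + scale * (c * ys - s * xs)    # rotate + scale + shift
    src_x = center[1] + scale * (s * ys + c * xs)
    src_y = np.clip(np.round(src_y).astype(int), 0, image.shape[0] - 1)
    src_x = np.clip(np.round(src_x).astype(int), 0, image.shape[1] - 1)
    return image[src_y, src_x]                       # nearest-neighbor sample
```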
Modeling Documents with Deep Boltzmann Machines
We introduce a Deep Boltzmann Machine model suitable for modeling and
extracting latent semantic representations from a large unstructured collection
of documents. We overcome the apparent difficulty of training a DBM with
judicious parameter tying. This parameter tying enables an efficient
pretraining algorithm and a state initialization scheme that aids inference.
The model can be trained just as efficiently as a standard Restricted Boltzmann
Machine. Our experiments show that the model assigns better log probability to
unseen data than the Replicated Softmax model. Features extracted from our
model outperform LDA, Replicated Softmax, and DocNADE models on document
retrieval and document classification tasks.
Comment: Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence (UAI 2013)
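For context on the Replicated Softmax baseline this model is compared against: its characteristic parameter tying shares one weight matrix across all word positions and scales the hidden bias by document length. The sketch below implements one block-Gibbs step of that baseline; it is not the paper's DBM, and all names are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def replicated_softmax_step(counts, W, b, c, rng):
    """One Gibbs step of a Replicated Softmax RBM. `counts` is a
    word-count vector over the vocabulary; scaling the hidden bias
    by document length lets documents of any length share W, b, c."""
    doc_len = counts.sum()
    p_h = sigmoid(counts @ W + doc_len * c)          # hidden given counts
    h = (rng.random(p_h.shape) < p_h).astype(float)
    logits = b + W @ h                               # softmax over words
    p_v = np.exp(logits - logits.max())
    p_v /= p_v.sum()
    new_counts = rng.multinomial(int(doc_len), p_v)  # resample the document
    return new_counts.astype(float), h
```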