Alternating Back-Propagation for Generator Network
This paper proposes an alternating back-propagation algorithm for learning
the generator network model. The model is a non-linear generalization of factor
analysis. In this model, the mapping from the continuous latent factors to the
observed signal is parametrized by a convolutional neural network. The
alternating back-propagation algorithm iterates the following two steps: (1)
Inferential back-propagation, which infers the latent factors by Langevin
dynamics or gradient descent. (2) Learning back-propagation, which updates the
parameters given the inferred latent factors by gradient descent. The gradient
computations in both steps are powered by back-propagation, and they share most
of their code. We show that the alternating back-propagation algorithm can
learn realistic generator models of natural images, video sequences, and
sounds. Moreover, it can be used to learn from incomplete or indirect training
data.
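A minimal sketch of the two alternating steps, assuming a PyTorch generator module G that maps latent factors z to a signal x; the function names, noise level sigma, and Langevin step size delta are illustrative assumptions, not settings from the paper:

import torch

def langevin_infer_z(G, x, z, sigma=0.3, delta=0.1, steps=20):
    # Inferential back-propagation: infer the latent factors z given x
    # by Langevin dynamics on the log posterior (gradient descent plus noise).
    for _ in range(steps):
        z = z.detach().requires_grad_(True)
        # log joint, up to an additive constant: Gaussian reconstruction
        # term plus the standard-normal prior on z
        log_p = -((x - G(z)) ** 2).sum() / (2 * sigma ** 2) - (z ** 2).sum() / 2
        grad, = torch.autograd.grad(log_p, z)
        # Langevin update: gradient step plus injected Gaussian noise
        z = z + (delta ** 2 / 2) * grad + delta * torch.randn_like(z)
    return z.detach()

def learn_step(G, opt, x, z, sigma=0.3):
    # Learning back-propagation: update the generator parameters by
    # gradient descent on the reconstruction error, with z held fixed.
    opt.zero_grad()
    loss = ((x - G(z)) ** 2).sum() / (2 * sigma ** 2)
    loss.backward()
    opt.step()
    return loss.item()

Both steps differentiate the same reconstruction term through G by back-propagation, once with respect to z and once with respect to the parameters, which is why the two steps can share most of their code.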
Learning Generative ConvNets via Multi-grid Modeling and Sampling
This paper proposes a multi-grid method for learning energy-based generative
ConvNet models of images. For each grid, we learn an energy-based probabilistic
model where the energy function is defined by a bottom-up convolutional neural
network (ConvNet or CNN). Learning such a model requires generating synthesized
examples from the model. Within each iteration of our learning algorithm, for
each observed training image, we generate synthesized images at multiple grids
by initializing the finite-step MCMC sampling from a minimal 1 x 1 version of
the training image. The synthesized image at each subsequent grid is obtained
by a finite-step MCMC initialized from the synthesized image generated at the
previous coarser grid. After obtaining the synthesized examples, the parameters
of the models at multiple grids are updated separately and simultaneously based
on the differences between synthesized and observed examples. We show that this
multi-grid method can learn realistic energy-based generative ConvNet models,
and it outperforms the original contrastive divergence (CD) and persistent CD.Comment: CVPR 201