Alternating Back-Propagation for Generator Network
This paper proposes an alternating back-propagation algorithm for learning
the generator network model. The model is a non-linear generalization of factor
analysis. In this model, the mapping from the continuous latent factors to the
observed signal is parametrized by a convolutional neural network. The
alternating back-propagation algorithm iterates the following two steps: (1)
Inferential back-propagation, which infers the latent factors by Langevin
dynamics or gradient descent. (2) Learning back-propagation, which updates the
parameters given the inferred latent factors by gradient descent. The gradient
computations in both steps are powered by back-propagation, and they share most
of their code in common. We show that the alternating back-propagation
algorithm can learn realistic generator models of natural images, video
sequences, and sounds. Moreover, it can also be used to learn from incomplete
or indirect training data.
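The two alternating steps can be sketched on the linear special case the abstract mentions: when the generator is a linear map, the model reduces to factor analysis, and the same alternation applies. This is a minimal illustrative sketch, not the paper's ConvNet implementation; it uses plain gradient descent for inference (a Langevin version would add Gaussian noise to each inference step), and all sizes and step sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data from a linear generator x = W z + noise. The linear map is the
# classical factor-analysis special case of the generator model; the paper
# replaces it with a ConvNet, but the alternating scheme is the same.
d, k, n = 20, 3, 200
W_true = rng.normal(size=(d, k))
X = W_true @ rng.normal(size=(k, n)) + 0.1 * rng.normal(size=(d, n))

W = 0.1 * rng.normal(size=(d, k))   # generator parameters
Z = np.zeros((k, n))                # latent factors, warm-started across iterations

for it in range(200):
    # (1) Inferential back-propagation: gradient ascent on the posterior
    #     log-density of z (unit noise variance, standard normal prior).
    for _ in range(10):
        Z += 0.02 * (W.T @ (X - W @ Z) - Z)
    # (2) Learning back-propagation: gradient step on W given the inferred Z.
    W += 0.002 * (X - W @ Z) @ Z.T

recon_err = np.mean((X - W @ Z) ** 2)
```

Both updates are gradients of the same complete-data log-likelihood, which is why the two steps can share most of their back-propagation code.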
Learning Generative ConvNets via Multi-grid Modeling and Sampling
This paper proposes a multi-grid method for learning energy-based generative
ConvNet models of images. For each grid, we learn an energy-based probabilistic
model where the energy function is defined by a bottom-up convolutional neural
network (ConvNet or CNN). Learning such a model requires generating synthesized
examples from the model. Within each iteration of our learning algorithm, for
each observed training image, we generate synthesized images at multiple grids
by initializing the finite-step MCMC sampling from a minimal 1 x 1 version of
the training image. The synthesized image at each subsequent grid is obtained
by a finite-step MCMC initialized from the synthesized image generated at the
previous coarser grid. After obtaining the synthesized examples, the parameters
of the models at multiple grids are updated separately and simultaneously based
on the differences between synthesized and observed examples. We show that this
multi-grid method can learn realistic energy-based generative ConvNet models,
and it outperforms the original contrastive divergence (CD) and persistent CD.Comment: CVPR 201
Estimating a Treatment Effect with Repeated Measurements Accounting for Varying Effectiveness Duration
To assess treatment efficacy in clinical trials, certain clinical outcomes are repeatedly measured for the same subject over time and can be regarded as functions of time. The difference in their mean functions between the treatment arms usually characterises the treatment effect. Because of potentially subject-specific treatment effectiveness lag and saturation times, this difference may erode over the observation period. Instead of using ad hoc parametric or purely nonparametric time-varying coefficients in statistical modeling, we first propose to model the treatment effectiveness durations, which are the varying time intervals between the lag and saturation times. Mean response models are then used to incorporate these treatment effectiveness durations. Our methodology is demonstrated by simulations and by an application to the dataset of a landmark HIV/AIDS clinical trial of short-course nevirapine against mother-to-child HIV vertical transmission during labour and delivery.
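One concrete way to read "treatment effectiveness duration": the effect is zero before a subject's lag time, builds over the interval between lag and saturation, and is constant afterwards. The sketch below assumes a linear ramp over that interval purely for illustration; the ramp shape, function names, and numbers are our assumptions, not the paper's model.

```python
import numpy as np

def treatment_effect(t, lag, sat, full_effect):
    """Effect at time t: zero before the lag time, a linear ramp over the
    effectiveness duration [lag, sat], and the full effect after saturation."""
    ramp = np.clip((np.asarray(t, dtype=float) - lag) / (sat - lag), 0.0, 1.0)
    return full_effect * ramp

# The arms' mean functions then differ by this time-varying effect:
t = np.linspace(0.0, 10.0, 101)
mu_control = 5.0 + 0.1 * t                 # hypothetical control-arm mean
mu_treated = mu_control - treatment_effect(t, lag=2.0, sat=6.0, full_effect=1.5)
```

Under this reading, estimating the lag and saturation times amounts to estimating where the difference in mean functions starts to appear and where it stops growing.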