Semisupervised Autoencoder for Sentiment Analysis
In this paper, we investigate the use of autoencoders for modeling textual
data. Traditional autoencoders suffer from at least two limitations: poor
scalability with the high dimensionality of the vocabulary, and difficulty
handling task-irrelevant words. We address these problems by introducing
supervision via
the loss function of autoencoders. In particular, we first train a linear
classifier on the labeled data, then define a loss for the autoencoder with the
weights learned from the linear classifier. To reduce the bias brought by one
single classifier, we define a posterior probability distribution on the
weights of the classifier, and derive the marginalized loss of the autoencoder
with Laplace approximation. We show that our choice of loss function can be
rationalized from the perspective of Bregman Divergence, which justifies the
soundness of our model. We evaluate the effectiveness of our model on six
sentiment analysis datasets, and show that our model significantly outperforms
all the competing methods with respect to classification accuracy. We also show
that our model is able to take advantage of unlabeled data to achieve improved
performance. We further show that our model successfully learns highly
discriminative feature maps, which explains its superior performance.
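To make the mechanism concrete, here is a minimal sketch, not the authors' exact formulation: an autoencoder whose reconstruction loss is measured through the logits of a frozen, pre-trained linear classifier, so errors along task-relevant dimensions are penalized more heavily. The Laplace-approximated marginalization over classifier weights is omitted, and all dimensions and module names are illustrative assumptions.

import torch
import torch.nn as nn

vocab_size, hidden_dim = 10_000, 128  # illustrative sizes

# Step 1: a linear classifier assumed to be already trained on labeled data.
classifier = nn.Linear(vocab_size, 1)
for p in classifier.parameters():
    p.requires_grad_(False)  # freeze: it only shapes the loss geometry

# Step 2: a plain autoencoder over bag-of-words vectors.
encoder = nn.Sequential(nn.Linear(vocab_size, hidden_dim), nn.ReLU())
decoder = nn.Linear(hidden_dim, vocab_size)

def supervised_ae_loss(x):
    x_hat = decoder(encoder(x))
    # Measure reconstruction in the classifier's logit space in addition
    # to raw input space, so task-relevant words dominate the objective.
    logit_loss = (classifier(x_hat) - classifier(x)).pow(2).mean()
    recon_loss = (x_hat - x).pow(2).mean()
    return logit_loss + recon_loss

# Usage: the loss needs no labels, so it applies to unlabeled batches too.
x = torch.rand(32, vocab_size)
loss = supervised_ae_loss(x)
loss.backward()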
Matryoshka Diffusion Models
Diffusion models are the de facto approach for generating high-quality images
and videos, but learning high-dimensional models remains a formidable task due
to computational and optimization challenges. Existing methods often resort to
training cascaded models in pixel space or using a downsampled latent space of
a separately trained auto-encoder. In this paper, we introduce Matryoshka
Diffusion Models (MDM), an end-to-end framework for high-resolution image and
video synthesis. We propose a diffusion process that denoises inputs at
multiple resolutions jointly and uses a NestedUNet architecture where features
and parameters for small-scale inputs are nested within those of large scales.
In addition, MDM enables a progressive training schedule from lower to higher
resolutions, which leads to significant improvements in optimization for
high-resolution generation. We demonstrate the effectiveness of our approach on
various benchmarks, including class-conditioned image generation,
high-resolution text-to-image, and text-to-video applications. Remarkably, we
can train a single pixel-space model at resolutions of up to 1024x1024 pixels,
demonstrating strong zero-shot generalization using the CC12M dataset, which
contains only 12 million images.
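The core idea of denoising at multiple resolutions jointly can be sketched with a toy example. The following is an illustrative simplification, not the paper's NestedUNet: a tiny model denoises an image and its 2x downsample at the same diffusion step, and the per-resolution losses are summed. All shapes, the noise schedule, and the conv heads are assumptions made for the sketch; in MDM the small-scale parameters are nested inside the large-scale ones rather than kept separate.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMultiResDenoiser(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        # Separate heads per resolution, purely for illustration.
        self.head_hi = nn.Conv2d(channels, channels, 3, padding=1)
        self.head_lo = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, z_hi, z_lo):
        return self.head_hi(z_hi), self.head_lo(z_lo)

def joint_denoising_loss(model, x, t, alpha_bar):
    # Build the resolution pyramid, then corrupt both levels with the
    # same diffusion step t (alpha_bar[t] is the usual schedule term).
    x_lo = F.avg_pool2d(x, 2)
    a = alpha_bar[t].view(-1, 1, 1, 1)
    eps_hi, eps_lo = torch.randn_like(x), torch.randn_like(x_lo)
    z_hi = a.sqrt() * x + (1 - a).sqrt() * eps_hi
    z_lo = a.sqrt() * x_lo + (1 - a).sqrt() * eps_lo
    pred_hi, pred_lo = model(z_hi, z_lo)
    # x-prediction loss at every resolution, summed.
    return F.mse_loss(pred_hi, x) + F.mse_loss(pred_lo, x_lo)

# Usage on a random batch.
model = ToyMultiResDenoiser()
alpha_bar = torch.linspace(0.9999, 0.01, 1000)
x = torch.randn(8, 3, 64, 64)
t = torch.randint(0, 1000, (8,))
loss = joint_denoising_loss(model, x, t, alpha_bar)
loss.backward()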
BOOT: Data-free Distillation of Denoising Diffusion Models with Bootstrapping
Diffusion models have demonstrated excellent potential for generating diverse
images. However, their performance often suffers from slow generation due to
iterative denoising. Knowledge distillation has been recently proposed as a
remedy that can reduce the number of inference steps to one or a few without
significant quality degradation. However, existing distillation methods either
require significant amounts of offline computation for generating synthetic
training data from the teacher model or need to perform expensive online
learning with the help of real data. In this work, we present a novel technique
called BOOT that overcomes these limitations with an efficient data-free
distillation algorithm. The core idea is to learn a time-conditioned model that
predicts the output of a pre-trained diffusion model teacher given any time
step. Such a model can be efficiently trained based on bootstrapping from two
consecutive sampled steps. Furthermore, our method can be easily adapted to
large-scale text-to-image diffusion models, which are challenging for
conventional methods because their training sets are often large and
difficult to access. We demonstrate the effectiveness of our approach on
several benchmark datasets in the DDIM setting, achieving comparable generation
quality while being orders of magnitude faster than the diffusion teacher. The
text-to-image results show that the proposed approach is able to handle highly
complex distributions, shedding light on more efficient generative modeling.
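The bootstrapping objective can be sketched as follows, under heavily simplifying assumptions: teacher_step is a stand-in placeholder for one frozen DDIM update of the real teacher, the tiny MLP and all names are illustrative rather than the paper's architecture, and the student g(eps, t) maps a pure-noise sample directly to the teacher's trajectory at step t. The target for step t-1 is obtained by pushing the student's own prediction at step t through a single teacher update, so no real or synthetic training data is needed.

import torch
import torch.nn as nn

dim, n_steps = 64, 50  # illustrative sizes

class Student(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 256), nn.SiLU(),
                                 nn.Linear(256, dim))

    def forward(self, eps, t):
        # Time-conditioning by concatenating a normalized step index.
        t_feat = t.float().view(-1, 1) / n_steps
        return self.net(torch.cat([eps, t_feat], dim=-1))

def teacher_step(z, t):
    # Placeholder for one frozen DDIM update of a pre-trained teacher.
    with torch.no_grad():
        return z - 0.01 * z

def boot_loss(student, batch_size=32):
    eps = torch.randn(batch_size, dim)           # no real data needed
    t = torch.randint(1, n_steps, (batch_size,))
    # Bootstrap target: the student's own prediction at step t, pushed
    # through a single teacher update toward step t-1.
    with torch.no_grad():
        target = teacher_step(student(eps, t), t)
    pred = student(eps, t - 1)  # t-1 is one step closer to the data end
    return (pred - target).pow(2).mean()

# Usage: one training step of the data-free student.
student = Student()
loss = boot_loss(student)
loss.backward()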