Language as a Latent Variable: Discrete Generative Models for Sentence Compression
In this work we explore deep generative models of text in which the latent
representation of a document is itself drawn from a discrete language model
distribution. We formulate a variational auto-encoder for inference in this
model and apply it to the task of compressing sentences. In this application
the generative model first draws a latent summary sentence from a background
language model, and then draws the observed sentence conditioned
on this latent summary. In our empirical evaluation we show that generative
formulations of both abstractive and extractive compression yield
state-of-the-art results when trained on a large amount of supervised data.
Further, we explore semi-supervised compression scenarios where we show that it
is possible to achieve performance competitive with previously proposed
supervised models while training on a fraction of the supervised data. Comment: EMNLP 2016
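The generative story above admits a compact sketch (our notation, consistent with the abstract rather than the paper's exact objective): with a latent summary sentence s drawn from a background language model p_LM and an observed sentence x,

    p_\theta(x) = \sum_{s} p_{\mathrm{LM}}(s)\, p_\theta(x \mid s),

and the variational auto-encoder trains an inference network q_\phi(s \mid x) by maximizing the evidence lower bound

    \log p_\theta(x) \ge \mathbb{E}_{q_\phi(s \mid x)}\big[\log p_\theta(x \mid s)\big] - \mathrm{KL}\big(q_\phi(s \mid x) \,\|\, p_{\mathrm{LM}}(s)\big).

Because s is a discrete sequence, the expectation is not reparameterizable and is typically estimated with score-function (REINFORCE-style) gradients; the paper's exact estimator may differ.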
Multi-space Variational Encoder-Decoders for Semi-supervised Labeled Sequence Transduction
Labeled sequence transduction is the task of transforming one sequence into
another that satisfies desiderata specified by a set of labels. In this paper
we propose multi-space variational encoder-decoders, a new model for labeled
sequence transduction with semi-supervised learning. The generative model can
use neural networks to handle both discrete and continuous latent variables,
exploiting various features of the data. Experiments show that our model not
only provides a powerful supervised framework but also effectively takes
advantage of unlabeled data. On the SIGMORPHON morphological inflection
benchmark, our model outperforms single-model state-of-the-art results by a
large margin for the majority of languages. Comment: Accepted by ACL 2017
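To make the mix of discrete and continuous latent variables concrete, here is a standard semi-supervised VAE objective of the kind the abstract describes (our notation; the paper's exact factorization over source sequences, target sequences, and labels differs in detail). With discrete labels y, a continuous latent z, and an observed sequence x, labeled examples use

    \log p_\theta(x, y) \ge \mathbb{E}_{q_\phi(z \mid x, y)}\big[\log p_\theta(x \mid y, z)\big] + \log p(y) - \mathrm{KL}\big(q_\phi(z \mid x, y) \,\|\, p(z)\big),

while unlabeled examples marginalize over the labels:

    \log p_\theta(x) \ge \mathbb{E}_{q_\phi(y, z \mid x)}\big[\log p_\theta(x \mid y, z)\big] - \mathrm{KL}\big(q_\phi(y, z \mid x) \,\|\, p(y)\, p(z)\big).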
Hierarchical Quantized Representations for Script Generation
Scripts define knowledge about how everyday scenarios (such as going to a
restaurant) are expected to unfold. One of the challenges to learning scripts
is the hierarchical nature of the knowledge. For example, an arrested suspect
might plead innocent or guilty, and a very different sequence of events is then
expected to follow. To capture this type of information, we propose an
autoencoder model with a latent space defined by a hierarchy of categorical
variables. We utilize a recently proposed vector-quantization-based approach,
which allows continuous embeddings to be associated with each latent variable
value. This permits the decoder to softly decide what portions of the latent
hierarchy to condition on by attending over the value embeddings for a given
setting. Our model effectively encodes and generates scripts, outperforming a
recent language modeling-based method on several standard tasks and achieving
substantially lower perplexity than that method. Comment: EMNLP 2018
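To illustrate the vector-quantization step the abstract refers to, here is a minimal NumPy sketch (all names and shapes are hypothetical, not the paper's code): each continuous encoder output is snapped to its nearest codebook entry, yielding a discrete latent value together with a continuous embedding the decoder can attend over.

    import numpy as np

    def quantize(encoder_out, codebook):
        # encoder_out: (n, d) continuous vectors; codebook: (K, d) value embeddings.
        # Squared Euclidean distance from every vector to every codebook entry.
        dists = ((encoder_out[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = dists.argmin(axis=1)      # discrete latent assignment per vector
        return idx, codebook[idx]       # index plus its continuous embedding

    rng = np.random.default_rng(0)
    codebook = rng.normal(size=(8, 4))  # hypothetical: 8 categories, dimension 4
    enc = rng.normal(size=(3, 4))       # 3 encoder outputs
    idx, emb = quantize(enc, codebook)
    print(idx.shape, emb.shape)         # (3,) (3, 4)

In training, gradients are usually passed through the non-differentiable argmin with a straight-through estimator, as in the vector-quantization approach the abstract cites.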
SALSA-TEXT: Self-Attentive Latent Space Based Adversarial Text Generation
Inspired by the success of the self-attention mechanism and the Transformer
architecture in sequence transduction and image generation applications, we
propose novel self-attention-based architectures to improve the performance of
adversarial latent code-based schemes in text generation. Adversarial latent
code-based text generation has recently gained attention due to its promising
results. In this paper, we take a step toward fortifying the architectures used
in these setups, benchmarking against two latent code-based methods built on
adversarial setups: the adversarial autoencoder (AAE) and the adversarially
regularized autoencoder (ARAE). In our experiments, the Google sentence
compression dataset is used to compare our method with these baselines using
various objective and subjective measures. The experiments demonstrate that the
proposed (self-)attention-based models outperform the state of the art in
adversarial latent code-based text generation. Comment: 10 pages, 3 figures, under review at ICLR 2019
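For readers unfamiliar with the self-attention building block these architectures rely on, a minimal single-head NumPy sketch follows (our illustration, not the paper's code; the models described use multi-head attention inside the autoencoder and adversarial components).

    import numpy as np

    def self_attention(x, wq, wk, wv):
        # x: (n, d) sequence of n token vectors; wq/wk/wv: (d, d_k) projections.
        q, k, v = x @ wq, x @ wk, x @ wv
        scores = q @ k.T / np.sqrt(k.shape[-1])            # (n, n) similarities
        weights = np.exp(scores - scores.max(-1, keepdims=True))
        weights /= weights.sum(-1, keepdims=True)          # row-wise softmax
        return weights @ v                                 # attention-weighted values

    rng = np.random.default_rng(0)
    x = rng.normal(size=(5, 16))                           # toy sequence: 5 tokens, dim 16
    wq, wk, wv = (rng.normal(size=(16, 16)) for _ in range(3))
    print(self_attention(x, wq, wk, wv).shape)             # (5, 16)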