Revision in Continuous Space: Unsupervised Text Style Transfer without Adversarial Learning
Typical methods for unsupervised text style transfer rely on two key
ingredients: 1) explicit disentanglement of content and attributes, and 2)
troublesome adversarial learning. In this paper, we show
that neither of these components is indispensable. We propose a new framework
that utilizes the gradients to revise the sentence in a continuous space during
inference to achieve text style transfer. Our method consists of three key
components: a variational auto-encoder (VAE), some attribute predictors (one
for each attribute), and a content predictor. The VAE and the two types of
predictors enable us to perform gradient-based optimization in the continuous
space, which is mapped from sentences in a discrete space, to find the
representation of a target sentence with the desired attributes and preserved
content. Moreover, the proposed method naturally has the ability to
simultaneously manipulate multiple fine-grained attributes, such as sentence
length and the presence of specific words, when performing text style transfer
tasks. Compared with previous adversarial-learning-based methods, the proposed
method is more interpretable, more controllable, and easier to train. Extensive
experimental studies on three popular text style transfer tasks show that the
proposed method significantly outperforms five state-of-the-art methods.
Comment: Association for the Advancement of Artificial Intelligence, AAAI 202
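The core mechanism in this abstract is inference-time gradient optimization in the VAE latent space. The sketch below is one illustrative reading of that idea, assuming PyTorch; the names (attr_predictors, content_predictor, z_src) are hypothetical stand-ins, not the authors' released implementation.

```python
import torch

def revise_latent(z_init, z_src, attr_predictors, targets, content_predictor,
                  steps=30, lr=0.1, content_weight=1.0):
    """Gradient-based revision: push a latent code toward the desired
    attribute values while keeping its content prediction close to the
    source sentence's."""
    z = z_init.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([z], lr=lr)
    src_content = content_predictor(z_src).detach()  # fixed content target
    for _ in range(steps):
        optimizer.zero_grad()
        # One predictor per attribute scores the current code; push each
        # score toward its desired value (e.g. sentiment = positive).
        attr_loss = sum(
            torch.nn.functional.binary_cross_entropy_with_logits(p(z), t)
            for p, t in zip(attr_predictors, targets))
        # Penalize drifting away from the source content representation.
        content_loss = torch.nn.functional.mse_loss(content_predictor(z), src_content)
        loss = attr_loss + content_weight * content_loss
        loss.backward()
        optimizer.step()
    return z.detach()  # decode with the VAE decoder to obtain the transferred sentence
```

Because each attribute has its own predictor and loss term, manipulating several fine-grained attributes at once amounts to adding more terms to attr_loss, which is consistent with the multi-attribute claim in the abstract.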
SALSA-TEXT: self attentive latent space based adversarial text generation
Inspired by the success of the self-attention mechanism and the Transformer
architecture in sequence transduction and image generation, we propose novel
self-attention-based architectures to improve the performance of adversarial
latent code-based schemes in text generation, which have recently gained
attention due to their promising results. In this paper, we take a step toward
fortifying the architectures used in these setups, specifically AAE and ARAE,
and benchmark these two adversarial latent code-based methods. In our
experiments, the Google sentence compression dataset is used to compare our
method with these baselines using both objective and subjective measures. The
experiments demonstrate that the proposed (self-)attention-based models
outperform the state of the art in adversarial latent code-based text
generation.
Comment: 10 pages, 3 figures, under review at ICLR 201
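The architectural change described here is to replace the recurrent encoder of a latent code-based setup (AAE/ARAE) with a self-attentive, Transformer-style encoder that produces the latent code. Below is a minimal sketch of such an encoder, assuming PyTorch; the class name, dimensions, and mean-pooling are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SelfAttentiveEncoder(nn.Module):
    """Transformer-style encoder mapping a token sequence to a latent code,
    intended as a drop-in replacement for the recurrent encoder in an
    AAE/ARAE-style setup (hyperparameters are illustrative)."""
    def __init__(self, vocab_size, d_model=256, nhead=4, num_layers=2, code_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.to_code = nn.Linear(d_model, code_dim)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer ids
        h = self.encoder(self.embed(tokens))   # (batch, seq_len, d_model)
        pooled = h.mean(dim=1)                 # mean-pool over time steps
        return self.to_code(pooled)            # latent code for the critic/decoder
```

The resulting code would then be trained with the usual AAE/ARAE adversarial objectives; only the encoder (and, symmetrically, the decoder) is swapped for a self-attentive module.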