Revision in Continuous Space: Unsupervised Text Style Transfer without Adversarial Learning
Typical methods for unsupervised text style transfer rely on two key
ingredients: 1) explicit disentanglement of content and attributes, and
2) adversarial learning, which is often difficult to train. In this paper, we show
that neither of these components is indispensable. We propose a new framework
that utilizes the gradients to revise the sentence in a continuous space during
inference to achieve text style transfer. Our method consists of three key
components: a variational auto-encoder (VAE), some attribute predictors (one
for each attribute), and a content predictor. The VAE and the two types of
predictors enable us to perform gradient-based optimization in the continuous
space, which is mapped from sentences in a discrete space, to find the
representation of a target sentence with the desired attributes and preserved
content. Moreover, the proposed method naturally has the ability to
simultaneously manipulate multiple fine-grained attributes, such as sentence
length and the presence of specific words, when performing text style transfer
tasks. Compared with previous adversarial learning based methods, the proposed
method is more interpretable, controllable and easier to train. Extensive
experimental studies on three popular text style transfer tasks show that the
proposed method significantly outperforms five state-of-the-art methods.
Comment: Association for the Advancement of Artificial Intelligence. AAAI 202
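The core inference-time mechanism the abstract describes, taking gradient steps on a latent code to satisfy an attribute predictor while staying close to the original representation, can be sketched as follows. This is a toy illustration, not the paper's models: the attribute predictor is a stand-in logistic classifier with random weights, and the content predictor is approximated by a simple quadratic penalty keeping the code near its starting point.

```python
import numpy as np

# Toy sketch of gradient-based revision in a continuous latent space.
# The predictor weights, latent dimension, and step size are all
# illustrative assumptions, not the paper's trained components.

rng = np.random.default_rng(0)
dim = 16

# Stand-in for a trained attribute predictor:
# p(attr = 1 | z) = sigmoid(w @ z + b) on the latent code z.
w = rng.normal(size=dim)
b = 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def revise(z, target=1.0, steps=200, lr=0.1, content_weight=0.1):
    """Gradient descent on z: push the predicted attribute toward
    `target` while a quadratic penalty keeps z near the original
    code (a crude stand-in for the content predictor)."""
    z0 = z.copy()
    for _ in range(steps):
        p = sigmoid(w @ z + b)
        # Gradient of cross-entropy toward `target`, plus the
        # gradient of the content-preservation penalty.
        grad = (p - target) * w + content_weight * (z - z0)
        z = z - lr * grad
    return z

z = rng.normal(size=dim)      # latent code of the source sentence
z_new = revise(z, target=1.0)
print(sigmoid(w @ z + b), "->", sigmoid(w @ z_new + b))
```

In the paper this optimization happens in the VAE's latent space, so the revised code can be decoded back into a sentence; the sketch only shows the revision step itself.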
Adversarial Variational Embedding for Robust Semi-supervised Learning
Semi-supervised learning leverages unlabelled data when labelled data is
difficult or expensive to acquire. Deep generative models
(e.g., the Variational Autoencoder (VAE)) and semi-supervised Generative
Adversarial Networks (GANs) have recently shown promising performance in
semi-supervised classification, owing to their strong discriminative
representation ability. However, the latent code learned by a traditional VAE
is not exclusive (repeatable) for a given input sample, which limits its
classification
performance. In particular, the learned latent representation depends on a
non-exclusive component which is stochastically sampled from the prior
distribution. Moreover, semi-supervised GAN models generate data from a
pre-defined distribution (e.g., Gaussian noise) that is independent of the
input data distribution, which can hinder convergence and makes the
distribution of the generated data difficult to control. To address these
issues, we propose a novel Adversarial Variational Embedding (AVAE) framework
for robust and effective semi-supervised learning, leveraging both the
strength of GANs as high-quality generative models and of VAEs as posterior
distribution learners. The proposed approach first produces an exclusive
latent code via a model we call VAE++, which in turn provides a meaningful
prior distribution for the generator of the GAN. The proposed approach is evaluated
over four different real-world applications and we show that our method
outperforms the state-of-the-art models, which confirms that the combination of
VAE++ and GAN can provide significant improvements in semi-supervised
classification.
Comment: 9 pages, Accepted by Research Track in KDD 201
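The "non-exclusive latent code" problem the abstract points to can be illustrated with a few lines of numpy. The encoder below is a stand-in (random linear maps), not the paper's VAE++; it only shows why a reparameterized sample is not repeatable for the same input, while a deterministic embedding (such as the posterior mean) is.

```python
import numpy as np

# Toy illustration of non-exclusive vs. exclusive latent codes.
# The linear "encoder" is an assumption for demonstration only.

rng = np.random.default_rng(0)
W_mu = rng.normal(size=(8, 4))
W_logvar = rng.normal(size=(8, 4))

def encode(x):
    """Gaussian posterior parameters q(z|x) = N(mu, diag(exp(logvar)))."""
    return x @ W_mu.T, x @ W_logvar.T

def sample(mu, logvar, rng):
    """Reparameterized sample: non-exclusive, changes on every call."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

x = rng.normal(size=4)
mu, logvar = encode(x)

z1 = sample(mu, logvar, rng)
z2 = sample(mu, logvar, rng)
print(np.allclose(z1, z2))      # False: the sampled code varies per call

mu2, _ = encode(x)
print(np.allclose(mu, mu2))     # True: a deterministic embedding repeats
```

VAE++ addresses this with an adversarially learned embedding rather than the posterior mean, but the repeatability contrast above is the property at stake.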
VR-GNN: Variational Relation Vector Graph Neural Network for Modeling both Homophily and Heterophily
Graph Neural Networks (GNNs) have achieved remarkable success in diverse
real-world applications. Traditional GNNs are designed under the homophily
assumption, which leads to poor performance in heterophily scenarios. Current
solutions deal with heterophily mainly by mixing high-order neighbors or
passing signed messages. However, mixing high-order neighbors destroys the
original graph structure, and signed message passing relies on an inflexible
mechanism that is prone to unsatisfactory results. To overcome the
above problems, we propose a novel GNN model based on relation vector
translation named Variational Relation Vector Graph Neural Network (VR-GNN).
VR-GNN unifies relation generation and graph aggregation in an end-to-end
framework based on a Variational Auto-Encoder. The encoder utilizes graph
structure, node features, and labels to generate proper relation vectors. The decoder achieves
superior node representation by incorporating the relation translation into the
message-passing framework. VR-GNN can fully capture the homophily and
heterophily between nodes due to the great flexibility of relation translation
in modeling neighbor relationships. We conduct extensive experiments on eight
real-world datasets with different homophily-heterophily properties to verify
the effectiveness of our model. The experimental results show that VR-GNN gains
consistent and significant improvements against state-of-the-art GNN methods
under heterophily, and competitive performance under homophily.
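The relation-translation idea in the decoder can be sketched in a few lines: each edge carries a relation vector, and a neighbor's message is its feature translated by that vector before aggregation. The graph, features, and relation vectors below are toy assumptions; in VR-GNN the relation vectors are inferred by the variational encoder rather than fixed by hand.

```python
import numpy as np

# Toy sketch of translation-based message passing. A homophilous edge
# can carry a near-zero translation (message ~= neighbor feature),
# while a heterophilous edge can carry a translation that flips the
# neighbor's signal, which is where the flexibility comes from.

dim = 4
feats = {0: np.ones(dim), 1: -np.ones(dim), 2: np.zeros(dim)}
edges = [(1, 0), (2, 0)]                 # (source, target)

rel = {(1, 0): 2 * np.ones(dim),         # flips node 1's -1s toward +1
       (2, 0): np.zeros(dim)}            # identity translation

def aggregate(target):
    """Mean of translated neighbor messages h_j + r_{j->target}."""
    msgs = [feats[s] + rel[(s, t)] for (s, t) in edges if t == target]
    return np.mean(msgs, axis=0)

print(aggregate(0))   # mean of (-1+2) and (0+0) = 0.5 per coordinate
```

With a plain mean aggregator (no relation vectors), node 1's heterophilous features would pull node 0's representation toward -0.5; the learned translation lets the same mechanism handle both edge types.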