Adversarial Multi-task Learning for Text Classification
Neural network models have shown promise for multi-task learning, which
focuses on learning shared layers that extract common, task-invariant
features. However, in most existing approaches, the extracted shared features
are prone to contamination by task-specific features or by noise introduced
by other tasks. In this paper, we propose an adversarial multi-task learning
framework that prevents the shared and private latent feature spaces from
interfering with each other. We conduct extensive experiments on 16 different
text classification tasks, which demonstrate the benefits of our approach.
Moreover, we show that the shared knowledge learned by our proposed model can
be regarded as off-the-shelf knowledge and easily
transferred to new tasks. The datasets for all 16 tasks are publicly
available at \url{http://nlp.fudan.edu.cn/data/}

Comment: Accepted by ACL2017
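As a rough illustration of the shared-private idea this abstract describes,
here is a minimal PyTorch sketch. It uses a gradient-reversal layer as a
common stand-in for an adversarial objective; the class names, layer sizes,
and the choice of LSTM encoders are illustrative assumptions, not the
authors' released code.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity on the forward pass; flips the gradient on the backward pass,
    # so the shared encoder learns to fool the task discriminator.
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

class SharedPrivateModel(nn.Module):
    def __init__(self, vocab_size, emb_dim, hid_dim, num_tasks, num_classes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # One shared encoder for all tasks, plus one private encoder per task.
        self.shared = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.private = nn.ModuleList(
            [nn.LSTM(emb_dim, hid_dim, batch_first=True) for _ in range(num_tasks)])
        # Each task classifier sees shared and private features concatenated.
        self.classifiers = nn.ModuleList(
            [nn.Linear(2 * hid_dim, num_classes) for _ in range(num_tasks)])
        # The discriminator tries to guess the task from the shared features.
        self.discriminator = nn.Linear(hid_dim, num_tasks)

    def forward(self, tokens, task_id):
        x = self.embed(tokens)
        _, (h_s, _) = self.shared(x)
        _, (h_p, _) = self.private[task_id](x)
        h_s, h_p = h_s[-1], h_p[-1]
        task_logits = self.classifiers[task_id](torch.cat([h_s, h_p], dim=-1))
        adv_logits = self.discriminator(GradReverse.apply(h_s))
        return task_logits, adv_logits

Training would minimize the per-task cross-entropy on task_logits plus a
weighted cross-entropy on adv_logits against the true task id; because the
gradient is reversed, task-identifying information is pushed out of the
shared space and into the private encoders, which is the interference the
abstract says the framework prevents.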
Incorporating Discriminator in Sentence Generation: a Gibbs Sampling Method
Generating plausible and fluent sentences with desired properties has long
been a challenge. Most recent works use recurrent neural networks (RNNs) and
their variants to predict the following words given the previous sequence and
a target label. In this paper, we propose a novel framework that generates
constrained sentences via Gibbs sampling. The candidate sentences are revised
and updated iteratively, with sampled new words replacing old ones. Our
experiments show the effectiveness of the proposed method in generating
plausible and diverse sentences.

Comment: published in The Thirty-Second AAAI Conference on Artificial
Intelligence (AAAI-18), 2018
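A toy sketch of the iterative word-replacement loop this abstract describes,
in plain Python: each position is revisited in turn and its word is resampled
in proportion to a score for the whole revised sentence. The generic score()
function, the vocabulary, and the sweep schedule are all illustrative
assumptions; in the paper's setting the score would come from a language
model combined with a discriminator for the desired property.

import math
import random

def gibbs_revise(sentence, vocab, score, num_sweeps=10, temperature=1.0):
    # Visit each position in turn; resample the word there conditioned on
    # the rest of the sentence, weighting candidates by exp(score / temp).
    words = list(sentence)
    for _ in range(num_sweeps):
        for pos in range(len(words)):
            candidates = [words[:pos] + [w] + words[pos + 1:] for w in vocab]
            weights = [math.exp(score(c) / temperature) for c in candidates]
            words[pos] = random.choices(vocab, weights=weights, k=1)[0]
    return words

# Purely illustrative usage with a stand-in scorer that rewards one word:
toy_score = lambda ws: ws.count("great") - ws.count("bad")
print(gibbs_revise(["this", "movie", "is", "bad"],
                   vocab=["this", "movie", "is", "bad", "great"],
                   score=toy_score))

This loop only illustrates the revise-and-update dynamics; it does not
reproduce the paper's proposal distribution or discriminator.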
