Massive Styles Transfer with Limited Labeled Data
Language style transfer has attracted increasing attention in recent years. Recent research focuses on improving neural models that transfer from one style to another using labeled data. However, transferring across multiple styles is often useful in real-life applications. Previous work on language style transfer has two main deficiencies: dependence on massive labeled data and neglect of the mutual influence among different style transfer tasks. In this paper, we propose a multi-agent style transfer system (MAST) that addresses multiple style transfer tasks with limited labeled data by leveraging abundant unlabeled data and the mutual benefit among the multiple styles. A style transfer agent in our system not only learns from unlabeled data, using techniques such as the denoising auto-encoder and back-translation, but also learns to cooperate with other style transfer agents in a self-organizing manner. We conduct our experiments by simulating a set of real-world style transfer tasks with multiple versions of the Bible. Our model significantly outperforms other competitive methods, and extensive results and analysis further verify the efficacy of the proposed system.
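
The abstract does not include code, so the following is only a minimal Python sketch of the two unsupervised objectives it names. The specific noise model (token dropping plus local shuffling), the `reverse_agent` callable, and the toy identity stand-in are illustrative assumptions, not the paper's implementation.

```python
import random

random.seed(0)

def add_noise(tokens, drop_prob=0.1, shuffle_window=3):
    # DAE corruption: drop tokens at random, then let each survivor
    # drift at most `shuffle_window` positions; the agent is trained
    # to reconstruct the clean sequence from this corrupted input.
    kept = [t for t in tokens if random.random() > drop_prob] or tokens[:1]
    keys = [i + random.uniform(0, shuffle_window) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept), key=lambda p: p[0])]

def back_translation_pairs(unlabeled_tgt, reverse_agent):
    # Back-translation: a reverse-direction agent maps unlabeled
    # target-style text back to the source style, yielding pseudo-
    # parallel (source, target) pairs to train the forward agent on.
    return [(reverse_agent(sent), sent) for sent in unlabeled_tgt]

# Toy stand-in so the sketch runs end to end; a real reverse agent
# would be a trained seq2seq model for the opposite transfer direction.
identity_agent = lambda toks: list(toks)

corpus = [["in", "the", "beginning", "was", "the", "word"]]
print(add_noise(corpus[0]))
print(back_translation_pairs(corpus, identity_agent))
```

In training, each agent would presumably alternate DAE steps on its own unlabeled corpus with supervised steps on such pseudo-parallel pairs; how MAST schedules cooperation among agents is not captured by this sketch.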
A Semi-Supervised Approach for Low-Resourced Text Generation
Recently, encoder-decoder neural models have achieved great success on text generation tasks. However, one problem with such models is that their performance is usually limited by the scale of well-labeled data, which is very expensive to obtain. This low-resource (labeled data) problem is quite common across text generation tasks, but unlabeled data are usually abundant. In this paper, we propose a method that uses unlabeled data to improve the performance of such models in low-resource circumstances. We use a denoising auto-encoder (DAE) and language-model (LM) based reinforcement learning (RL) to enhance the training of the encoder and decoder with unlabeled data. Our method adapts to different text generation tasks and makes significant improvements over basic text generation models.

Comment: Finished in 2017; a foundational work for "Massive Styles Transfer with Limited Labeled Data".
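
As with the first abstract, here is a minimal sketch of the LM-based RL idea described above: a language model trained on unlabeled text scores sampled decoder outputs, and that score, minus a baseline, weights the sequence's log-likelihood gradient, REINFORCE-style. The count-based bigram LM and the greedy-decode (self-critical) baseline are stand-in assumptions; the paper's actual LM and baseline choice may differ.

```python
import math
from collections import Counter

def make_lm_reward(corpus):
    # Tiny add-one-smoothed bigram LM over unlabeled text; its
    # length-normalised log-probability serves as a fluency reward
    # for sampled decoder outputs. (Stand-in: the paper's LM is
    # presumably neural, trained on the same unlabeled data.)
    uni, bi, vocab = Counter(), Counter(), {"<s>", "</s>"}
    for sent in corpus:
        toks = ["<s>"] + sent + ["</s>"]
        vocab.update(sent)
        uni.update(toks[:-1])
        bi.update(zip(toks[:-1], toks[1:]))
    V = len(vocab)
    def reward(sent):
        toks = ["<s>"] + sent + ["</s>"]
        lp = sum(math.log((bi[(a, b)] + 1) / (uni[a] + V))
                 for a, b in zip(toks[:-1], toks[1:]))
        return lp / (len(toks) - 1)
    return reward

def reinforce_weight(sample_reward, baseline_reward):
    # REINFORCE: the log-likelihood loss of a *sampled* output is
    # scaled by (reward - baseline); using the greedy decode's reward
    # as the baseline is the self-critical variant (an assumption).
    return sample_reward - baseline_reward

unlabeled = [["the", "cat", "sat"], ["the", "dog", "sat"]]
reward = make_lm_reward(unlabeled)
sampled = ["the", "cat", "sat"]   # hypothetical decoder sample
greedy = ["cat", "the", "sat"]    # hypothetical greedy decode (baseline)
print(reinforce_weight(reward(sampled), reward(greedy)))  # positive: reinforce the sample
```

The DAE side of the method would follow the same reconstruct-from-noise objective sketched after the first abstract, applied here to pretrain or regularize the encoder-decoder with unlabeled data.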