Paraphrase Generation with Deep Reinforcement Learning
Automatic generation of paraphrases from a given sentence is an important yet
challenging task in natural language processing (NLP), and plays a key role in
a number of applications such as question answering, search, and dialogue. In
this paper, we present a deep reinforcement learning approach to paraphrase
generation. Specifically, we propose a new framework for the task, which
consists of a \textit{generator} and an \textit{evaluator}, both of which are
learned from data. The generator, built as a sequence-to-sequence learning
model, can produce paraphrases given a sentence. The evaluator, constructed as
a deep matching model, can judge whether two sentences are paraphrases of each
other. The generator is first trained by deep learning and then further
fine-tuned by reinforcement learning in which the reward is given by the
evaluator. For the learning of the evaluator, we propose two methods based on
supervised learning and inverse reinforcement learning respectively, depending
on the type of available training data. Empirical study shows that the learned
evaluator can guide the generator to produce more accurate paraphrases.
Experimental results demonstrate that the proposed models (the generators)
outperform the state-of-the-art methods in paraphrase generation in both
automatic evaluation and human evaluation.
Comment: EMNLP 201
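The generator–evaluator training described above can be pictured as a REINFORCE-style loop: the generator samples a candidate paraphrase, the evaluator scores it, and the score minus a baseline reweights the generator's distribution. The sketch below is a minimal toy illustration, not the paper's seq2seq model: the "generator" is just a softmax over three fixed candidate paraphrases, and the "evaluator" is a hypothetical reward table.

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def reinforce_step(logits, evaluator_reward, lr=0.5):
    """One REINFORCE update: sample a candidate from the generator's
    distribution, score it with the evaluator, and push the sampled
    candidate's logit in proportion to (reward - baseline)."""
    probs = softmax(logits)
    i = random.choices(range(len(logits)), weights=probs)[0]
    reward = evaluator_reward(i)
    # expected reward under the current policy serves as the baseline
    baseline = sum(p * evaluator_reward(j) for j, p in enumerate(probs))
    # policy gradient of log pi(i) w.r.t. logit_k: 1{k == i} - probs[k]
    grads = [((1.0 if k == i else 0.0) - probs[k]) * (reward - baseline)
             for k in range(len(logits))]
    return [l + lr * g for l, g in zip(logits, grads)]

# Toy setup: candidate 2 is the paraphrase the evaluator scores highest.
random.seed(0)
logits = [0.0, 0.0, 0.0]
reward_of = lambda i: [0.1, 0.2, 0.9][i]
for _ in range(200):
    logits = reinforce_step(logits, reward_of)
probs = softmax(logits)
print(probs.index(max(probs)))  # the generator now prefers candidate 2
```

The key design point the abstract describes is that the reward is *learned* (by supervised learning or inverse RL) rather than hand-crafted; here it is a fixed table purely for illustration.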
Deep Active Learning for Dialogue Generation
We propose an online, end-to-end, neural generative conversational model for
open-domain dialogue. It is trained using a unique combination of offline
two-phase supervised learning and online human-in-the-loop active learning.
While most existing research proposes offline supervision or hand-crafted
reward functions for online reinforcement, we devise a novel interactive
learning mechanism based on Hamming-diverse beam search for response generation
and one-character user-feedback at each step. Experiments show that our model
inherently promotes the generation of semantically relevant and interesting
responses, and can be used to train agents with customized personas, moods and
conversational styles.
Comment: Accepted at 6th Joint Conference on Lexical and Computational
Semantics (*SEM) 2017 (previously titled "Online Sequence-to-Sequence Active
Learning for Open-Domain Dialogue Generation" on arXiv)
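The Hamming-diverse beam search the abstract mentions splits beams into groups, where each group is penalized for emitting tokens that earlier groups already chose at the same timestep, so the groups produce dissimilar candidate responses. The toy below illustrates that group-wise penalty with an invented fixed token scorer; it is not the paper's neural decoder.

```python
def hamming_diverse_beam_search(step_scores, vocab, groups=2, beam_per_group=1,
                                steps=3, penalty=1.0):
    """Toy diverse beam search: beams are split into groups, and each group
    penalizes tokens that earlier groups already emitted at the same step
    (a Hamming-distance diversity term), so groups explore different outputs."""
    group_beams = [[((), 0.0)] for _ in range(groups)]
    for t in range(steps):
        chosen_at_t = []  # tokens picked by earlier groups at this step
        for g in range(groups):
            candidates = []
            for seq, score in group_beams[g]:
                for tok in vocab:
                    s = score + step_scores(seq, tok)
                    s -= penalty * chosen_at_t.count(tok)  # diversity penalty
                    candidates.append((seq + (tok,), s))
            candidates.sort(key=lambda c: c[1], reverse=True)
            group_beams[g] = candidates[:beam_per_group]
            chosen_at_t.extend(seq[-1] for seq, _ in group_beams[g])
    return [b for beams in group_beams for b in beams]

# Hypothetical scorer: 'a' is always slightly better than 'b'; without the
# penalty both groups would collapse onto the same all-'a' response.
score = lambda seq, tok: {"a": 1.0, "b": 0.9, "c": 0.1}[tok]
beams = hamming_diverse_beam_search(score, vocab=["a", "b", "c"])
print([seq for seq, _ in beams])  # [('a', 'a', 'a'), ('b', 'b', 'b')]
```

In the paper's setting, the diverse candidates are then shown to a human, whose one-character feedback provides the online learning signal.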
Neural Generative Question Answering
This paper presents an end-to-end neural network model, named Neural
Generative Question Answering (GENQA), that can generate answers to simple
factoid questions, based on the facts in a knowledge-base. More specifically,
the model is built on the encoder-decoder framework for sequence-to-sequence
learning, augmented with the ability to query the knowledge-base, and is
trained on a corpus of question-answer pairs, with their associated triples in
the knowledge-base. Empirical study shows that the proposed model effectively
handles variations in questions and answers, and generates correct and
natural answers by referring to the facts in the knowledge-base. The experiment
on question answering demonstrates that the proposed model can outperform an
embedding-based QA model as well as a neural dialogue model trained on the same
data.
Comment: Accepted by IJCAI 201
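The core idea of the abstract, a decoder that can either produce common words or consult a knowledge base and copy an entity from a matching fact, can be caricatured in a few lines. This is only an illustrative sketch of the retrieve-and-copy behavior; the triples, matching rule, and answer template below are invented, and GENQA's actual model makes this decision with a learned neural gate rather than string matching.

```python
# Hypothetical miniature knowledge base of (subject, predicate, object) triples.
KB = [("Beijing", "capital_of", "China"),
      ("Paris", "capital_of", "France")]

def answer(question):
    """Answer a simple factoid question by 'enquiring' the KB: find a
    triple whose object entity and relation both appear in the question,
    then embed the copied subject entity into a natural-language answer."""
    q = question.lower()
    for subj, pred, obj in KB:
        if obj.lower() in q and "capital" in q:
            # common words come from the language-model side of the decoder;
            # the entity itself is copied from the retrieved fact
            return f"The capital of {obj} is {subj}."
    return "I don't know."

print(answer("What is the capital of China?"))  # The capital of China is Beijing.
```

The point the abstract makes is that this coupling of generation with KB lookup beats both a purely embedding-based QA model and a plain neural dialogue model trained on the same data.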
Test Generation Algorithm Based on SVM with Compressing Sample Space Methods
Test generation based on the SVM (support vector machine) derives test signals from the sample space of the output responses of the analog DUT (device under test). When the responses of normal circuits are similar to those of faulty circuits (i.e., the latter have only small parametric faults), the sample space is mixed and traditional algorithms have difficulty distinguishing the two groups, whereas the SVM still separates them effectively. The sample space also contains redundant data, because successive impulse-response samples may lie quite close together, and this redundancy incurs needless computational load. We therefore propose three different methods to compress the sample space: the equidistant compression method, the k-nearest-neighbors method, and the maximal-difference method. Numerical experiments show that the maximal-difference method maintains the precision of test generation.
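Treating the sampled impulse response as a 1-D sequence, the three compression methods named in the abstract can be sketched as simple filters. These are plausible readings rather than the paper's exact definitions; in particular, the k-nearest-neighbors variant is interpreted here as radius-based thinning (drop a sample if it is too close to one already kept).

```python
def equidistant(samples, k):
    """Equidistant compression: keep every k-th sample (uniform subsampling)."""
    return samples[::k]

def knn_compress(samples, radius):
    """k-nearest-neighbors style thinning: drop a sample if it lies within
    `radius` of a sample that has already been kept."""
    kept = []
    for s in samples:
        if all(abs(s - t) > radius for t in kept):
            kept.append(s)
    return kept

def max_difference(samples, k):
    """Maximal-difference compression: keep the k samples where the response
    changes fastest, i.e. with the largest |difference| from the previous
    sample, plus the first sample as an anchor."""
    diffs = [(abs(samples[i] - samples[i - 1]), i) for i in range(1, len(samples))]
    idx = sorted(i for _, i in sorted(diffs, reverse=True)[:k])
    return [samples[0]] + [samples[i] for i in idx]

# Toy impulse response: the informative transitions are around the peak.
resp = [0.0, 0.05, 0.1, 0.9, 0.92, 0.3, 0.1, 0.05]
print(equidistant(resp, 3))        # [0.0, 0.9, 0.1]
print(knn_compress(resp, 0.2))     # [0.0, 0.9, 0.3]
print(max_difference(resp, 2))     # [0.0, 0.9, 0.3]
```

The intuition behind the maximal-difference method, consistent with the abstract's finding, is that samples at steep transitions carry the most discriminative information, so dropping the near-duplicates costs little precision.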