
    Paraphrase Generation with Deep Reinforcement Learning

    Automatic generation of paraphrases from a given sentence is an important yet challenging task in natural language processing (NLP), and plays a key role in a number of applications such as question answering, search, and dialogue. In this paper, we present a deep reinforcement learning approach to paraphrase generation. Specifically, we propose a new framework for the task, which consists of a generator and an evaluator, both of which are learned from data. The generator, built as a sequence-to-sequence learning model, can produce paraphrases given a sentence. The evaluator, constructed as a deep matching model, can judge whether two sentences are paraphrases of each other. The generator is first trained by deep learning and then further fine-tuned by reinforcement learning in which the reward is given by the evaluator. For the learning of the evaluator, we propose two methods based on supervised learning and inverse reinforcement learning respectively, depending on the type of available training data. An empirical study shows that the learned evaluator can guide the generator to produce more accurate paraphrases. Experimental results demonstrate that the proposed models (the generators) outperform the state-of-the-art methods in paraphrase generation in both automatic and human evaluation. Comment: EMNLP 201
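    The generator–evaluator fine-tuning loop described above can be sketched as a policy-gradient update. This is a minimal toy, not the paper's models: the "generator" is a softmax policy over three fixed candidate paraphrases, the "evaluator" is a fixed reward vector, and all values are illustrative assumptions.

    ```python
    import numpy as np

    # Toy "generator": a softmax policy over 3 candidate paraphrases.
    logits = np.zeros(3)
    # Toy "evaluator": fixed rewards judging each candidate (assumed values).
    rewards = np.array([0.1, 0.9, 0.3])

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    # Expected policy-gradient ascent: raise the log-probability of each
    # candidate in proportion to how far its reward exceeds the baseline.
    lr = 0.5
    for _ in range(200):
        p = softmax(logits)
        baseline = p @ rewards              # expected reward under the policy
        logits += lr * p * (rewards - baseline)

    p = softmax(logits)
    best = int(np.argmax(p))                # index of the preferred paraphrase
    ```

    The update concentrates probability mass on the candidate the evaluator scores highest, which is the mechanism by which the evaluator's reward signal steers the generator during fine-tuning.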

    Cleveree: an artificially intelligent web service for Jacob voice chatbot

    Jacob is a voice chatbot that uses Wit.ai to extract the context of a question and answer based on that context. However, Jacob has no variation in its answers and cannot recognize a context that Wit.ai has not previously learned. This paper therefore proposes two artificial intelligence (AI) features built as a web service: paraphrasing of answers using the Stacked Residual LSTM model, and question summarization using cosine similarity with pre-trained Word2Vec and the TextRank algorithm. These two features are novel designs tailored to Jacob; the AI module is called Cleveree. Cleveree is evaluated using the technology acceptance model (TAM) method and interviews with Jacob admins. The results show that 79.17% of respondents strongly agree that both features are useful and 72.57% strongly agree that both features are easy to use.
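    The summarization side (cosine similarity over Word2Vec with TextRank) can be sketched as follows. This is a minimal sketch with assumed values: the tiny 2-d vectors stand in for pre-trained Word2Vec embeddings, and the ranking is a deliberately degenerate one-step TextRank where a node's score is its mean cosine similarity to the other nodes.

    ```python
    import numpy as np

    # Toy word vectors standing in for pre-trained Word2Vec (assumed values).
    vectors = {
        "how": np.array([1.0, 0.0]),
        "reset": np.array([0.0, 1.0]),
        "password": np.array([0.2, 0.9]),
        "hello": np.array([0.9, 0.1]),
    }

    def embed(sentence):
        # Sentence embedding = mean of its known word vectors.
        words = [w for w in sentence.lower().split() if w in vectors]
        return np.mean([vectors[w] for w in words], axis=0)

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    questions = ["how reset password", "reset password", "hello"]
    embs = [embed(q) for q in questions]

    # Degenerate one-step TextRank: score each question by its mean cosine
    # similarity to the others, and keep the most central one as the summary.
    scores = [np.mean([cosine(e, o) for j, o in enumerate(embs) if j != i])
              for i, e in enumerate(embs)]
    summary = questions[int(np.argmax(scores))]
    ```

    A full TextRank would iterate a PageRank-style update over the similarity graph; the one-step centrality above is only meant to show how cosine similarity over embeddings drives the sentence selection.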

    Sequence to Sequence Learning for Query Expansion

    Using sequence-to-sequence algorithms for query expansion has not yet been explored in the Information Retrieval or Question Answering literature. We address this gap with a custom Query Expansion engine trained and tested on open datasets. Starting from those datasets, we built a Query Expansion training set using sentence-embedding-based Keyword Extraction. We then assessed the ability of sequence-to-sequence neural networks to capture expansion relations in the word embeddings' space. Comment: 8 pages, 2 figures, AAAI-19 Student Abstract and Poster Progra
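    The training-set construction step (embedding-based keyword extraction producing query/keyword pairs for a seq2seq model) can be sketched as below. All vectors, words, and the `keywords` helper are illustrative assumptions, not the paper's actual pipeline.

    ```python
    import numpy as np

    # Toy embeddings standing in for a pre-trained encoder (assumed values).
    vectors = {
        "neural": np.array([0.9, 0.1, 0.0]),
        "network": np.array([0.8, 0.2, 0.1]),
        "training": np.array([0.1, 0.9, 0.2]),
        "the": np.array([0.0, 0.0, 1.0]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def keywords(sentence, k=2):
        """Embedding-based keyword extraction: score each word by its cosine
        similarity to the mean sentence embedding and keep the top-k."""
        words = [w for w in sentence.lower().split() if w in vectors]
        sent = np.mean([vectors[w] for w in words], axis=0)
        ranked = sorted(words, key=lambda w: cosine(vectors[w], sent), reverse=True)
        return ranked[:k]

    # One (query, target-keywords) pair for training a seq2seq expansion model:
    pair = ("the neural network", keywords("the neural network training"))
    ```

    At training time, the seq2seq model would read the raw query and learn to emit the extracted keywords, so that at inference it proposes expansion terms lying near the query in the embedding space.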