4 research outputs found

    Topic Break Detection in Interview Dialogues Using Sentence Embedding of Utterance and Speech Intention Based on Multitask Neural Networks

    Currently, task-oriented dialogue systems that perform specific tasks based on dialogue are widely used, and research and development of non-task-oriented dialogue systems are also actively conducted. One problem with these systems is that it is difficult to switch topics naturally. In this study, we focus on interview dialogue systems. In an interview dialogue, the dialogue system can take the initiative as the interviewer. The main task of an interview dialogue system is to obtain information about the interviewee via dialogue and to assist this individual in understanding his or her personality and strengths. To accomplish this task, the system needs to detect topic switches and topic breaks flexibly and appropriately. Given that topic switching tends to be more ambiguous in interview dialogues than in task-oriented dialogues, existing topic modeling methods that determine topic breaks based only on relationships and similarities between words are likely to fail. In this study, we propose a method for detecting topic breaks in dialogue to achieve flexible topic switching in interview dialogue systems. The proposed method is based on a multi-task learning neural network that uses embedded representations of sentences to understand the context of the text and utilizes the intention of an utterance as a feature. In multi-task learning, not only topic breaks but also the intention associated with the utterance and the speaker are targets of prediction. The results of our evaluation experiments show that using utterance intentions as features improves the accuracy of topic break estimation compared to the baseline model.
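    The multi-task setup described above can be sketched as a shared trunk over a sentence embedding concatenated with an utterance-intention feature, feeding three prediction heads (topic break, intention, speaker). This is a minimal illustrative sketch, not the paper's implementation: all dimensions, the one-hot intention encoding, and the random weights are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    EMB_DIM = 768        # assumed sentence-embedding size
    INTENT_DIM = 8       # assumed one-hot utterance-intention feature
    HIDDEN = 64
    N_INTENTS, N_SPEAKERS = 8, 2

    # Shared trunk over [sentence embedding ; intention feature]
    W_shared = rng.normal(0, 0.02, (EMB_DIM + INTENT_DIM, HIDDEN))
    # Task-specific heads for the three prediction targets
    W_break = rng.normal(0, 0.02, (HIDDEN, 2))          # topic break: yes/no
    W_intent = rng.normal(0, 0.02, (HIDDEN, N_INTENTS))  # utterance intention
    W_speaker = rng.normal(0, 0.02, (HIDDEN, N_SPEAKERS))  # speaker

    def forward(sent_emb, intent_feat):
        """Return (topic-break, intention, speaker) logits for one utterance."""
        x = np.concatenate([sent_emb, intent_feat])
        h = np.tanh(x @ W_shared)
        return h @ W_break, h @ W_intent, h @ W_speaker

    emb = rng.normal(size=EMB_DIM)          # stand-in sentence embedding
    intent = np.eye(INTENT_DIM)[3]          # dummy one-hot intention
    break_l, intent_l, speaker_l = forward(emb, intent)
    print(break_l.shape, intent_l.shape, speaker_l.shape)  # (2,) (8,) (2,)
    ```

    In training, the three heads would share the trunk's gradients, which is what lets the auxiliary intention and speaker tasks improve topic-break estimation.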

    Automatic evaluation of open-domain dialogue systems using automatically augmented references

    Degree type: Master's thesis, University of Tokyo (東京大学)

    Dialog Response Generation Using Adversarially Learned Latent Bag-of-Words

    Dialog response generation is the task of generating a response utterance given a query utterance. Apart from generating relevant and coherent responses, one would like the dialog generation model to generate diverse and informative sentences. In this work, we propose and explore a novel multi-stage dialog response generation approach. In the first stage of our proposed multi-stage approach, we construct a variational latent space on the bag-of-words representations of the query and response utterances. In the second stage, the transformation from query latent code to response latent code is learned using an adversarial process. The final stage involves fine-tuning a pretrained Text-to-Text Transfer Transformer (T5) model (Raffel et al., 2019) using a novel training regimen to generate the response utterances by conditioning on the query utterance and the response words learned in the previous stage. We evaluate our proposed approach on two popular dialog datasets. Our proposed approach outperforms the baseline transformer model on multiple quantitative metrics, including an overlap metric (BLEU), diversity metrics (distinct-1 and distinct-2), and a fluency metric (perplexity).
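    The three-stage pipeline can be sketched end to end with stand-in components: a linear "encoder" in place of the trained bag-of-words VAE, a linear map in place of the adversarially learned query-to-response transformation, and a top-k word selection in place of conditioning T5. Every weight, dimension, and function here is a hypothetical placeholder, not the paper's architecture.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    VOCAB, LATENT = 1000, 32

    # Stage 1 (sketch): encode a bag-of-words count vector into a latent
    # code; a random linear map stands in for the trained VAE encoder.
    W_enc = rng.normal(0, 0.05, (VOCAB, LATENT))
    def encode_bow(bow):
        return np.tanh(bow @ W_enc)

    # Stage 2 (sketch): map the query latent to a predicted response
    # latent; in the paper this mapping is learned adversarially.
    W_gen = rng.normal(0, 0.05, (LATENT, LATENT))
    def query_to_response_latent(z_query):
        return np.tanh(z_query @ W_gen)

    # Stage 3 (sketch): decode the response latent to word logits and keep
    # the top-k "response words" that would condition the T5 generator.
    W_dec = rng.normal(0, 0.05, (LATENT, VOCAB))
    def top_response_words(z_resp, k=5):
        logits = z_resp @ W_dec
        return np.argsort(logits)[-k:][::-1]

    query_bow = np.zeros(VOCAB)
    query_bow[[10, 42, 77]] = 1.0           # dummy query word counts
    z_q = encode_bow(query_bow)
    words = top_response_words(query_to_response_latent(z_q))
    print(words.shape)  # (5,)
    ```

    The design point the sketch illustrates is the decoupling: the latent bag-of-words stage plans *what* words the response should contain, while the conditioned generator decides *how* to phrase them.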