Skip Act Vectors: integrating dialogue context into sentence embeddings

Abstract

This paper compares several approaches for computing dialogue turn embeddings and evaluates their representation capacities in two dialogue-act-related tasks within a hierarchical Recurrent Neural Network architecture. These turn embeddings can be produced explicitly, or implicitly by extracting the hidden layer of a model trained for a given task. We introduce skip-act, a new dialogue turn embedding approach, in which embeddings are extracted as the common representation layer of a multi-task model that predicts both the previous and the next dialogue act. The models used to learn turn embeddings are trained on a large dialogue corpus with light supervision, while the models that predict dialogue acts from turn embeddings are trained on a sub-corpus with gold dialogue act annotations. We compare their performance for predicting the current dialogue act as well as their ability to predict the next dialogue act, a more challenging task with several applicative impacts. With a better context representation, the skip-act turn embeddings are shown to outperform previous approaches both in overall F-measure and in macro-F1, with consistent improvements across the various dialogue acts.
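The skip-act idea described above can be illustrated with a minimal PyTorch sketch: a shared turn encoder whose hidden state serves as the turn embedding, feeding two classification heads that predict the previous and the next dialogue act. All class names, layer sizes, and the choice of a GRU encoder here are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class SkipActModel(nn.Module):
    """Multi-task sketch (assumed architecture, not the paper's exact one):
    a shared turn encoder produces the 'skip-act' embedding, which two
    heads use to predict the previous and the next dialogue act."""

    def __init__(self, vocab_size, embed_dim, hidden_dim, num_acts):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.turn_encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # The two heads share the turn representation (multi-task objective).
        self.prev_act_head = nn.Linear(hidden_dim, num_acts)
        self.next_act_head = nn.Linear(hidden_dim, num_acts)

    def forward(self, token_ids):
        # token_ids: (batch, turn_length) word indices for one dialogue turn
        _, h = self.turn_encoder(self.embed(token_ids))
        turn_embedding = h.squeeze(0)  # (batch, hidden_dim), the skip-act vector
        return (self.prev_act_head(turn_embedding),
                self.next_act_head(turn_embedding),
                turn_embedding)

model = SkipActModel(vocab_size=1000, embed_dim=32, hidden_dim=64, num_acts=10)
prev_logits, next_logits, turn_emb = model(torch.randint(0, 1000, (4, 12)))
print(turn_emb.shape)
```

After training with a cross-entropy loss on both heads, `turn_embedding` would be extracted and reused as the input turn representation for the downstream dialogue act classifiers described in the abstract.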
