Dialogue Act Recognition via CRF-Attentive Structured Network
Dialogue Act Recognition (DAR) is a challenging problem in dialogue
interpretation, which aims to attach semantic labels to utterances and
characterize the speaker's intention. Existing approaches formulate DAR as
anything from multi-class classification to structured prediction, but these
formulations either rely on handcrafted features or fail to capture attentive
contextual structural dependencies. In this paper, we consider the problem of
DAR from the viewpoint of extending richer Conditional Random Field (CRF)
structural dependencies without abandoning end-to-end training. We incorporate
hierarchical semantic inference with a memory mechanism into utterance
modeling. We then extend the structured attention network to a linear-chain
conditional random field layer which takes into account both contextual
utterances and corresponding dialogue acts. Extensive experiments on two major
benchmark datasets, Switchboard Dialogue Act (SWDA) and Meeting Recorder
Dialogue Act (MRDA), show that our method achieves better performance than
other state-of-the-art solutions to the problem. Notably, our method comes
within 2% of the human annotator's performance on SWDA.
Comment: 10 pages, 4 figures
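As a rough illustration of the decoding step such a linear-chain CRF layer performs, the sketch below runs Viterbi decoding over per-utterance emission scores and tag-transition scores. The tag set, scores, and function name are illustrative assumptions, not details from the paper:

```python
def viterbi_decode(emissions, transitions):
    """Most likely tag sequence for a linear-chain CRF.

    emissions:   T x K list of per-utterance scores for each of K tags
    transitions: K x K list of scores for moving from tag j to tag k
    (all values here are hypothetical, for illustration only)
    """
    T, K = len(emissions), len(emissions[0])
    # score[k] = best score of any path ending in tag k at the current step
    score = list(emissions[0])
    back = []  # backpointers for path recovery
    for t in range(1, T):
        new_score, ptr = [], []
        for k in range(K):
            best_prev = max(range(K), key=lambda j: score[j] + transitions[j][k])
            new_score.append(score[best_prev] + transitions[best_prev][k]
                             + emissions[t][k])
            ptr.append(best_prev)
        score = new_score
        back.append(ptr)
    # backtrack from the best final tag
    last = max(range(K), key=lambda k: score[k])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

With zero transition scores the decoder just follows the per-utterance argmax; with "sticky" transitions that reward staying in the same tag, it smooths over a noisy middle utterance, which is the kind of label-dependency modeling the CRF layer contributes.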
Conversational Analysis using Utterance-level Attention-based Bidirectional Recurrent Neural Networks
Recent approaches for dialogue act recognition have shown that context from
preceding utterances is important for classifying the subsequent one:
performance improves markedly when that context is taken into account.
We propose an utterance-level attention-based bidirectional recurrent neural
network (Utt-Att-BiRNN) model to analyze the importance of preceding utterances
to classify the current one. In our setup, the BiRNN is given the input set of
current and preceding utterances. On this corpus, our model outperforms
previous models that use only preceding utterances as context. A further
contribution of the article is to quantify how much information each utterance
carries for classifying the subsequent one, and to show that context-based
learning not only improves performance but also yields higher confidence
in the classification. We use character- and word-level features to represent
the utterances. The results are presented for character and word feature
representations and as an ensemble model of both representations. We found that
when classifying short utterances, the closest preceding utterances contribute
to a higher degree.
Comment: Proceedings of INTERSPEECH 201
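The utterance-level attention the abstract describes can be sketched as dot-product attention of the current utterance over its predecessors. The fixed-size utterance vectors, dimensions, and function name below are illustrative assumptions; the paper's actual model learns these representations with a BiRNN:

```python
import math

def attention_context(current, preceding):
    """Attention weights of the current utterance over preceding ones.

    current:   vector for the current utterance (hypothetical, fixed-size)
    preceding: list of vectors for preceding utterances
    Returns (weights, context): softmax-normalized attention weights and the
    weighted sum of the preceding vectors.
    """
    # dot-product score between the current utterance and each predecessor
    scores = [sum(c * p for c, p in zip(current, prev)) for prev in preceding]
    # numerically stable softmax over the scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # context vector: attention-weighted sum of preceding utterance vectors
    context = [sum(w * prev[i] for w, prev in zip(weights, preceding))
               for i in range(len(current))]
    return weights, context
```

Inspecting the returned weights is what allows the kind of analysis the abstract reports, e.g. observing that the closest preceding utterances receive the largest weights when the current utterance is short.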