Local Contextual Attention with Hierarchical Structure for Dialogue Act Recognition
Dialogue act recognition is a fundamental task for an intelligent dialogue
system. Previous work models the whole dialog to predict dialog acts, which may
introduce noise from unrelated sentences. In this work, we design a
hierarchical model based on self-attention to capture intra-sentence and
inter-sentence information. We revise the attention distribution to focus on
the local and contextual semantic information by incorporating the relative
position information between utterances. Based on the finding that the length of
the dialog affects performance, we introduce a new dialog segmentation
mechanism to analyze the effect of dialog length and context padding length
under online and offline settings. The experiment shows that our method
achieves promising performance on two datasets: Switchboard Dialogue Act and
DailyDialog, with accuracies of 80.34% and 85.81% respectively.
Visualization of the attention weights shows that our method can learn the
context dependency between utterances explicitly.
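The abstract's core idea, revising the attention distribution so each utterance attends only to a local window of neighbours, with a bias indexed by relative position, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name, the window radius, and the bias vector are assumptions for the sketch.

```python
# Hypothetical sketch of local contextual attention over utterance vectors:
# each utterance attends only to neighbours within a window, and a learnable
# bias indexed by relative position is added to the attention scores.
import numpy as np

def local_contextual_attention(h, window=2, rel_bias=None):
    """h: (n, d) utterance representations; window: context radius in utterances."""
    n, d = h.shape
    scores = h @ h.T / np.sqrt(d)              # scaled dot-product scores
    if rel_bias is not None:                   # bias indexed by relative position
        idx = np.arange(n)
        rel = np.clip(idx[None, :] - idx[:, None] + window, 0, 2 * window)
        scores += rel_bias[rel]
    # mask out utterances beyond the local context window
    mask = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]) > window
    scores[mask] = -1e9
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)          # row-wise softmax
    return w @ h                               # context-aware utterance vectors

rng = np.random.default_rng(0)
h = rng.standard_normal((6, 8))                # 6 utterances, dimension 8
out = local_contextual_attention(h, window=2, rel_bias=np.zeros(5))
print(out.shape)  # (6, 8)
```

In the paper's hierarchical setting, an intra-sentence encoder would first produce the utterance vectors `h`; the windowed mask here plays the role of the dialog segmentation and context padding discussed in the abstract.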