4 research outputs found
Energy-based Self-attentive Learning of Abstractive Communities for Spoken Language Understanding
Abstractive community detection is an important spoken language understanding
task, whose goal is to group utterances in a conversation according to whether
they can be jointly summarized by a common abstractive sentence. This paper
provides a novel approach to this task. We first introduce a neural contextual
utterance encoder featuring three types of self-attention mechanisms. We then
train it using the Siamese and triplet energy-based meta-architectures.
Experiments on the AMI corpus show that our system outperforms multiple
energy-based and non-energy-based baselines from the state of the art. Code and
data are publicly available.
Comment: Update baseline
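The triplet energy-based training mentioned above can be illustrated with a minimal sketch: given embeddings of an anchor utterance, a "positive" utterance from the same abstractive community, and a "negative" one from a different community, a margin loss pulls same-community pairs together and pushes different-community pairs apart. The Euclidean distance, the margin value, and the toy embeddings below are illustrative assumptions, not the paper's exact energy function.

```python
import math

def euclidean(u, v):
    """Euclidean distance between two embedding vectors (as plain lists)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_energy_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin loss: zero when the positive is closer to the anchor
    than the negative by at least `margin`, positive otherwise."""
    d_pos = euclidean(anchor, positive)  # same-community distance
    d_neg = euclidean(anchor, negative)  # different-community distance
    return max(0.0, margin + d_pos - d_neg)

# Toy 3-d "utterance embeddings" (hypothetical values for illustration).
anchor   = [1.0, 0.0, 0.0]
positive = [0.9, 0.1, 0.0]  # jointly summarizable with the anchor
negative = [0.0, 1.0, 0.0]  # belongs to another community

loss = triplet_energy_loss(anchor, positive, negative)
```

Here the triplet already satisfies the margin, so the loss is zero; swapping the positive and negative yields a positive loss, which is the gradient signal that would reshape the encoder's embedding space.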
Topic-Oriented Spoken Dialogue Summarization for Customer Service with Saliency-Aware Topic Modeling
In a customer service system, dialogue summarization can boost service
efficiency by automatically creating summaries for long spoken dialogues in
which customers and agents try to address issues about specific topics. In this
work, we focus on topic-oriented dialogue summarization, which generates highly
abstractive summaries that preserve the main ideas from dialogues. In spoken
dialogues, abundant dialogue noise and common semantics could obscure the
underlying informative content, making the general topic modeling approaches
difficult to apply. In addition, for customer service, role-specific
information matters and is an indispensable part of a summary. To effectively
perform topic modeling on dialogues and capture multi-role information, in this
work we propose a novel topic-augmented two-stage dialogue summarizer (TDS)
jointly with a saliency-aware neural topic model (SATM) for topic-oriented
summarization of customer service dialogues. Comprehensive studies on a
real-world Chinese customer service dataset demonstrate the superiority of our
method over several strong baselines.
Comment: Accepted by AAAI 2021, 9 pages
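The saliency-aware, role-specific topic modeling described above can be sketched in miniature: rank each role's words by frequency weighted by a per-word saliency score, so that dialogue noise ("ok", "thanks") is down-weighted and role-specific informative words surface. The `saliency` dictionary below is a hand-invented stand-in for the scores a model like SATM would learn; the dialogue and all values are illustrative assumptions.

```python
from collections import Counter

def role_topic_words(dialogue, saliency, top_k=2):
    """Return the top_k saliency-weighted words per speaker role.

    dialogue: list of (role, utterance) pairs.
    saliency: word -> score; unknown words get 0.0 (treated as noise).
    """
    by_role = {}
    for role, utterance in dialogue:
        by_role.setdefault(role, Counter()).update(utterance.lower().split())
    ranked = {}
    for role, counts in by_role.items():
        scored = {w: c * saliency.get(w, 0.0) for w, c in counts.items()}
        ranked[role] = sorted(scored, key=scored.get, reverse=True)[:top_k]
    return ranked

# Hypothetical customer-service exchange and saliency scores.
dialogue = [
    ("customer", "my refund is missing"),
    ("agent", "ok I will check the refund status"),
    ("customer", "thanks"),
]
saliency = {"refund": 1.0, "missing": 0.8, "status": 0.6, "check": 0.4}

topics = role_topic_words(dialogue, saliency)
```

This captures the two ideas the abstract highlights: noise suppression via saliency weighting, and keeping the customer's and agent's informative content separate so a downstream summarizer can use both.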