Improving Matching Models with Hierarchical Contextualized Representations for Multi-turn Response Selection
In this paper, we study context-response matching with pre-trained
contextualized representations for multi-turn response selection in
retrieval-based chatbots. Existing models, such as CoVe and ELMo, are trained
with limited context (often a single sentence or paragraph) and may not work
well on multi-turn conversations, due to their hierarchical structure, informal
language, and domain-specific words. To address these challenges, we propose
pre-training hierarchical contextualized representations, including contextual
word-level and sentence-level representations, by learning a dialogue
generation model from large-scale conversations with a hierarchical
encoder-decoder architecture. Then the two levels of representations are
blended into the input and output layer of a matching model respectively.
Experimental results on two benchmark conversation datasets indicate that the
proposed hierarchical contextualized representations bring significant and
consistent improvements to existing matching models for response selection.

Comment: 6 pages, 1 figure
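The two-level scheme the abstract describes can be sketched as follows. This is an illustrative toy in NumPy, not the paper's implementation: the cumulative-average encoders stand in for the recurrent word-level and sentence-level encoders of a hierarchical encoder-decoder, and all sizes and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # illustrative embedding size (assumption)

def contextual_encode(seq):
    # Stand-in for a contextual encoder (e.g. a GRU): each position's
    # representation is the causal running mean of its left context.
    csum = np.cumsum(seq, axis=0)
    counts = np.arange(1, len(seq) + 1)[:, None]
    return csum / counts

# A toy 3-turn dialogue context; each turn is a matrix of word embeddings.
context = [rng.normal(size=(n, d)) for n in (4, 6, 5)]

# Word level: contextualize words within each utterance.
word_reps = [contextual_encode(u) for u in context]

# Sentence level: take each utterance's final state, then contextualize
# those utterance vectors across the dialogue turns.
sent_vecs = np.stack([w[-1] for w in word_reps])
sent_reps = contextual_encode(sent_vecs)

# Blending: word-level representations augment the matching model's input
# layer (concatenated with the static embeddings); sentence-level
# representations would feed its output layer.
input_features = [np.concatenate([u, w], axis=-1)
                  for u, w in zip(context, word_reps)]
```

The key design point mirrored here is the hierarchy: word representations see only intra-utterance context, while sentence representations see cross-turn context, and the two are injected at different layers of the matching model.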