Incorporating conversational context and knowledge into dialogue generation
models has been essential for improving the quality of the generated responses.
The context, comprising utterances from previous dialogue exchanges, is used as
a source of content for response generation and as a means of selecting
external knowledge. However, to avoid introducing irrelevant content, it is key
to enable fine-grained scoring of context and knowledge. In this paper, we
present a novel approach to context and knowledge weighting as an integral part
of model training. We guide model training through a Contextual Knowledge
Learning (CKL) process that involves separate Latent Vectors for context and
knowledge. The CKL Latent Vectors capture the relationship between context,
knowledge, and responses through weak supervision and enable differential
weighting of context utterances and knowledge sentences during the training
process. Experiments on two standard datasets, together with human evaluation,
demonstrate that CKL yields a significant improvement over six strong baseline
models and remains robust to
reduced sizes of training sets.
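
To make the weighting idea concrete, the sketch below shows one way learnable latent vectors could assign differential weights to context utterances and knowledge sentences. It is an illustrative assumption only: the class name, dimensions, and scoring function are hypothetical and are not taken from the paper, and the actual CKL training procedure (including its weak supervision) is not reproduced here.

```python
import torch
import torch.nn as nn


class LatentWeighter(nn.Module):
    """Hypothetical sketch of latent-vector-based weighting of context
    utterances and knowledge sentences; not the authors' implementation."""

    def __init__(self, d_model: int):
        super().__init__()
        # One learnable latent vector each for scoring context and knowledge.
        self.context_latent = nn.Parameter(torch.randn(d_model))
        self.knowledge_latent = nn.Parameter(torch.randn(d_model))

    def forward(self, context_emb: torch.Tensor, knowledge_emb: torch.Tensor):
        # context_emb: (num_utterances, d_model)
        # knowledge_emb: (num_sentences, d_model)
        # Dot-product scores against the latent vectors, softmax-normalized
        # so each utterance / knowledge sentence gets a differential weight.
        ctx_w = torch.softmax(context_emb @ self.context_latent, dim=0)
        kn_w = torch.softmax(knowledge_emb @ self.knowledge_latent, dim=0)
        # Weighted representations that a response decoder could condition on.
        return ctx_w.unsqueeze(-1) * context_emb, kn_w.unsqueeze(-1) * knowledge_emb
```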