In order to build self-consistent personalized dialogue agents, previous
research has mostly focused on textual personas that convey personal facts or
personalities. However, to fully describe the multi-faceted nature of a
persona, the image modality can help better reveal the speaker's personal
characteristics and experiences in episodic memory (Rubin et al., 2003;
Conway, 2009). In this
work, we extend persona-based dialogue to the multimodal domain and make two
main contributions. First, we present the first multimodal persona-based
dialogue dataset named MPCHAT, which extends persona with both text and images
to capture episodic memories. Second, we empirically show that incorporating
multimodal persona leads to statistically significant performance improvements
on three proposed multimodal persona-grounded dialogue tasks: next response
prediction, grounding persona prediction, and speaker identification. Thus,
our work highlights that multimodal
persona is crucial for improving multimodal dialogue comprehension, and our
MPCHAT serves as a high-quality resource for this research.