The explosive growth of rumors containing both text and images on social media platforms
has drawn great attention. Existing studies have made significant contributions
to cross-modal information interaction and fusion, but they fail to fully
explore the hierarchical and complex semantic correlations across content from different
modalities, which severely limits their performance in detecting multi-modal rumors.
In this work, we propose a novel knowledge-enhanced hierarchical information
correlation learning approach (KhiCL) for multi-modal rumor detection that
jointly models the basic semantic correlation and the high-order
knowledge-enhanced entity correlation. Specifically, KhiCL exploits a cross-modal
joint dictionary to transfer heterogeneous unimodal features into a
common feature space and captures the basic cross-modal semantic consistency
and inconsistency with a cross-modal fusion layer.
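As a minimal illustration of what this shared projection and fusion might look like (the exact formulation is not specified here, so the attention-over-dictionary design and the module names below are assumptions), consider the following PyTorch sketch:

```python
# Illustrative sketch only: layer shapes, the attention-over-dictionary design,
# and the concatenation-based fusion are assumptions, not the paper's exact method.
import torch
import torch.nn as nn


class JointDictionary(nn.Module):
    """Map heterogeneous unimodal features onto a shared set of dictionary atoms."""

    def __init__(self, dim: int, num_atoms: int = 64):
        super().__init__()
        self.atoms = nn.Parameter(torch.randn(num_atoms, dim))  # shared across modalities

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim); attention weights over the shared atoms
        weights = torch.softmax(x @ self.atoms.t(), dim=-1)  # (batch, num_atoms)
        return weights @ self.atoms                           # projection into the common space


class CrossModalFusion(nn.Module):
    """Fuse text and image features after projection into the common space."""

    def __init__(self, dim: int):
        super().__init__()
        self.dictionary = JointDictionary(dim)
        self.fuse = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, text_feat: torch.Tensor, img_feat: torch.Tensor) -> torch.Tensor:
        t = self.dictionary(text_feat)
        v = self.dictionary(img_feat)
        # Fused representation from which semantic (in)consistency can be judged
        return self.fuse(torch.cat([t, v], dim=-1))
```

In this sketch both modalities attend over the same learnable dictionary atoms, so their projections land in one common space before fusion.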
Moreover, considering that the
description of multi-modal content is narrated around entities, KhiCL extracts
visual and textual entities from images and text, and designs a knowledge
relevance reasoning strategy that finds the shortest semantically relevant path
between each pair of entities in an external knowledge graph and absorbs all the
complementary contextual knowledge of the other entities connected along this path to
learn knowledge-enhanced entity representations.
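A minimal sketch of this knowledge relevance reasoning step, assuming the external knowledge graph is available as a networkx graph whose nodes carry precomputed embedding vectors (the mean aggregation over path entities is an illustrative choice, not necessarily the exact operator used):

```python
# Hypothetical helper: finds the shortest path between two entities in the
# knowledge graph and absorbs the context of every entity along that path.
import networkx as nx
import numpy as np


def knowledge_enhanced_pair(kg: nx.Graph, head: str, tail: str) -> np.ndarray | None:
    """Return a context vector built from the shortest path between two entities."""
    try:
        path = nx.shortest_path(kg, source=head, target=tail)
    except (nx.NetworkXNoPath, nx.NodeNotFound):
        return None  # no semantic relevance found in the knowledge graph
    # Absorb complementary contextual knowledge from every connected entity on the path.
    context = [kg.nodes[node]["embedding"] for node in path]
    return np.mean(context, axis=0)
```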
Furthermore, KhiCL utilizes
a signed attention mechanism to model the knowledge-enhanced entity consistency
and inconsistency of intra-modality and inter-modality entity pairs by
measuring their corresponding semantic relevance distances.
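A hedged sketch of such a signed attention, assuming pairwise semantic relevance distances are given as a matrix; the tanh-based signing and the normalization below are illustrative assumptions rather than the exact formulation:

```python
# Illustrative signed attention: small distances yield positive (consistent) weights,
# large distances yield negative (inconsistent) weights.
import torch
import torch.nn.functional as F


def signed_attention(entities: torch.Tensor, distances: torch.Tensor) -> torch.Tensor:
    """entities: (n, dim) knowledge-enhanced entity features;
    distances: (n, n) pairwise semantic relevance distances."""
    scores = 1.0 - 2.0 * distances / (distances.max() + 1e-8)  # small distance -> positive score
    signed = torch.tanh(scores)                                 # keep the sign, bound the magnitude
    weights = signed * F.softmax(signed.abs(), dim=-1)          # signed, row-normalized weights
    return weights @ entities                                   # aggregated pairwise evidence
```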
Extensive experiments demonstrate the effectiveness of the proposed method.