Rumor spreaders increasingly use multimedia content to attract the
attention and trust of news consumers. Although many rumor detection
models exploit multi-modal data, they seldom consider the inconsistent
semantics between images and text, and rarely spot the inconsistency
between post content and background knowledge. In addition, they commonly
assume that all modalities are complete and are thus incapable of handling
missing modalities in real-life scenarios.
Motivated by the intuition that rumors on social media are more likely to
carry inconsistent semantics, we propose a novel Knowledge-guided
Dual-consistency Network (KDCN) to detect rumors with multimedia content.
It uses two consistency detection subnetworks to capture inconsistency at
the cross-modal level and the content-knowledge level simultaneously.
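As a rough illustration of this dual-consistency idea, the following sketch (in PyTorch, with hypothetical module and variable names; it is not the paper's actual architecture) scores text-image and content-knowledge disagreement with two separate subnetworks and feeds both scores to a rumor classifier.

```python
import torch
import torch.nn as nn

class DualConsistencyScorer(nn.Module):
    # One head scores cross-modal (text vs. image) inconsistency; the other
    # scores content-knowledge inconsistency. Both scores feed the classifier.
    def __init__(self, dim: int = 256):
        super().__init__()
        self.cross_modal = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.content_knowledge = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.classifier = nn.Linear(2, 2)  # rumor vs. non-rumor logits

    def forward(self, text_emb, image_emb, knowledge_emb):
        # Element-wise differences serve as simple inconsistency features.
        s_cm = self.cross_modal(torch.abs(text_emb - image_emb))
        s_ck = self.content_knowledge(torch.abs(text_emb - knowledge_emb))
        return self.classifier(torch.cat([s_cm, s_ck], dim=-1))

# Toy usage with random 256-d embeddings for a batch of 4 posts.
model = DualConsistencyScorer()
t, v, k = (torch.randn(4, 256) for _ in range(3))
logits = model(t, v, k)  # shape (4, 2)
```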
It also enables robust multi-modal representation learning under different
missing-visual-modality conditions, using a special token to distinguish
posts that carry an image from posts that do not.
Extensive experiments on three public real-world multimedia datasets
demonstrate that our framework outperforms state-of-the-art baselines
under both complete and incomplete modality conditions. Our code is
available at https://github.com/MengzSun/KDCN.