Despite recent progress in improving the performance of misinformation
detection systems, classifying misinformation in an unseen domain remains an
elusive challenge. To address this issue, a common approach is to introduce a
domain critic and encourage domain-invariant input features. However, early
misinformation often demonstrates both conditional and label shifts against
existing misinformation data (e.g., class imbalance in COVID-19 datasets),
rendering such methods less effective for detecting early misinformation. In
this paper, we propose contrastive adaptation network for early misinformation
detection (CANMD). Specifically, we leverage pseudo labeling to generate
high-confidence target examples for joint training with source data. We
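The pseudo-labeling step can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: it assumes the source-trained model outputs class probabilities for target examples, and the confidence threshold value is an assumption chosen for illustration.

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Keep target examples whose top predicted class probability
    exceeds a confidence threshold (threshold value is illustrative)."""
    probs = np.asarray(probs, dtype=float)
    confidence = probs.max(axis=1)   # top class probability per example
    labels = probs.argmax(axis=1)    # pseudo label = most probable class
    mask = confidence >= threshold
    return np.nonzero(mask)[0], labels[mask]

# Examples 0 and 2 are confident enough; example 1 is discarded.
idx, pseudo = select_pseudo_labels([[0.95, 0.05], [0.60, 0.40], [0.08, 0.92]])
```

The selected `(idx, pseudo)` pairs would then join the labeled source data for the adaptation stage described above.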
additionally design a label correction component to estimate and correct the
label shifts (i.e., class priors) between the source and target domains.
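A standard way to correct for label shift, sketched below, is to reweight the source-trained posteriors by the ratio of target to source class priors and renormalize; the paper's actual estimator may differ, and the prior values here are hypothetical.

```python
import numpy as np

def correct_for_label_shift(probs, source_priors, target_priors):
    """Reweight posteriors by the target/source class-prior ratio,
    then renormalize each row (a standard label-shift correction)."""
    probs = np.asarray(probs, dtype=float)
    weights = np.asarray(target_priors, dtype=float) / np.asarray(source_priors, dtype=float)
    corrected = probs * weights
    return corrected / corrected.sum(axis=1, keepdims=True)

# With balanced source priors and an imbalanced target (e.g. COVID-19 data),
# an uncertain 50/50 prediction shifts toward the majority target class.
corrected = correct_for_label_shift([[0.5, 0.5]], [0.5, 0.5], [0.8, 0.2])
```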
Moreover, a contrastive adaptation loss is integrated into the objective function
to reduce the intra-class discrepancy and enlarge the inter-class discrepancy.
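The intra-/inter-class objective can be illustrated with a simplified margin-based contrastive loss over cross-domain feature pairs; CANMD's actual loss is defined in the paper, and the function and margin below are illustrative assumptions.

```python
import numpy as np

def contrastive_adaptation_loss(feats_src, labels_src, feats_tgt, labels_tgt, margin=1.0):
    """Pull same-class cross-domain features together and push
    different-class features apart (simplified sketch, not CANMD's exact loss)."""
    loss, count = 0.0, 0
    for f_s, y_s in zip(feats_src, labels_src):
        for f_t, y_t in zip(feats_tgt, labels_tgt):
            d = np.linalg.norm(np.asarray(f_s, dtype=float) - np.asarray(f_t, dtype=float))
            if y_s == y_t:
                loss += d ** 2                     # intra-class: shrink discrepancy
            else:
                loss += max(0.0, margin - d) ** 2  # inter-class: enforce a margin
            count += 1
    return loss / count
```

Minimizing such a loss jointly over source and high-confidence pseudo-labeled target examples encourages class-conditionally aligned features across the two domains.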
As such, the adapted model learns corrected class priors and an invariant
conditional distribution across both domains for improved estimation of the
target data distribution. To demonstrate the effectiveness of the proposed
CANMD, we study the case of COVID-19 early misinformation detection and perform
extensive experiments using multiple real-world datasets. The results suggest
that CANMD can effectively adapt misinformation detection systems to the unseen
COVID-19 target domain with significant improvements compared to the
state-of-the-art baselines.

Comment: Accepted to CIKM 202