Fake news detection aims to identify fake news that spreads widely on social media
platforms and can negatively influence the public and the government. Many
approaches have been developed to exploit relevant information from news
images, text, or videos. However, these methods may suffer from the following
limitations: (1) they ignore the inherent emotional information of the news, which
could be beneficial since it conveys the subjective intentions of the authors;
(2) they pay little attention to the relation (similarity) between the title and
the body text of news articles, which often use an irrelevant title to
attract readers' attention. To this end, we propose a novel Title-Text
similarity and emotion-aware Fake news detection (TieFake) method by jointly
modeling the multi-modal context information and the author sentiment in a
unified framework. Specifically, we respectively employ BERT and ResNeSt to
learn the representations for text and images, and utilize publisher emotion
extractor to capture the author's subjective emotion in the news content. We
also propose a scaled dot-product attention mechanism to capture the similarity
between title features and textual features. Experiments are conducted on two
publicly available multi-modal datasets, and the results demonstrate that our
proposed method can significantly improve the performance of fake news
detection. Our code is available at https://github.com/UESTC-GQJ/TieFake.

Comment: To appear at IJCNN 202
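The title-text similarity module described above can be sketched as standard scaled dot-product attention, with title token features as queries and body-text token features as keys and values. This is a minimal illustration under assumed shapes, not the authors' exact implementation; the feature dimension and token counts below are hypothetical.

```python
import numpy as np

def title_text_attention(title_feat, text_feat):
    """Scaled dot-product attention: title features attend over text features.

    Sketch under assumptions (not the paper's released code):
    title_feat has shape (n_title, d) and serves as the queries;
    text_feat has shape (n_text, d) and serves as keys and values.
    """
    d = title_feat.shape[-1]
    # similarity scores between every title token and every text token
    scores = title_feat @ text_feat.T / np.sqrt(d)    # (n_title, n_text)
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over text tokens
    # each title token becomes a similarity-weighted mix of text features
    return weights @ text_feat                        # (n_title, d)

# toy usage with random features (dimensions are illustrative only)
title = np.random.randn(4, 16)    # 4 title tokens, 16-dim features
text = np.random.randn(32, 16)    # 32 body-text tokens, 16-dim features
out = title_text_attention(title, text)
print(out.shape)  # (4, 16)
```

In the full model, such attended features would be fused with the image and emotion representations before classification; the scaling by the square root of the feature dimension keeps the softmax from saturating at high dimensionality.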