The internet gives people around the world an open platform to express their views and
share their stories. While this is very valuable, it has also made fake news one of our
society's most pressing problems. Manual fact-checking is time-consuming, which
makes it challenging to disprove misleading claims before they cause significant
harm. This has driven interest in automatic fact or claim verification. Several
existing datasets aim to support the development of automated fact-checking
techniques; however, most of them are text-based.
Multi-modal fact verification has received relatively scant attention. In this
paper, we present FACTIFY 2, a multi-modal fact-checking dataset that improves
on FACTIFY 1 by using new data sources and adding satire articles. FACTIFY 2
contains 50,000 new data instances. As in FACTIFY 1, the data fall into three
broad categories - support, no-evidence, and refute - with sub-categories based
on the entailment of visual and textual data. We also provide a BERT and Vision
Transformer based baseline, which achieves a 65% F1 score on the test set. The
baseline code and the dataset will be made available at
https://github.com/surya1701/Factify-2.0.
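
The baseline pairs BERT with a Vision Transformer. As a rough, illustrative sketch of one way such a multimodal entailment classifier can be assembled (not the authors' released implementation), the snippet below encodes the text with BERT, the image with ViT, and classifies the concatenated embeddings; the specific checkpoints, the five-way label count, and the late-fusion design are assumptions.

```python
# Minimal late-fusion sketch: BERT text encoder + ViT image encoder,
# concatenated pooled embeddings fed to a small classification head.
# Checkpoint names, label count, and fusion strategy are assumptions,
# not the authors' released baseline.
import torch
import torch.nn as nn
from transformers import BertModel, ViTModel


class MultimodalEntailmentBaseline(nn.Module):
    def __init__(self, num_labels: int = 5):
        super().__init__()
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        self.image_encoder = ViTModel.from_pretrained(
            "google/vit-base-patch16-224-in21k"
        )
        hidden = (
            self.text_encoder.config.hidden_size
            + self.image_encoder.config.hidden_size
        )
        self.classifier = nn.Sequential(
            nn.Linear(hidden, 512),
            nn.ReLU(),
            nn.Linear(512, num_labels),
        )

    def forward(self, input_ids, attention_mask, pixel_values):
        # Pooled sentence embedding from BERT and pooled image embedding from ViT
        text_emb = self.text_encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).pooler_output
        image_emb = self.image_encoder(pixel_values=pixel_values).pooler_output
        # Late fusion by concatenation, then classify into entailment categories
        return self.classifier(torch.cat([text_emb, image_emb], dim=-1))
```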