Findings of Factify 2: Multimodal Fake News Detection
With social media usage growing exponentially in the past few years, fake
news has also become extremely prevalent. The detrimental impact of fake news
emphasizes the need for research focused on automating the detection of false
information and verifying its accuracy. In this work, we present the outcome of
the Factify 2 shared task, which provides a multi-modal fact verification and
satire news dataset, as part of the DeFactify 2 workshop at AAAI'23. The dataset
calls for a comparison-based approach to the task, pairing social media claims
with supporting documents, each with both text and an image, divided into five
classes based on multi-modal relations. In the second iteration of this task we
had over 60 participants and 9 final test-set submissions. The best
performances came from the use of DeBERTa for text and Swinv2 and CLIP for
images. The highest F1 score, averaged over all five classes, was 81.82%.
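To make that recipe concrete, the following is a minimal, illustrative sketch
(not any team's actual submission) of a claim-document verifier that encodes
text with DeBERTa and images with CLIP, then classifies the fused features into
the five relation classes. The checkpoints, fusion head, and dimensions are
assumptions for illustration; a Swinv2 image encoder could be swapped in the
same way.

import torch
import torch.nn as nn
from transformers import AutoModel, CLIPModel

class MultimodalFactVerifier(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        # Text encoder for claim and document text (illustrative checkpoint).
        self.text_encoder = AutoModel.from_pretrained("microsoft/deberta-v3-base")
        # Image encoder for claim and document images (CLIP vision tower).
        self.image_encoder = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
        text_dim = self.text_encoder.config.hidden_size        # 768
        image_dim = self.image_encoder.config.projection_dim   # 512
        # Claim and document each contribute one text and one image embedding.
        self.classifier = nn.Sequential(
            nn.Linear(2 * (text_dim + image_dim), 512),
            nn.ReLU(),
            nn.Linear(512, num_classes),
        )

    def encode(self, text_inputs: dict, pixel_values: torch.Tensor) -> torch.Tensor:
        # First-token hidden state as the pooled text representation.
        text_emb = self.text_encoder(**text_inputs).last_hidden_state[:, 0]
        image_emb = self.image_encoder.get_image_features(pixel_values=pixel_values)
        return torch.cat([text_emb, image_emb], dim=-1)

    def forward(self, claim_text, claim_pixels, doc_text, doc_pixels):
        claim_feat = self.encode(claim_text, claim_pixels)
        doc_feat = self.encode(doc_text, doc_pixels)
        return self.classifier(torch.cat([claim_feat, doc_feat], dim=-1))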
Factify 2: A Multimodal Fake News and Satire News Dataset
The internet gives people around the world an open platform to express their
views and share their stories. While this is very valuable, it also makes fake
news one of our society's most pressing problems. The manual fact-checking
process is time-consuming, which makes it challenging to disprove misleading
assertions before they cause significant harm. This has driven interest in
automatic fact or claim verification. Some existing datasets aim to support the
development of automated fact-checking techniques; however, most of them are
text-based. 
Multi-modal fact verification has received relatively scant attention. In this
paper, we provide a multi-modal fact-checking dataset called FACTIFY 2,
improving Factify 1 by using new data sources and adding satire articles.
Factify 2 has 50,000 new data instances. Similar to FACTIFY 1.0, we have three
broad categories - support, no-evidence, and refute, with sub-categories based
on the entailment of visual and textual data. We also provide a BERT and Vision
Transformer based baseline, which achieves a 65% F1 score on the test set. The
baseline codes and the dataset will be made available at
https://github.com/surya1701/Factify-2.0.
Overview of Memotion 3: Sentiment and Emotion Analysis of Codemixed Hinglish Memes
Analyzing memes on the internet has emerged as a crucial endeavor due to the
impact this multi-modal form of content wields in shaping online discourse.
Memes have become a powerful tool for expressing emotions and sentiments,
possibly even spreading hate and misinformation, through humor and sarcasm. In
this paper, we present an overview of the Memotion 3 shared task, held as part
of the DeFactify 2 workshop at AAAI-23. The task released a dataset of
Hindi-English code-mixed memes annotated for Sentiment (Task A), Emotion (Task
B), and Emotion intensity (Task C). Each of these is defined as an individual
task and the participants are ranked separately for each task. Over 50 teams
registered for the shared task and 5 made final submissions to the test set of
the Memotion 3 dataset. CLIP, BERT modifications, and ViT were the most popular
models among the participants, along with approaches such as student-teacher
models, fusion, and ensembling. The best final F1 scores were 34.41 for Task A,
79.77 for Task B, and 59.82 for Task C.
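As a rough illustration of that popular recipe (not any participant's actual
system), the sketch below encodes the meme image and its code-mixed caption
with CLIP and attaches separate heads for sentiment (Task A), emotion (Task B),
and emotion intensity (Task C). The checkpoint name, simple concatenation
fusion, and label counts are assumptions for illustration.

import torch
import torch.nn as nn
from transformers import CLIPModel

class MemeMultiTaskClassifier(nn.Module):
    def __init__(self, num_sentiments: int = 3, num_emotions: int = 4,
                 num_intensities: int = 4):
        super().__init__()
        self.clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
        dim = 2 * self.clip.config.projection_dim  # caption + image embeddings
        self.sentiment_head = nn.Linear(dim, num_sentiments)   # Task A
        self.emotion_head = nn.Linear(dim, num_emotions)       # Task B
        self.intensity_head = nn.Linear(dim, num_intensities)  # Task C

    def forward(self, input_ids, attention_mask, pixel_values):
        # Encode the caption text and meme image with CLIP, then concatenate.
        text_emb = self.clip.get_text_features(
            input_ids=input_ids, attention_mask=attention_mask)
        image_emb = self.clip.get_image_features(pixel_values=pixel_values)
        fused = torch.cat([text_emb, image_emb], dim=-1)
        return {
            "sentiment": self.sentiment_head(fused),
            "emotion": self.emotion_head(fused),
            "intensity": self.intensity_head(fused),
        }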
The Unusual Suspects: Deep Learning Based Mining of Interesting Entity Trivia from Knowledge Graphs
Trivia is any fact about an entity that is interesting due to its unusualness, uniqueness, or unexpectedness. Trivia can be successfully employed to promote user engagement in various product experiences featuring the given entity. A Knowledge Graph (KG) is a semantic network which encodes various facts about entities and their relationships. In this paper, we propose a novel approach called DBpedia Trivia Miner (DTM) to automatically mine trivia for entities of a given domain in KGs. The essence of DTM lies in learning an Interestingness Model (IM) for a given domain from human-annotated training data provided in the form of interesting facts from the KG. The IM thus learnt is applied to extract trivia for other entities of the same domain in the KG. We propose two different approaches for learning the IM: (a) a Convolutional Neural Network (CNN) based approach and (b) a Fusion-Based CNN (F-CNN) approach, which combines both hand-crafted and CNN features. Experiments across two different domains, Bollywood Actors and Music Artists, reveal that the CNN automatically learns features relevant to the task and shows competitive performance relative to hand-crafted feature based baselines, whereas the F-CNN significantly improves performance over the baseline approaches that use hand-crafted features alone. Overall, DTM achieves F1 scores of 0.81 and 0.65 in the Bollywood Actors and Music Artists domains, respectively.
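For readers who want the fusion idea in code, here is a minimal sketch, under
assumed dimensions, of an F-CNN-style interestingness classifier: a 1-D CNN
over word embeddings of a candidate fact, concatenated with a vector of
hand-crafted features before the final interesting/not-interesting decision.
It illustrates the general technique only, not the paper's exact architecture
or feature set.

import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 100,
                 num_filters: int = 128, handcrafted_dim: int = 20):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Convolutions over 2-, 3-, and 4-grams of the fact's textual rendering.
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, kernel_size=k) for k in (2, 3, 4)]
        )
        self.classifier = nn.Sequential(
            nn.Linear(3 * num_filters + handcrafted_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 2),  # interesting vs. not interesting
        )

    def forward(self, token_ids: torch.Tensor, handcrafted: torch.Tensor):
        # (batch, seq, embed) -> (batch, embed, seq) for Conv1d.
        x = self.embedding(token_ids).transpose(1, 2)
        # Max-pool each n-gram feature map over the sequence dimension.
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        # Fuse learned CNN features with the hand-crafted feature vector.
        fused = torch.cat(pooled + [handcrafted], dim=1)
        return self.classifier(fused)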