
    An Exploratory Study of COVID-19 Misinformation on Twitter

    During the COVID-19 pandemic, social media has become a home ground for misinformation. To tackle this infodemic, scientific oversight, as well as a better understanding by practitioners in crisis management, is needed. We have conducted an exploratory study into the propagation, authors and content of misinformation on Twitter around the topic of COVID-19 in order to gain early insights. We have collected all tweets mentioned in the verdicts of fact-checked claims related to COVID-19 by over 92 professional fact-checking organisations between January and mid-July 2020 and share this corpus with the community. This resulted in 1,500 tweets relating to 1,274 false and 226 partially false claims. Exploratory analysis of author accounts revealed that verified Twitter handles (including organisations and celebrities) are also involved in either creating (new tweets) or spreading (retweets) the misinformation. Additionally, we found that false claims propagate faster than partially false claims. Compared to a background corpus of COVID-19 tweets, tweets with misinformation are more often concerned with discrediting other information on social media. Their authors use less tentative language and appear to be more driven by concerns of potential harm to others. Our results enable us to suggest gaps in the current scientific coverage of the topic as well as propose actions for authorities and social media users to counter misinformation.
    Comment: 20 pages, nine figures, four tables. Submitted for peer review.
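
    As a rough illustration of the propagation finding above (false claims spreading faster than partially false ones), a sketch along these lines could compare retweet lags per verdict class. All file and column names here are hypothetical; the released corpus defines its own schema.

```python
# Minimal sketch of the propagation comparison, assuming a hypothetical CSV
# with one row per retweet and columns: 'claim_id', 'verdict' ('false' or
# 'partially false'), 'created_at' (original tweet time) and 'retweeted_at'.
# None of these names come from the paper itself.
import pandas as pd

tweets = pd.read_csv("covid_misinfo_tweets.csv",
                     parse_dates=["created_at", "retweeted_at"])

# Time from the original tweet to each retweet, in hours.
tweets["lag_h"] = (tweets["retweeted_at"] - tweets["created_at"]).dt.total_seconds() / 3600

# Median retweet lag per verdict class: a smaller lag means faster spread.
print(tweets.groupby("verdict")["lag_h"].median())
```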

    Factify 2: A Multimodal Fake News and Satire News Dataset

    The internet gives everyone an open platform to express their views and share their stories. While this is very valuable, it makes fake news one of our society's most pressing problems. The manual fact-checking process is time-consuming, which makes it challenging to disprove misleading assertions before they cause significant harm. This is the driving interest in automatic fact or claim verification. Some existing datasets aim to support the development of automated fact-checking techniques; however, most of them are text-based. Multi-modal fact verification has received relatively scant attention. In this paper, we provide a multi-modal fact-checking dataset called FACTIFY 2, improving on Factify 1 by using new data sources and adding satire articles. Factify 2 has 50,000 new data instances. Similar to FACTIFY 1.0, we have three broad categories - support, no-evidence, and refute - with sub-categories based on the entailment of visual and textual data. We also provide a BERT and Vision Transformer based baseline, which achieves a 65% F1 score on the test set. The baseline code and the dataset will be made available at https://github.com/surya1701/Factify-2.0.
    Comment: Defactify@AAAI 2023
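
    As an illustration of what such a baseline can look like, here is a minimal sketch of a BERT plus Vision Transformer classifier with late fusion over five entailment classes. The checkpoints, [CLS] pooling and concatenation head are assumptions rather than the authors' exact design, and the sketch encodes a single text and image instead of the claim-document pairs used in the full task.

```python
# A minimal sketch of a BERT + Vision Transformer entailment baseline.
# Checkpoint names, pooling and fusion are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import BertModel, ViTModel

class MultimodalBaseline(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        self.image_encoder = ViTModel.from_pretrained("google/vit-base-patch16-224")
        hidden = (self.text_encoder.config.hidden_size
                  + self.image_encoder.config.hidden_size)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, input_ids, attention_mask, pixel_values):
        # [CLS] representations of the text and the image.
        text = self.text_encoder(input_ids=input_ids,
                                 attention_mask=attention_mask).last_hidden_state[:, 0]
        image = self.image_encoder(pixel_values=pixel_values).last_hidden_state[:, 0]
        # Late fusion by concatenation, then a linear entailment classifier.
        return self.classifier(torch.cat([text, image], dim=-1))
```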

    Findings of Factify 2: Multimodal Fake News Detection

    With social media usage growing exponentially in the past few years, fake news has also become extremely prevalent. The detrimental impact of fake news emphasizes the need for research focused on automating the detection of false information and verifying its accuracy. In this work, we present the outcome of the Factify 2 shared task, which provides a multi-modal fact verification and satire news dataset, as part of the DeFactify 2 workshop at AAAI'23. The data calls for a comparison-based approach to the task by pairing social media claims with supporting documents, with both text and image, divided into five classes based on multi-modal relations. In this second iteration of the task we had over 60 participants and 9 final test-set submissions. The best performances came from the use of DeBERTa for text and Swinv2 and CLIP for images. The highest F1 score averaged over all five classes was 81.82%.
    Comment: Defactify 2 @ AAAI 2023
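
    For reference, an F1 score "averaged over all five classes" is conventionally the macro-averaged F1. A minimal sketch with scikit-learn, using illustrative stand-in labels rather than the task's official class names:

```python
# Macro-averaged F1 treats all five classes equally, regardless of class size.
from sklearn.metrics import f1_score

labels = ["support_text", "support_multimodal", "insufficient_text",
          "insufficient_multimodal", "refute"]
y_true = ["refute", "support_text", "insufficient_text",
          "refute", "support_multimodal"]
y_pred = ["refute", "support_text", "insufficient_multimodal",
          "refute", "support_multimodal"]

# One F1 per class, then an unweighted mean over the five classes.
print(f1_score(y_true, y_pred, labels=labels, average="macro"))
```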

    Combating Misinformation on Social Media by Exploiting Post and User-level Information

    Misinformation on social media has a far-reaching negative impact on the public and society. Given the large number of real-time posts on social media, traditional manual methods of misinformation detection are not viable. Therefore, computational (i.e., data-driven) approaches have been proposed to combat online misinformation. Previous work on computational misinformation analysis has mainly focused on employing natural language processing (NLP) techniques to develop misinformation detection systems at the post level (e.g., using text and propagation networks). However, it is also important to exploit information at the user level in social media, as users play a significant role (e.g., posting, diffusing, refuting) in spreading misinformation. The main aims of this thesis are to: (i) develop novel methods for analysing the behaviour of users who are likely to share or refute misinformation in social media; and (ii) predict and characterise unreliable stories with high popularity in social media. To this end, we first highlight the limitations of the evaluation protocol in popular post-level rumour detection benchmarks and propose to evaluate such systems using chronological splits (i.e., considering temporal concept drift). On the user level, we introduce two novel tasks: (i) early detection of Twitter users who are likely to share misinformation before they actually do so; and (ii) identifying and characterising active citizens who refute misinformation in social media. Finally, we develop a new dataset to enable the study of predicting the future popularity (e.g., number of likes, replies, retweets) of false rumours on Weibo.
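
    The chronological-split idea is simple to express in code. Below is a minimal sketch, assuming a hypothetical posts file with a 'created_at' column; the thesis's actual benchmarks and splits are more involved.

```python
# Chronological (temporal) split, as opposed to a random split.
# File and column names are illustrative assumptions.
import pandas as pd

posts = pd.read_csv("rumour_posts.csv", parse_dates=["created_at"])
posts = posts.sort_values("created_at")

# Train on the earliest 80% of posts, test on the latest 20%, so the model
# never sees posts from the future of its training data (this is what exposes
# temporal concept drift that a random split would hide).
cutoff = int(len(posts) * 0.8)
train, test = posts.iloc[:cutoff], posts.iloc[cutoff:]
```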

    Content-based automatic fact checking

    The spread of fake news on social media has become a central concern in recent years. Notably, Hoaxy reports that fact-checking efforts generally take 10 to 20 hours to respond to a fake news item, and that there is an order of magnitude more fake news than fact-checks. Automatic fact checking could help by accelerating human work and monitoring trends in fake news. In the effort against disinformation, we summarize content-based automatic fact checking into three approaches: models with no external knowledge, models with a Knowledge Graph, and models with a Knowledge Base. In order to make automatic fact checking more accessible, we present for each approach an effective architecture designed with memory footprint in mind, and discuss how each approach can be applied to make the best use of its characteristics. We notably rely on TinyBERT, a distilled version of the BERT language model, combined with hard parameter sharing on two approaches to lower memory usage while preserving accuracy.
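
    As a rough illustration of the memory-saving combination described (a distilled encoder plus hard parameter sharing), the sketch below shares one TinyBERT backbone between two task heads, so the backbone weights are stored and trained only once. The checkpoint name and head designs are assumptions for illustration, not the thesis's exact architecture.

```python
# Hard parameter sharing: two task heads over a single shared encoder.
import torch.nn as nn
from transformers import AutoModel

class SharedFactChecker(nn.Module):
    def __init__(self):
        super().__init__()
        # Single shared encoder: the bulk of the memory footprint.
        self.encoder = AutoModel.from_pretrained(
            "huawei-noah/TinyBERT_General_4L_312D")
        dim = self.encoder.config.hidden_size
        # Per-approach heads (hypothetical), e.g. claim classification
        # versus evidence relevance scoring.
        self.claim_head = nn.Linear(dim, 2)
        self.evidence_head = nn.Linear(dim, 1)

    def forward(self, input_ids, attention_mask, task: str):
        # [CLS] representation from the shared backbone.
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state[:, 0]
        return self.claim_head(h) if task == "claim" else self.evidence_head(h)
```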