
    On the Role of Images for Analyzing Claims in Social Media

    Fake news is a severe problem in social media. In this paper, we present an empirical study on visual, textual, and multimodal models for the tasks of claim detection, claim check-worthiness detection, and conspiracy detection, all of which are related to fake news detection. Recent work suggests that images are more influential than text and often appear alongside fake text. To this end, several multimodal models have been proposed in recent years that use images along with text to detect fake news on social media sites such as Twitter. However, the role of images is not well understood for claim detection, specifically with transformer-based textual and multimodal models. We investigate state-of-the-art models for images, text (Transformer-based), and multimodal information on four different datasets across two languages to understand the role of images in the tasks of claim and conspiracy detection.
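
    As a rough illustration of the kind of multimodal setup such studies compare, the sketch below fuses a text embedding with an image embedding by concatenation before a classification head. This is a minimal late-fusion example, not the models from the paper; the feature dimensions and the `LateFusionClassifier` name are assumptions for illustration.

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Minimal late-fusion sketch: concatenate text and image features,
    then classify. Dimensions are hypothetical; a real system would take
    the text vector from a Transformer (e.g. BERT's [CLS] embedding) and
    the image vector from a CNN/ViT backbone."""

    def __init__(self, text_dim=768, image_dim=2048, num_classes=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(text_dim + image_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, text_feat, image_feat):
        fused = torch.cat([text_feat, image_feat], dim=-1)  # late fusion
        return self.head(fused)

# Smoke test with random features standing in for encoder outputs.
model = LateFusionClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 2048))
print(logits.shape)  # torch.Size([4, 2])
```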

    eGR-518 A Multi-Model Approach for Detecting and Combating Fake News

    The Internet plays a vital role in our daily lives; we use it for various purposes and benefit from advancements in technology and social media. However, the same platforms that enable global information exchange also promote the spread of fake news, posing a significant threat. To counter this issue, fact-checking has become important, leading to extensive research on identifying fake news and dealing with the problems it causes. Our project's mission is to find the most effective model for fake news detection. We explore different approaches and models, such as BERT, Decision Trees, Logistic Regression, and AdaBoost classification, and evaluate their performance by calculating accuracy, precision, recall, and other metrics. We aim to provide valuable insights into this critical fake news issue and identify the best-performing model among the pool of models.
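
    A comparison like the one described can be set up with scikit-learn; the sketch below trains Decision Tree, Logistic Regression, and AdaBoost classifiers on TF-IDF features and reports accuracy, precision, and recall. The tiny inline dataset is fabricated purely for illustration, and a BERT baseline (omitted here) would be fine-tuned separately.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Hypothetical toy data: 1 = fake, 0 = real. Replace with a real corpus.
texts = ["shocking cure doctors hide", "senate passes budget bill",
         "aliens built the pyramids", "local team wins championship"] * 10
labels = [1, 0, 1, 0] * 10

X = TfidfVectorizer().fit_transform(texts)
X_train, y_train = X[:30], labels[:30]
X_test, y_test = X[30:], labels[30:]

models = {
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}
for name, clf in models.items():
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    print(f"{name}: acc={accuracy_score(y_test, pred):.2f} "
          f"prec={precision_score(y_test, pred):.2f} "
          f"rec={recall_score(y_test, pred):.2f}")
```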

    Machine Learning Explanations to Prevent Overtrust in Fake News Detection

    Full text link
    Combating fake news and misinformation propagation is a challenging task in the post-truth era. News feed and search algorithms could potentially lead to unintentional large-scale propagation of false and fabricated information, with users being exposed to algorithmically selected false content. Our research investigates the effects of an Explainable AI assistant, embedded in news review platforms, on combating the propagation of fake news. We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms to study the effects of algorithmic transparency on end users. We present evaluation results and analysis from multiple controlled crowdsourced studies. For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental models, trust, and performance measures in the process of explaining. The study results indicate that explanations helped participants build appropriate mental models of the intelligent assistants in different conditions and adjust their trust in light of model limitations.
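
    One simple way to make a fake news classifier's decision inspectable, in the spirit of the interpretable detectors described above, is to surface the terms that push a linear model's prediction the hardest. The sketch below is a generic illustration, not the paper's four algorithms; the toy corpus and the `explain` helper are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical toy corpus: 1 = fake, 0 = real.
texts = ["miracle pill melts fat overnight", "city council approves new park",
         "secret memo proves moon hoax", "report details quarterly earnings"]
labels = [1, 0, 1, 0]

vec = TfidfVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

def explain(text, top_k=3):
    """Return the terms contributing most toward the 'fake' class."""
    row = vec.transform([text]).toarray()[0]
    contrib = row * clf.coef_[0]           # per-term contribution to the logit
    top = np.argsort(contrib)[::-1][:top_k]
    terms = vec.get_feature_names_out()
    return [(terms[i], round(float(contrib[i]), 3)) for i in top if row[i] > 0]

print(explain("secret miracle pill memo"))
```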

    Detection and Identification of Fake News: Binary Content Classification with Pre-trained Language Models

    Fake news has emerged as a critical problem for society and professional journalism. Many individuals consume their news via online media, such as social networks and news websites. Therefore, the demand for automatic fake news detection is increasing. There is still no agreed-upon definition of fake news, since it can include various concepts, such as clickbait, propaganda, satire, hoaxes, and rumors. This results in a broad landscape of machine learning approaches with varying accuracy in detecting fake news. This master's thesis focuses on a binary content-based classification approach with a bidirectional Transformer (BERT) to detect fake news in online articles. BERT is pretrained as a language model and is then fine-tuned on a labeled dataset. The FakeNewsNet dataset is used to test two variants of the model (cased/uncased) on articles, using only the body text, only the title, and a concatenation of both. Additionally, both models were tested with different preprocessing steps. The models achieve high accuracy in all 29 experiments, without overfitting. Using the body text and the concatenation resulted in five models with an accuracy of 87% after testing, whereas using only titles resulted in 84%. This shows that short statements can already be enough for fake news detection with language models. The preprocessing steps also seem to have no major impact on the predictions. It is concluded that transformer models such as BERT are a promising approach to detecting fake news, since they achieve notable results even without a large dataset.
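
    The title-plus-body setup described here maps naturally onto the standard Hugging Face sequence-pair encoding. The sketch below shows one plausible way to set it up; the checkpoint name, toy inputs, and label convention are illustrative, and the actual thesis pipeline may differ.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Uncased BERT variant; a cased run would use "bert-base-cased".
name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

# Hypothetical example: encode title and body as a sentence pair, so BERT
# sees "[CLS] title [SEP] body [SEP]".
title = "Celebrity endorses miracle diet"
body = "According to an unnamed source, the product cures everything."
inputs = tokenizer(title, body, truncation=True, max_length=512,
                   return_tensors="pt")

# One training-style forward pass; label 1 = fake (toy convention).
outputs = model(**inputs, labels=torch.tensor([1]))
print(outputs.loss.item(), outputs.logits.softmax(-1))
```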

    fIlfA: Computational Modeling of Disinformation

    The spread of fake news is a growing problem in today's society. The average reader has less and less time to verify with certainty the veracity of a news item, which makes it necessary to build a system for fake news detection. This work therefore presents a study on fake news detection together with two detection systems, one for Spanish and one for English. Natural Language Processing techniques are used; more specifically, fine-tuning is applied to different BERT and RoBERTa models, since transformer-based language models are the deep learning models that represent the state of the art for text classification tasks.
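
    Supporting both languages mostly comes down to choosing a pretrained checkpoint per language before fine-tuning; the sketch below shows one way this could look. The specific checkpoint names are common public models on the Hugging Face Hub, offered here as plausible choices rather than the ones this work actually used.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Plausible per-language checkpoints (assumptions, not the paper's picks):
# Spanish BERT (BETO) and English RoBERTa.
CHECKPOINTS = {
    "es": "dccuchile/bert-base-spanish-wwm-uncased",
    "en": "roberta-base",
}

def load_detector(lang):
    """Load a tokenizer/model pair for the given language, ready to be
    fine-tuned as a binary fake news classifier."""
    name = CHECKPOINTS[lang]
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(
        name, num_labels=2)
    return tokenizer, model

tok_es, model_es = load_detector("es")
```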