3,674 research outputs found

    Exploring Roles of Emotion in Fake News Detection

    Detecting fake news is widely acknowledged as a critical task with significant social impact. Although fake news tends to evoke high-activating emotions in its audience, the role of emotion in identifying fake news is still under-explored. Existing research has examined effective representations of the emotions conveyed in news content to help discern the veracity of the news; however, the emotions aroused in the audience are usually ignored. This paper first demonstrates effective representations of emotions within both news content and users' comments. Furthermore, we propose an emotion-aware fake news detection framework that seamlessly incorporates these emotion features to improve the accuracy of identifying fake news. Future work will include thorough experiments to show that the proposed framework, with the emotions expressed in news and users' comments, improves fake news detection performance.
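    The abstract does not spell out an architecture; as a rough illustration of the fusion idea only, the sketch below (Python/PyTorch, with assumed feature dimensions and layer sizes) concatenates an emotion vector for the news content and one for the aggregated user comments with a text embedding before classification.

```python
# Minimal sketch of an emotion-aware fake news classifier (assumed architecture;
# the abstract does not specify one). Emotion vectors for the news content and for
# the aggregated user comments are fused with a text embedding before classification.
import torch
import torch.nn as nn

class EmotionAwareDetector(nn.Module):
    def __init__(self, text_dim=768, emo_dim=8, hidden=128, n_classes=2):
        super().__init__()
        # text_dim: dimensionality of a pre-computed news-content embedding (assumed)
        # emo_dim: size of an emotion feature vector, e.g. one score per basic emotion (assumed)
        self.mlp = nn.Sequential(
            nn.Linear(text_dim + 2 * emo_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, text_emb, content_emotion, comment_emotion):
        # Fuse content emotions and audience (comment) emotions with the text representation.
        x = torch.cat([text_emb, content_emotion, comment_emotion], dim=-1)
        return self.mlp(x)

# Example forward pass with random placeholder features.
model = EmotionAwareDetector()
logits = model(torch.randn(4, 768), torch.randn(4, 8), torch.randn(4, 8))
```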

    Detecting Misinformation with LLM-Predicted Credibility Signals and Weak Supervision

    Credibility signals represent a wide range of heuristics that are typically used by journalists and fact-checkers to assess the veracity of online content. Automating the task of credibility signal extraction, however, is very challenging because it requires training high-accuracy, signal-specific extractors, and no sufficiently large datasets annotated with all credibility signals currently exist. This paper investigates whether large language models (LLMs) can be prompted effectively with a set of 18 credibility signals to produce weak labels for each signal. We then aggregate these potentially noisy labels using weak supervision in order to predict content veracity. We demonstrate that our approach, which combines zero-shot LLM credibility signal labeling and weak supervision, outperforms state-of-the-art classifiers on two misinformation datasets without using any ground-truth labels for training. We also analyse the contribution of the individual credibility signals towards predicting content veracity, which provides valuable new insights into their role in misinformation detection.
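    A minimal sketch of the two-stage idea follows. The query_llm helper and the signal names are illustrative placeholders rather than the paper's actual prompts or signal set, and the simple vote at the end stands in for the weak-supervision label model, which the abstract does not specify.

```python
# Rough sketch: (1) prompt an LLM for a yes/no weak label per credibility signal,
# (2) aggregate the noisy labels into a veracity prediction.
from collections import Counter

# Illustrative placeholder names; the paper uses a set of 18 credibility signals.
SIGNALS = ["evidence_cited", "clickbait_title", "emotional_language"]

def label_signals(text, query_llm):
    """Obtain one weak label per signal; query_llm is any text-in/text-out LLM client."""
    labels = {}
    for signal in SIGNALS:
        prompt = (f"Does the following article exhibit the credibility signal "
                  f"'{signal}'? Answer yes or no.\n\n{text}")
        answer = query_llm(prompt).strip().lower()
        labels[signal] = 1 if answer.startswith("yes") else 0
    return labels

def predict_veracity(signal_labels):
    """Simplified stand-in for the weak-supervision label model: a majority vote."""
    negative = {"clickbait_title", "emotional_language"}  # presence suggests low credibility
    votes = []
    for name, value in signal_labels.items():
        if name in negative:
            votes.append("fake" if value else "real")
        else:
            votes.append("real" if value else "fake")
    return Counter(votes).most_common(1)[0][0]
```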

    The Impact of Emotional Signals on Credibility Assessment

    Fake news is considered one of the main threats to our society. The aim of fake news is usually to confuse readers and trigger intense emotions in them in an attempt to spread through social networks. Even though recent studies have explored the effectiveness of different linguistic patterns for fake news detection, the role of emotional signals has not yet been explored. In this paper, we focus on extracting emotional signals from claims and evaluating their effectiveness for credibility assessment. First, we explore different methodologies for extracting the emotional signals that can be triggered in users when they read a claim. Then, we present emoCred, a model based on a long short-term memory (LSTM) network that incorporates emotional signals extracted from the text of the claims to differentiate between credible and non-credible ones. In addition, we perform an analysis to understand which emotional signals and which terms are the most useful for the different credibility classes. We conduct extensive experiments and a thorough analysis on real-world datasets. Our results indicate the importance of incorporating emotional signals in the credibility assessment problem.
    Funding: Generalitat Valenciana, Grant/Award Number DeepPattern (PROMETEO/2019/121); Ministerio de Ciencia e Innovación, Grant/Award Number PGC2018-096212-B-C31; Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung, Grant/Award Number P2TIP2_181441.
    Giachanou, A.; Rosso, P.; Crestani, F. (2021). The Impact of Emotional Signals on Credibility Assessment. Journal of the Association for Information Science and Technology, 72(9), 1117-1132. https://doi.org/10.1002/asi.24480
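    A minimal sketch of an emoCred-style model is shown below (Python/PyTorch). The layer sizes, the fusion strategy and the dimensionality of the emotion-signal vector are assumptions for illustration, not details taken from the paper.

```python
# Sketch: an LSTM encodes the claim text, and a vector of lexicon-based emotional
# signals is concatenated with the final hidden state before the credibility classifier.
import torch
import torch.nn as nn

class EmoCredSketch(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=64, emo_dim=8, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden + emo_dim, n_classes)

    def forward(self, token_ids, emotion_signals):
        # token_ids: (batch, seq_len); emotion_signals: (batch, emo_dim)
        _, (h_n, _) = self.lstm(self.embed(token_ids))
        fused = torch.cat([h_n[-1], emotion_signals], dim=-1)
        return self.classifier(fused)

# Example forward pass with random placeholder inputs.
model = EmoCredSketch(vocab_size=10000)
logits = model(torch.randint(0, 10000, (4, 20)), torch.rand(4, 8))
```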

    No Place to Hide: Dual Deep Interaction Channel Network for Fake News Detection based on Data Augmentation

    Online social networks (OSNs) have become a hotbed of fake news due to the low cost of information dissemination. Although existing methods have made many attempts to exploit news content and propagation structure, fake news detection still faces two challenges: how to mine the unique key features and evolution patterns, and how to tackle the small-sample problem when building a high-performance model. Unlike popular methods that rely heavily on the propagation topology, we propose a novel framework for fake news detection from the perspectives of semantics, emotion and data enhancement. The framework excavates the emotional evolution patterns of news participants during the propagation process, and a dual deep interaction channel network of semantics and emotion is designed to obtain a more comprehensive and fine-grained news representation that takes comments into account. Meanwhile, the framework introduces a data enhancement module that obtains more high-quality labeled data based on confidence, which further improves the performance of the classification model. Experiments show that the proposed approach outperforms state-of-the-art methods.
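    The confidence-based data enhancement step can be pictured as simple pseudo-labelling. The threshold and the model interface (predict_proba) in the sketch below are assumptions, not details from the paper.

```python
# Sketch of confidence-based data enhancement: unlabeled posts whose predicted class
# probability exceeds a threshold are added to the training set with their pseudo label.
def augment_with_pseudo_labels(model, unlabeled_texts, threshold=0.9):
    augmented = []
    for text in unlabeled_texts:
        probs = model.predict_proba(text)           # assumed interface, e.g. {"real": 0.05, "fake": 0.95}
        label, confidence = max(probs.items(), key=lambda kv: kv[1])
        if confidence >= threshold:
            augmented.append((text, label))         # keep only high-confidence pseudo labels
    return augmented
```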

    MISMIS: Misinformation and Miscommunication in social media: aggregating information and analysing language

    The general objectives of the project are to address and monitor misinformation (biased and fake news) and miscommunication (aggressive language and hate speech) in social media, as well as to establish a high-quality methodological standard for the whole research community (i) by developing rich annotated datasets, a data repository and online evaluation services; (ii) by proposing suitable evaluation metrics; and (iii) by organizing evaluation campaigns to foster research on the above issues.
    The MISMIS project (PGC2018-096212-B) is funded by the Spanish Ministry of Science, Innovation and Universities.
    Rosso, P.; Casacuberta Nolla, F.; Gonzalo, J.; Plaza, L.; Carrillo, J.; Amigó, E.; Verdejo, M.F. ... (2020). MISMIS: Misinformation and Miscommunication in social media: aggregating information and analysing language. Procesamiento del Lenguaje Natural, (65), 101-104. https://doi.org/10.26342/2020-65-13