97 research outputs found

    Sentiment Analysis for Fake News Detection

    [Abstract] In recent years, we have witnessed a rise in fake news, i.e., provably false pieces of information created with the intention to deceive. The dissemination of this type of news poses a serious threat to social cohesion and well-being, since it fosters political polarization and people's distrust of their leaders. The huge amount of news disseminated through social media makes manual verification unfeasible, which has spurred the design and implementation of automatic systems for fake news detection. Creators of fake news use various stylistic tricks to promote the success of their creations, one of them being to excite the sentiments of the recipients. This has led sentiment analysis, the part of text analytics in charge of determining the polarity and strength of the sentiments expressed in a text, to be used in fake news detection approaches, either as the basis of the system or as a complementary element. In this article, we study the different uses of sentiment analysis in the detection of fake news, with a discussion of the most relevant elements and shortcomings, and the requirements that should be met in the near future, such as multilingualism, explainability, mitigation of biases, and treatment of multimedia elements.
    Funding: Xunta de Galicia (ED431G 2019/01; ED431C 2020/11). This work has been funded by FEDER/Ministerio de Ciencia, Innovación y Universidades - Agencia Estatal de Investigación through the ANSWERASAP project (TIN2017-85160-C2-1-R), and by Xunta de Galicia through a Competitive Reference Group grant (ED431C 2020/11). CITIC, as a Research Center of the Galician University System, is funded by the Consellería de Educación, Universidade e Formación Profesional of the Xunta de Galicia through the European Regional Development Fund (ERDF/FEDER) at 80%, via the Galicia ERDF 2014-20 Operational Programme, and the remaining 20% by the Secretaría Xeral de Universidades (ref. ED431G 2019/01). David Vilares is also supported by a 2020 Leonardo Grant for Researchers and Cultural Creators from the BBVA Foundation. Carlos Gómez-Rodríguez has also received funding from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant No. 714150).
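    The use of sentiment polarity and strength as features, as described in the abstract, can be illustrated with a minimal sketch. The lexicon, word weights, and function names below are illustrative placeholders, not the approach of any specific system surveyed:

    ```python
    # Sketch of lexicon-based sentiment scoring used as a feature for fake
    # news detection. The toy lexicon below is a hypothetical example.
    POLARITY = {  # word -> polarity in [-1, 1]
        "shocking": -0.8, "outrage": -0.9, "disaster": -0.7,
        "great": 0.6, "calm": 0.4, "report": 0.0,
    }

    def sentiment_features(text):
        """Return (mean polarity, sentiment strength) for a text."""
        words = text.lower().split()
        scores = [POLARITY[w] for w in words if w in POLARITY]
        if not scores:
            return (0.0, 0.0)
        mean = sum(scores) / len(scores)           # overall polarity
        strength = sum(abs(s) for s in scores) / len(scores)  # intensity
        return (mean, strength)

    polarity, strength = sentiment_features("shocking disaster sparks outrage")
    ```

    A downstream detector could then combine such features with textual ones, since fake news often exhibits unusually strong negative polarity.
    
    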

    Mapping (Dis-)Information Flow about the MH17 Plane Crash

    Digital media enables not only fast sharing of information, but also of disinformation. One prominent case of an event leading to the circulation of disinformation on social media is the MH17 plane crash. Studies analysing the spread of information about this event on Twitter have focused on small, manually annotated datasets, or have used proxies for data annotation. In this work, we examine to what extent text classifiers can be used to label data for subsequent content analysis; in particular, we focus on predicting pro-Russian and pro-Ukrainian Twitter content related to the MH17 plane crash. Even though we find that a neural classifier improves over a hashtag-based baseline, labeling pro-Russian and pro-Ukrainian content with high precision remains a challenging problem. We provide an error analysis underlining the difficulty of the task and identify factors that might help improve classification in future work. Finally, we show how the classifier can facilitate the annotation task for human annotators.
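    A hashtag-based baseline of the kind the paper compares against can be sketched as follows. The hashtag lists are hypothetical examples, not the ones used in the study:

    ```python
    # Sketch of a hashtag-based labeling baseline: a tweet is labeled by the
    # partisan hashtags it contains. Tag sets here are illustrative only.
    PRO_RUSSIAN_TAGS = {"#mh17truth"}
    PRO_UKRAINIAN_TAGS = {"#russiaguilty"}

    def label_tweet(text):
        """Label a tweet by its partisan hashtags; None if absent or conflicting."""
        tags = {tok.lower() for tok in text.split() if tok.startswith("#")}
        pro_ru = bool(tags & PRO_RUSSIAN_TAGS)
        pro_ua = bool(tags & PRO_UKRAINIAN_TAGS)
        if pro_ru and not pro_ua:
            return "pro-Russian"
        if pro_ua and not pro_ru:
            return "pro-Ukrainian"
        return None  # no partisan hashtag, or both sides present
    ```

    Such a baseline is high-precision only for tweets that carry an explicitly partisan hashtag, which is why a trained classifier can improve coverage.
    
    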

    Towards an Analysis of Rumours in Social Networks Based on Image Veracity: State of the Art

    The rapid growth of social networks has facilitated the exchange of large amounts of data, but also the spread of false information. Many studies have addressed rumour detection, based mainly on analysing the textual content of messages. However, visual content, particularly images, remains ignored or little exploited, even though visual data are widespread on social media and their exploitation proves important for analysing rumours. In this article, we present a synthesis of the state of the art on rumour classification and summarise the main tasks of this process, as well as the approaches followed to analyse this phenomenon. We focus in particular on the techniques adopted to verify the veracity of images. We also discuss the datasets used for rumour analysis and present the research directions we intend to explore.

    Detecting and Grounding Multi-Modal Media Manipulation and Beyond

    Misinformation has become a pressing issue. Fake media, in both visual and textual forms, is widespread on the web. While various deepfake detection and textual fake news detection methods have been proposed, they are only designed for single-modality forgery based on binary classification, and cannot analyze or reason about subtle forgery traces across different modalities. In this paper, we highlight a new research problem for multi-modal fake media, namely Detecting and Grounding Multi-Modal Media Manipulation (DGM^4). DGM^4 aims not only to detect the authenticity of multi-modal media, but also to ground the manipulated content, which requires deeper reasoning about multi-modal media manipulation. To support a large-scale investigation, we construct the first DGM^4 dataset, where image-text pairs are manipulated by various approaches, with rich annotation of diverse manipulations. Moreover, we propose a novel HierArchical Multi-modal Manipulation rEasoning tRansformer (HAMMER) to fully capture the fine-grained interaction between different modalities. HAMMER performs 1) manipulation-aware contrastive learning between two uni-modal encoders as shallow manipulation reasoning, and 2) modality-aware cross-attention by a multi-modal aggregator as deep manipulation reasoning. Dedicated manipulation detection and grounding heads are integrated from shallow to deep levels based on the interacted multi-modal information. To exploit more fine-grained contrastive learning for cross-modal semantic alignment, we further integrate a Manipulation-Aware Contrastive Loss with Local View and construct a more advanced model, HAMMER++. Finally, we build an extensive benchmark and set up rigorous evaluation metrics for this new research problem. Comprehensive experiments demonstrate the superiority of HAMMER and HAMMER++.
    Comment: Extension of our CVPR 2023 paper: arXiv:2304.02556. Code: https://github.com/rshaojimmy/MultiModal-DeepFak
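    The intuition behind shallow manipulation reasoning, i.e. that image and text embeddings from two uni-modal encoders should agree for an authentic pair, can be sketched with a cosine-similarity check. The embeddings and threshold below are toy values; HAMMER itself learns these representations with manipulation-aware contrastive training rather than using a fixed threshold:

    ```python
    # Sketch: flag an image-text pair as suspicious when its modalities are
    # poorly aligned in embedding space. Vectors and threshold are toy values.
    import math

    def cosine(u, v):
        """Cosine similarity of two non-zero vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    def flag_manipulated(img_emb, txt_emb, threshold=0.5):
        """True when the image and text embeddings disagree."""
        return cosine(img_emb, txt_emb) < threshold

    # An aligned pair vs. a mismatched (possibly manipulated) pair:
    aligned = flag_manipulated([1.0, 0.1, 0.0], [0.9, 0.2, 0.1])    # similar
    mismatch = flag_manipulated([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])   # orthogonal
    ```

    Grounding the manipulation, i.e. localizing which image region or text token was forged, is the deeper reasoning step that this similarity check alone cannot provide.
    
    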