
    The Alt-Right and Global Information Warfare

    The Alt-Right is a neo-fascist white supremacist movement that is involved in violent extremism and shows signs of engaging in extensive disinformation campaigns. Using social media data mining, this study develops a deeper understanding of such targeted disinformation campaigns and the ways they spread. It also adds to the available literature on the endogenous and exogenous influences within the US far right, as well as the motivating factors that drive disinformation campaigns, such as geopolitical strategy. This study should be taken as a preliminary analysis indicating future methods and follow-on research that will help develop an integrated approach to understanding the strategies and associations of the modern fascist movement. Comment: Presented and published through the IEEE 2019 Big Data Conference.

    Tackling the spread of disinformation: Why a co-regulatory approach is the right way forward for the EU. Bertelsmann Stiftung Policy Paper, 12 December 2019

    In recent years, social media platforms have enabled an unprecedented increase in the spread of disinformation. Concerns about these new and dynamic ways of spreading falsehoods have brought politicians and regulators onto the stage. In this paper, Paul-Jasper Dittrich proposes a European co-regulatory approach to tackling disinformation on social media, instead of the current self-regulatory approach or direct regulation.

    What is disinformation?

    Prototypical instances of disinformation include deceptive advertising (in business and in politics), government propaganda, doctored photographs, forged documents, fake maps, internet frauds, fake websites, and manipulated Wikipedia entries. Disinformation can cause significant harm if people are misled by it. In order to address this critical threat to information quality, we first need to understand exactly what disinformation is. This paper surveys the various analyses of this concept that have been proposed by information scientists and philosophers (most notably, Luciano Floridi). It argues that these analyses are either too broad (that is, they include things that are not disinformation), too narrow (they exclude things that are disinformation), or both. Indeed, several of these analyses exclude important forms of disinformation, such as true disinformation, visual disinformation, side-effect disinformation, and adaptive disinformation. After considering the shortcomings of these analyses, the paper argues that disinformation is misleading information that has the function of misleading. Finally, in addition to responding to Floridi’s claim that such a precise analysis of disinformation is not necessary, it briefly discusses how this analysis can help us develop techniques for detecting disinformation and policies for deterring its spread.

    Conferring resistance to digital disinformation: the inoculating influence of procedural news knowledge

    Despite the pervasiveness of digital disinformation in society, little is known about the individual characteristics that make some users more susceptible to erroneous information uptake than others, effectively dividing the media audience into prone and resistant groups. This study identifies and tests procedural news knowledge as a consequential civic resource with the capacity to inoculate audiences against disinformation and close this “resistance gap.” Engaging the persuasion knowledge model, the study uses data from two national surveys to demonstrate that possessing working knowledge of how the news media operate aids in identifying, and resisting the effects of, fabricated news and native advertising. Accepted manuscript.

    Topology comparison of Twitter diffusion networks effectively reveals misleading information

    In recent years, malicious information has seen explosive growth in social media, with serious social and political backlashes. Recent important studies, featuring large-scale analyses, have produced deeper knowledge about this phenomenon, showing that misleading information spreads faster, deeper, and more broadly than factual information on social media, where echo chambers and algorithmic and human biases play an important role in diffusion networks. Following these directions, we explore the possibility of classifying news articles circulating on social media based exclusively on a topological analysis of their diffusion networks. To this aim, we collected a large dataset of diffusion networks on Twitter pertaining to news articles published by two distinct classes of sources: outlets that convey mainstream, reliable, and objective information, and those that fabricate and disseminate various kinds of misleading articles, including false news intended to harm, satire intended to make people laugh, click-bait news that may be entirely factual, and rumors that are unproven. We carried out an extensive comparison of these networks using several alignment-free approaches, including basic network properties, centrality measure distributions, and network distances. We accordingly evaluated to what extent these techniques allow us to discriminate between the networks associated with the aforementioned news domains. Our results highlight that the communities of users spreading mainstream news, compared to those sharing misleading news, tend to shape diffusion networks with subtle yet systematic differences which might be effectively employed to identify misleading and harmful information. Comment: A revised version is available in Scientific Reports.
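    As a minimal sketch of the kind of alignment-free, topology-only comparison described above (assuming networkx and scipy are available; the edge lists, the choice of in-degree centrality, and the Kolmogorov-Smirnov distance are illustrative assumptions, not the paper's actual data or feature set):

```python
# Illustrative sketch: compare two Twitter-style diffusion (retweet cascade)
# networks using alignment-free measures -- basic properties, a centrality
# distribution, and a distributional distance. Edge lists are hypothetical.
import networkx as nx
import numpy as np
from scipy.stats import ks_2samp

def topology_features(edge_list):
    """Build a directed diffusion network and return simple topological features."""
    g = nx.DiGraph(edge_list)
    props = {
        "nodes": g.number_of_nodes(),
        "edges": g.number_of_edges(),
        "density": nx.density(g),
        "weakly_connected_components": nx.number_weakly_connected_components(g),
    }
    # In-degree centrality as one example of a per-node centrality distribution.
    centrality = np.array(list(nx.in_degree_centrality(g).values()))
    return props, centrality

# Hypothetical cascades as (retweeting user, retweeted user) pairs.
mainstream_edges = [(1, 0), (2, 0), (3, 0), (4, 1), (5, 1), (6, 2)]
misleading_edges = [(1, 0), (2, 1), (3, 2), (4, 3), (5, 4), (6, 5)]

props_main, cent_main = topology_features(mainstream_edges)
props_misl, cent_misl = topology_features(misleading_edges)

# Alignment-free comparison: Kolmogorov-Smirnov statistic between the two
# centrality distributions (one of many possible network distances).
ks_stat, p_value = ks_2samp(cent_main, cent_misl)
print(props_main)
print(props_misl)
print(f"KS distance between centrality distributions: {ks_stat:.3f} (p = {p_value:.3f})")
```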

    Correcting Fear-arousing Disinformation on Social Media in the Spread of a Health Virus: A Focus on Situational Fear, Situational Threat Appraisal, Belief in Disinformation, and Intention to Spread Disinformation on Social Media

    Disinformation is prevalent in the current social media environment and circulates just as quickly as truthful information. Research has investigated what motivates the spread of disinformation and how to combat it. However, limited research focuses on how fear-arousing disinformation during crises affects individuals’ belief in disinformation and to what extent corrective information can subdue the persuasive effects of fear-arousing disinformation. To address this gap, this research tests the effects of fear-arousing disinformation and different types of corrective information (i.e., no corrective information, simple corrective information, or narrative corrective information) on belief in disinformation and intentions to spread disinformation on social media during a crisis: the spread of an unknown health virus. Furthermore, drawing on the important roles of situational fear and threat appraisal in predicting people’s health behavioral changes, this research examines the underlying psychological mechanisms of fear and threat appraisal of a crisis in the effects of fear-arousing disinformation and different types of corrective information on belief in disinformation and intentions to spread disinformation on social media. Study 1 tests the interaction between fear-arousing disinformation and the presence of corrective information; accordingly, Study 1 used a 2 × 2 experiment: disinformation (fear-neutral vs. fear-arousing) × corrective information (none vs. simple). Study 2 advances Study 1 by testing whether narrative corrective information decreases belief in disinformation, using a 2 × 2 experiment: disinformation (fear-neutral vs. fear-arousing) × corrective information (simple vs. narrative). A total of 419 responses collected between January and February 2019 from Amazon MTurk were analyzed (205 for Study 1 and 214 for Study 2). The research notes several key findings: 1) fear-arousing disinformation does not make people believe the disinformation in risky situations, and it can even lead people to avoid the disinformation content as a coping strategy when no corrective information is presented; 2) simple corrective information serves as an effective correction strategy when fear-neutral disinformation is shown but can backfire when fear-arousing disinformation is presented; 3) corrective information that features individual narratives does not differ from simple alerts in its ability to reduce misperceptions, situational fear, situational threat appraisal, and intentions to spread disinformation on social media; and 4) across individual differences, social media usage (i.e., social media use for news, for fact-finding, and for social interaction, as well as health blog usage) emerges as a significant factor in how disinformation and corrective information are processed. By testing the effects of disinformation in terms of fear arousal, reflecting the crisis of a spreading health virus, this research addresses how fear-arousing disinformation and different forms of corrective information affect belief in disinformation and willingness to spread disinformation on social media, and how situational fear and situational threat appraisal may play a role in the mechanism of belief in disinformation.
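    To make the factorial structure concrete, here is a minimal, hypothetical sketch of how a 2 × 2 between-subjects design like Study 1 could be analyzed with a two-way ANOVA (belief in disinformation regressed on the disinformation and corrective-information factors and their interaction); the data, cell sizes, and column names are placeholders, not the study's dataset or analysis code:

```python
# Hypothetical 2 x 2 between-subjects analysis: disinformation type
# (fear-neutral vs. fear-arousing) x corrective information (none vs. simple),
# with belief in disinformation as the outcome. All data are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n_per_cell = 50  # illustrative cell size, not the study's

df = pd.DataFrame({
    "disinfo": np.repeat(["fear_neutral", "fear_arousing"], 2 * n_per_cell),
    "correction": np.tile(np.repeat(["none", "simple"], n_per_cell), 2),
    "belief": rng.normal(loc=4.0, scale=1.0, size=4 * n_per_cell),  # e.g. 7-point scale
})

# Two-way ANOVA with the disinfo x correction interaction term, which is where
# a "backfire" pattern (correction helping under fear-neutral but not
# fear-arousing disinformation) would show up.
model = ols("belief ~ C(disinfo) * C(correction)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```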