
    An Analysis of Misinformation on Facebook: Causes, Detection, and Mitigation

    Recent “fake news” incidents have heightened attention to and concern about the social and political impacts of misinformation spreading on the internet. These incidents include widely reported cases of misinformation related to the 2016 presidential election and to the COVID-19 virus, its treatments, and vaccines. This research focuses on the social media platform Facebook as a catalyst for the spread of misinformation, exploring the factors that drive misinformation on the platform as well as approaches to detecting and alleviating it. The study aims to contribute to the extant literature on misinformation by providing insight into the techniques used to spread misinformation, along with detection methods and mitigation techniques specific to Facebook.

    The Fake News Spreading Plague: Was it Preventable?

    In 2010, a paper entitled “From Obscurity to Prominence in Minutes: Political Speech and Real-Time Search” won the Best Paper Prize of the Web Science 2010 Conference. Among its findings were the discovery and documentation of what was termed a “Twitter-bomb”: an organized effort to spread misinformation about the Democratic candidate Martha Coakley through anonymous Twitter accounts. In this paper, after summarizing the details of that event, we outline the recipe by which social networks are used to spread misinformation. One of the most important steps in this recipe is the “infiltration” of a community of users who are already engaged in conversations about a topic, so that they can be used as organic spreaders of misinformation in their extended subnetworks. We then show how this misinformation-spreading recipe was successfully used to spread fake news during the 2016 U.S. Presidential Election. The main differences between the two scenarios are the use of Facebook instead of Twitter and the respective motivations (in 2010, political influence; in 2016, financial benefit through online advertising). After situating these events in the broader context of exploitation of the Web, we seize this opportunity to address the limited real-world reach of research findings and to start a conversation about how communities of researchers can increase their impact on societal issues.

    Psychological interventions countering misinformation in social media: a scoping review: research protocol

    Introduction: Misinformation is a complex concept whose meaning can encompass several different phenomena. Liang Wu et al. [1] consider a wide variety of online behavior to be misinformation: unintentionally spreading false information, intentionally spreading false information, disseminating urban legends, sharing fake news, unverified information, and rumors, as well as crowdturfing, spamming, trolling, propagating hate speech, or engaging in cyberbullying. The aim of this review is to address the following question: “What psychological interventions countering misinformation can be deployed on popular social media platforms (e.g., Twitter, Facebook)?” To address this question, we have designed a systematic scoping review procedure in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [2,3]. Effective measures for countering misinformation on social media are instrumental in facilitating and fostering reliable public conversation about political and social problems. Moreover, countering misinformation on social media platforms can also be considered a public health intervention, especially in times of health emergency such as the COVID-19 pandemic. Methods and analysis: A scoping review is a modern, rigorous approach to evidence synthesis, developed by, among others, the Joanna Briggs Institute team. For data extraction, we plan to use the following databases: Embase, Scopus, and PubMed. For paper selection, eligibility criteria were defined.

    It’s Always April Fools’ Day! On the Difficulty of Social Network Misinformation Classification via Propagation Features

    Given the huge impact that Online Social Networks (OSNs) have had on the way people get informed and form their opinions, they have become an attractive playground for malicious entities that want to spread misinformation and leverage its effects. Indeed, misinformation spreads easily on OSNs and is a huge threat to modern society, potentially influencing the outcome of elections or even putting people’s lives at risk (e.g., by spreading “anti-vaccine” misinformation). It is therefore of paramount importance for our society to have some sort of “validation” of information spreading through OSNs, and wide-scale validation would greatly benefit from automatic tools. In this paper, we show that it is difficult to carry out an automatic classification of misinformation considering only the structural properties of content-propagation cascades. We focus on structural properties because they would be inherently difficult to manipulate with the aim of circumventing classification systems. To support our claim, we carry out an extensive evaluation on Facebook posts belonging to conspiracy theories (as representative of misinformation) and scientific news (as representative of fact-checked content). Our findings show that conspiracy content actually reverberates in a way that is hard to distinguish from the way scientific content does: for the classification mechanisms we investigated, the classification F1-score never exceeds 0.65 during content-propagation stages and is still below 0.7 even after propagation is complete.
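
    To make the evaluation setting concrete, here is a minimal sketch of cascade-based classification: extract a few structural features from each propagation cascade and score a classifier by F1, mirroring the metric reported above. The networkx tree representation, the specific features, and the random-forest classifier are assumptions for illustration, not the exact mechanisms the paper investigated.

    ```python
    # Minimal sketch of misinformation classification from propagation
    # structure alone. Assumes each cascade is a networkx DiGraph rooted
    # at the original post; features and classifier are illustrative.
    import networkx as nx
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def structural_features(cascade, root):
        """Features that are hard to manipulate: size, depth, branching."""
        depths = nx.shortest_path_length(cascade, source=root)
        out_degrees = [d for _, d in cascade.out_degree()]
        return [
            cascade.number_of_nodes(),    # cascade size
            max(depths.values()),         # propagation depth
            float(np.mean(out_degrees)),  # mean branching factor
        ]

    def evaluate(cascades, roots, labels):
        """5-fold cross-validated F1 (label 1 = conspiracy, 0 = science)."""
        X = np.array([structural_features(c, r) for c, r in zip(cascades, roots)])
        y = np.array(labels)
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        return cross_val_score(clf, X, y, scoring="f1", cv=5)
    ```

    An F1-score stuck at or below the 0.65-0.7 range reported above would indicate that, as the paper argues, such structural signatures do not separate the two classes well.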

    False News On Social Media: A Data-Driven Survey

    In the past few years, the research community has dedicated growing interest to the issue of false news circulating on social networks. The widespread attention to detecting and characterizing false news has been motivated by the considerable real-world repercussions of this threat. As a matter of fact, social media platforms exhibit peculiar characteristics with respect to traditional news outlets that have been particularly favorable to the proliferation of deceptive information, and they present unique challenges for all kinds of potential intervention on the subject. As the issue becomes one of global concern, it is also gaining more attention in academia. The aim of this survey is to offer a comprehensive study of the recent advances in the detection, characterization, and mitigation of false news propagating on social media, as well as the challenges and open questions that await future research in the field. We use a data-driven approach, focusing on a classification of the features each study uses to characterize false information and on the datasets used to train classification methods. At the end of the survey, we highlight emerging approaches that look most promising for addressing false news.

    PopRank: Ranking pages' impact and users' engagement on Facebook

    Users online tend to acquire information adhering to their system of beliefs and to ignore dissenting information. Such dynamics might affect page popularity. In this paper we introduce an algorithm, which we call PopRank, to assess both the impact of Facebook pages and the engagement of users on the basis of their mutual interactions. The ideas behind PopRank are that (i) high-impact pages attract many users with low engagement, meaning that they receive comments from users who rarely comment, and (ii) high-engagement users interact with high-impact pages, that is, they mostly comment on pages with high popularity. The resulting ranking of pages can predict the number of comments a page will receive and the number of its posts. Pages’ impact turns out to be slightly dependent on pages’ informative content (e.g., science vs. conspiracy) but independent of users’ polarization.
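
    The two ideas above define page impact and user engagement in terms of each other, which naturally suggests a fixed-point iteration over the bipartite user-page comment graph. The sketch below implements that general recipe; the exact update rule, the normalization, and the poprank_like name are illustrative assumptions, not the paper’s actual PopRank formulation.

    ```python
    # HITS-like sketch of mutually defined page impact and user engagement.
    # A[u, p] = number of comments user u left on page p; assumes every
    # user and page appears in at least one comment.
    import numpy as np

    def poprank_like(A, iters=100, eps=1e-12):
        n_users, n_pages = A.shape
        engagement = np.ones(n_users)
        impact = np.ones(n_pages)
        for _ in range(iters):
            # idea (i): impact grows with received comments, weighted
            # towards users who rarely comment (low engagement)
            impact = A.T @ (1.0 / (engagement + eps))
            impact /= impact.sum()
            # idea (ii): engagement grows with comments on high-impact pages
            engagement = A @ impact
            engagement /= engagement.sum()
        return impact, engagement
    ```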

    Online Misinformation: Challenges and Future Directions

    Misinformation has become a common part of our digital media environments, and it is compromising the ability of our societies to form informed opinions. It generates misperceptions, which have affected decision-making processes in many domains, including the economy, health, the environment, and elections, among others. Misinformation and its generation, propagation, impact, and management are being studied through a variety of lenses (computer science, social science, journalism, psychology, etc.), since it widely affects multiple aspects of society. In this paper we analyse the phenomenon of misinformation from a technological point of view. We study the current socio-technical advancements towards addressing the problem, identify some of the key limitations of current technologies, and propose some ideas to target such limitations. The goal of this position paper is to reflect on the current state of the art and to stimulate discussion on the future design and development of algorithms, methodologies, and applications.

    Network segregation in a model of misinformation and fact checking

    Misinformation in the form of rumors, hoaxes, and conspiracy theories spreads on social media at alarming rates. One hypothesis is that, since social media are shaped by homophily, belief in misinformation may be more likely to thrive in those social circles that are segregated from the rest of the network. One possible antidote is fact checking, which, in some cases, is known to stop rumors from spreading further. However, fact checking may also backfire and reinforce belief in a hoax. Here we take into account the combination of network segregation, finite memory and attention, and fact-checking efforts. We consider a compartmental model of two interacting epidemic processes over a network that is segregated between gullible and skeptic users. Extensive simulations and a mean-field analysis show that a more segregated network facilitates the spread of a hoax only at low forgetting rates, but has no effect when agents forget at faster rates. This finding may inform the development of mitigation techniques and, more broadly, our understanding of the risks of uncontrolled misinformation online.
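
    For concreteness, the sketch below simulates a simplified version of such a model: susceptible (S), believer (B), and fact-checker (F) agents on a network split into two communities, with a forgetting rate that returns agents to the susceptible state. The transition rules and parameter names (spreading_rate, verify_prob, forget_prob) are assumptions reconstructed from the abstract, not the paper’s exact compartmental model.

    ```python
    # Simplified agent-based hoax/fact-check dynamics on a segregated
    # network. States: "S" (susceptible), "B" (believer), "F" (fact-checker).
    import random
    import networkx as nx

    def step(G, state, spreading_rate=0.5, verify_prob=0.1, forget_prob=0.3):
        new_state = dict(state)
        for node in G:
            nbrs = list(G[node])
            n_B = sum(state[v] == "B" for v in nbrs)
            n_F = sum(state[v] == "F" for v in nbrs)
            if state[node] == "S" and nbrs:
                # adopt the hoax or its debunking in proportion to
                # believing / fact-checking neighbours
                if random.random() < spreading_rate * (n_B + n_F) / len(nbrs):
                    believe = random.random() < n_B / max(n_B + n_F, 1)
                    new_state[node] = "B" if believe else "F"
            elif state[node] == "B":
                if random.random() < verify_prob:
                    new_state[node] = "F"   # believers may check the facts
                elif random.random() < forget_prob:
                    new_state[node] = "S"   # finite memory: forget the hoax
            elif state[node] == "F" and random.random() < forget_prob:
                new_state[node] = "S"       # fact-checkers forget too
        return new_state

    # Two equal communities; lowering p_out makes the network more segregated.
    G = nx.planted_partition_graph(2, 500, p_in=0.05, p_out=0.005, seed=1)
    state = {v: "S" for v in G}
    state[0] = "B"  # seed the hoax in the first ("gullible") community
    for _ in range(50):
        state = step(G, state)
    ```

    Sweeping forget_prob and p_out while tracking the final share of believers in each community would qualitatively reproduce the interaction between segregation and forgetting rate described above.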