203 research outputs found

    Topology comparison of Twitter diffusion networks effectively reveals misleading information

    In recent years, malicious information has grown explosively on social media, with serious social and political backlash. Recent important studies, featuring large-scale analyses, have produced deeper knowledge about this phenomenon, showing that misleading information spreads faster, deeper, and more broadly than factual information on social media, where echo chambers and algorithmic and human biases play an important role in diffusion networks. Following these directions, we explore the possibility of classifying news articles circulating on social media based exclusively on a topological analysis of their diffusion networks. To this aim we collected a large dataset of Twitter diffusion networks for news articles published by two distinct classes of sources: outlets that convey mainstream, reliable, and objective information, and outlets that fabricate and disseminate various kinds of misleading articles, including false news intended to harm, satire intended to make people laugh, click-bait news that may be entirely factual, and unproven rumors. We carried out an extensive comparison of these networks using several alignment-free approaches, including basic network properties, distributions of centrality measures, and network distances. We then evaluated to what extent these techniques allow us to discriminate between the networks associated with the two news domains. Our results highlight that the communities of users spreading mainstream news, compared to those sharing misleading news, tend to shape diffusion networks with subtle yet systematic differences that might be effectively employed to identify misleading and harmful information. Comment: a revised version is available in Scientific Reports.
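
    For context, here is a minimal sketch of the kind of alignment-free topological summaries such a comparison relies on, written with networkx; the feature list is illustrative, not the paper's exact feature set.

```python
# Illustrative alignment-free features of a directed diffusion network.
# The feature set is an assumption for demonstration, not the paper's own.
import networkx as nx

def topological_summary(G: nx.DiGraph) -> dict:
    und = G.to_undirected()
    in_degrees = [d for _, d in G.in_degree()]
    return {
        "nodes": G.number_of_nodes(),
        "edges": G.number_of_edges(),
        "density": nx.density(G),
        "weakly_connected_components": nx.number_weakly_connected_components(G),
        "avg_clustering": nx.average_clustering(und),
        "max_in_degree": max(in_degrees) if in_degrees else 0,
    }

# Toy retweet cascade: edges point from the retweeted user to the retweeter.
G = nx.DiGraph([("a", "b"), ("a", "c"), ("c", "d")])
print(topological_summary(G))
```

    Feature vectors like this one can then be fed to any standard classifier, which is the discrimination step the abstract describes.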

    Reinforcing attitudes in a gatewatching news era: individual-level antecedents to sharing fact-checks on social media

    Despite the prevalence of fact-checking, little is known about who posts fact-checks online. Based on a content analysis of Facebook and Twitter digital trace data and a linked online survey (N = 783), this study reveals that sharing fact-checks in political conversations on social media is linked to age, ideology, and political behaviors. Moreover, an individual's need for orientation (NFO) is an even stronger predictor of sharing a fact-check than ideological intensity or relevance alone, and it also influences the type of fact-check format (with or without a rating scale) that is shared. Finally, participants generally shared fact-checks to reinforce their existing attitudes. Consequently, concerns over the effects of fact-checking should move beyond a limited-effects approach (e.g., changing attitudes) to also include reinforcing accurate beliefs. Accepted manuscript.
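
    To make the predictor structure concrete, here is a toy logistic-regression sketch in the spirit of the study's analysis; the data are synthetic and the variable names and effect sizes are assumptions, not the study's estimates.

```python
# Synthetic illustration: predicting whether a user shared a fact-check
# from age, ideological intensity, and need for orientation (NFO).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 783  # matches the survey N, but all values below are fabricated stand-ins
age = rng.normal(45, 15, n)
ideology = np.abs(rng.normal(0, 1, n))   # ideological intensity
nfo = rng.normal(0, 1, n)                # need for orientation

# Generate synthetic outcomes, then fit the model one would run on real data.
logit = -1 + 0.02 * age + 0.3 * ideology + 0.8 * nfo
shared = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([age, ideology, nfo]))
model = sm.Logit(shared, X).fit(disp=0)
print(model.params)  # a larger NFO coefficient would mirror the reported finding
```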

    Measuring the Interference Effect of Bots in Disseminating Opposing Viewpoints Related to COVID-19 on Twitter Using Epidemiological Modeling

    The activity of bots can influence the opinions and behavior of people, especially within the political landscape, where hot-button issues are debated. To evaluate the bot presence within the propagation of opposing politically charged viewpoints on Twitter, we collected a comprehensive set of hashtags related to COVID-19. We then applied both the SIR (Susceptible, Infected, Recovered) and the SEIZ (Susceptible, Exposed, Infected, Skeptics) epidemiological models to three different dataset states: all tweets in a dataset, tweets by bots, and tweets by humans. Our results show that both models can capture the diffusion of opposing viewpoints on Twitter, with the SEIZ model outperforming the SIR. Additionally, although both models capture the diffusion of information spread by bots only with some difficulty, the SEIZ model again outperforms the SIR. Our analysis also reveals that the magnitude of the bot-induced diffusion of this type of information varies by subject.
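
    As a rough illustration of the modeling approach, the sketch below integrates the classic SIR equations with SciPy, treating the "infected" compartment as a proxy for users actively tweeting a viewpoint; the population size and rate parameters are assumed values for demonstration, not estimates from the study, and SEIZ extends this system with Exposed and Skeptic compartments.

```python
# Minimal SIR sketch of viewpoint diffusion; beta and gamma are assumptions,
# not parameters fitted to the study's Twitter data.
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma, N):
    S, I, R = y
    dS = -beta * S * I / N             # susceptible users encounter the viewpoint
    dI = beta * S * I / N - gamma * I  # "infected" users tweet, then lose interest
    dR = gamma * I                     # recovered users stop engaging
    return dS, dI, dR

N = 10_000                       # hypothetical audience size
t = np.linspace(0, 30, 300)      # days since the hashtag emerged
S, I, R = odeint(sir, (N - 10, 10, 0), t, args=(0.4, 0.1, N)).T
print(f"peak active spreaders: {I.max():.0f} on day {t[I.argmax()]:.1f}")
```

    Fitting such curves separately to all tweets, bot tweets, and human tweets is what allows the per-population comparison the abstract reports.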

    Understanding Bots on Social Media - An Application in Disaster Response

    Social media has become a primary platform for real-time information sharing among users. News on social media spreads faster than through traditional outlets, and millions of users turn to this platform to receive the latest updates on major events, especially disasters. Social media bridges the gap between the people who are affected by disasters, volunteers who offer contributions, and first responders. On the other hand, social media is a fertile ground for malicious users who purposefully disturb the relief processes facilitated on social media. These malicious users take advantage of social bots to overrun social media posts with fake images, rumors, and false information. This process causes distress and prevents actionable information from reaching the affected people. Social bots are automated accounts controlled by a malicious user, and they have become prevalent on social media in recent years. In spite of existing efforts toward understanding and removing bots on social media, current bot detection algorithms have at least two drawbacks: (1) general-purpose bot detection methods are designed to be conservative, not labeling a user as a bot unless the algorithm is highly confident, and (2) they overlook the effect of users who are manipulated by bots and (unintentionally) spread their content. This study is threefold. First, I design a machine learning model that uses the content and context of social media posts to detect actionable posts among them; it specifically focuses on tweets in which people ask for help after major disasters. Second, I focus on bots, which can facilitate the spreading of malicious content during disasters. I propose two methods for detecting bots on social media with a focus on the recall of the detection. Third, I study the characteristics of users who spread the content of malicious actors. These features have the potential to improve methods that detect malicious content such as fake news. Doctoral Dissertation, Computer Science, 201
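
    A minimal sketch of how one might bias a standard classifier toward recall, which is the emphasis of the dissertation's second part; the features, labels, and class weights here are synthetic stand-ins, not the dissertation's actual method.

```python
# Recall-oriented bot classification sketch with scikit-learn.
# Up-weighting the bot class trades precision for recall.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score, precision_score

rng = np.random.default_rng(0)
X = rng.random((1000, 5))                    # stand-in account features
y = (X[:, 0] + X[:, 1] > 1.2).astype(int)    # stand-in bot labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(class_weight={0: 1, 1: 5}, random_state=0)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("recall:", recall_score(y_te, pred), "precision:", precision_score(y_te, pred))
```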

    Social media, political polarization, and political disinformation: a review of the scientific literature

    The following report provides an overview of the current state of the literature on the relationship between social media, political polarization, and political “disinformation,” a term used to encompass a wide range of types of information about politics found online, including “fake news,” rumors, deliberately factually incorrect information, inadvertently factually incorrect information, politically slanted information, and “hyperpartisan” news. The review is organized into six sections, each of which can be read individually but which cumulatively provide an overview of what is known, and what remains unknown, about the relationship between social media, political polarization, and disinformation. The report concludes by identifying key gaps in our understanding of these phenomena and the data that are needed to address them.

    On Left and Right: Understanding the Discourse of Presidential Election in Social Media Communities

    As a promising platform for political discourse, social media has become a battleground for presidential candidates as well as their supporters and opponents. Stance detection is one of the key tasks in understanding political discourse. However, existing methods are dominated by supervised techniques, which require labeled data. Previous work on stance detection has largely been conducted at the post or user level. Although some studies have considered online political communities, they either select only a few communities or assume that those communities are coherent in stance. Political party extraction has rarely been addressed explicitly. To address these limitations, we developed an unsupervised learning approach to political party extraction and stance detection from social media discourse. We also analyzed and compared (sub)communities with respect to the characteristics of their political stances and parties. We further explored (sub)communities' shift in political stance after the 2020 US presidential election.
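
    To illustrate the unsupervised framing, here is a tiny clustering sketch that groups posts without labels using TF-IDF and k-means; the texts are invented and the pipeline is a generic stand-in, not the paper's actual approach.

```python
# Unsupervised grouping of posts into stance-like clusters: no labels needed,
# unlike the supervised stance-detection methods the abstract critiques.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

posts = [
    "four more years for our president",
    "time to vote him out in november",
    "count every vote",
    "stop the recount now",
]
X = TfidfVectorizer(stop_words="english").fit_transform(posts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(list(zip(labels, posts)))  # cluster ids stand in for unlabeled stances
```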

    How Misinformation Spreads Through Twitter

    While we live in the age of information, an inherent drawback of such high exposure to content is the precarious rise of misinformation. Whether it is called “alternative facts,” “fake news,” or just incorrect information, the spread of misinformation, because of its pervasiveness in nearly every political and policy discussion, is seen as one of the greatest challenges to overcome in the 21st century. As new technologies emerge, social media platforms like Twitter, Facebook, and YouTube are a major piece of both content creation and the perpetuation of misinformation. As news events emerge, whether a pandemic, a mass shooting, or an election campaign, it is difficult to separate fact from fiction when so many different “facts” appear. This study examines 14,545,945 tweets generated in the wake of the 1 October mass shooting and on its second anniversary to identify how much of the public response is fogged by information pollution, what kinds of misinformation are spread, and how misinformation spreads on Twitter and in news coverage.
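
    A study at this scale begins with corpus filtering and temporal bucketing; the sketch below shows that step with pandas on a toy frame. The column names, keywords, and rows are assumptions for illustration, not the study's actual pipeline or data.

```python
# Sketch: select event-related tweets by keyword and count them per day.
import pandas as pd

tweets = pd.DataFrame({
    "created_at": pd.to_datetime(["2017-10-01", "2017-10-02", "2019-10-01"]),
    "text": ["prayers for las vegas", "second shooter rumor",
             "two years since 1 October"],
})
# Case-insensitive keyword filter (regex alternation), then daily counts.
event = tweets[tweets["text"].str.contains("vegas|shooter|1 October", case=False)]
daily = event.set_index("created_at").resample("D")["text"].count()
print(daily)
```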