
    Network segregation in a model of misinformation and fact checking

    Misinformation in the form of rumors, hoaxes, and conspiracy theories spreads on social media at alarming rates. One hypothesis is that, since social media are shaped by homophily, belief in misinformation may be more likely to thrive in social circles that are segregated from the rest of the network. One possible antidote is fact checking, which in some cases is known to stop rumors from spreading further. However, fact checking may also backfire and reinforce belief in a hoax. Here we take into account the combination of network segregation, finite memory and attention, and fact-checking efforts. We consider a compartmental model of two interacting epidemic processes over a network that is segregated between gullible and skeptic users. Extensive simulations and mean-field analysis show that a more segregated network facilitates the spread of a hoax only at low forgetting rates, but has no effect when agents forget at faster rates. This finding may inform the development of mitigation techniques and, more broadly, the assessment of the risks of uncontrolled misinformation online.
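
    To make the dynamics above concrete, here is a minimal mean-field sketch in Python of a hoax/fact-check compartmental model with a forgetting rate. The compartment names (S, B, F) and the parameter names (beta, alpha, p_verify, p_forget) are illustrative assumptions, not the paper's exact formulation.

        # Mean-field sketch: Susceptible (S), hoax Believers (B), Fact-checkers (F).
        # Parameters are assumptions: beta = spreading rate, alpha = hoax
        # credibility boost, p_verify = rate at which believers check the facts,
        # p_forget = finite memory returning agents to S.
        def simulate(beta=0.5, alpha=0.3, p_verify=0.1, p_forget=0.8,
                     steps=500, seed=0.01):
            S, B, F = 1.0 - seed, seed, 0.0
            for _ in range(steps):
                f_B = beta * B * (1 + alpha) / 2   # pull of the hoax
                f_F = beta * F * (1 - alpha) / 2   # pull of the fact-check
                new_B, new_F = S * f_B, S * f_F
                verified = B * p_verify            # believers who verify
                S += B * p_forget + F * p_forget - new_B - new_F
                B += new_B - verified - B * p_forget
                F += new_F + verified - F * p_forget
            return S, B, F

        # Fast forgetting: the hoax dies out in this toy model.
        print(simulate(p_forget=0.8))
        # Slow forgetting: believers can persist.
        print(simulate(p_forget=0.05))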

    Studying Fake News via Network Analysis: Detection and Mitigation

    Social media is becoming increasingly popular for news consumption due to its easy access, fast dissemination, and low cost. However, social media also enable the wide propagation of "fake news", i.e., news with intentionally false information. Fake news on social media poses significant negative societal effects and also presents unique challenges. To tackle these challenges, many existing works exploit various features, from a network perspective, to detect and mitigate fake news. In essence, the news dissemination ecosystem on social media involves three dimensions: a content dimension, a social dimension, and a temporal dimension. In this chapter, we review network properties for studying fake news, introduce popular network types, and show how these networks can be used to detect and mitigate fake news on social media. (Submitted as an invited book chapter in Lecture Notes in Social Networks, Springer Press.)
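
    As a toy illustration of the kind of network features such methods rely on, the following Python sketch builds a small diffusion cascade and computes a few structural properties; the edge list and the feature choices are assumptions for illustration, not the chapter's method.

        import networkx as nx

        # Toy cascade: (spreader, re-spreader) pairs for a single news item.
        cascade = nx.DiGraph([("source", "u1"), ("source", "u2"),
                              ("u1", "u3"), ("u3", "u4")])

        features = {
            "size": cascade.number_of_nodes(),             # users who shared it
            "depth": nx.dag_longest_path_length(cascade),  # longest sharing chain
            "max_breadth": max(d for _, d in cascade.out_degree()),
        }
        print(features)  # {'size': 5, 'depth': 3, 'max_breadth': 2}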

    Structural power and the evolution of collective fairness in social networks

    From work contracts and group-buying platforms to political coalitions and international climate and economic summits, individuals often assemble in groups that must collectively reach decisions that may favor each party unequally. Here we quantify to what extent our network ties promote the evolution of collective fairness in group interactions, modeled by means of Multiplayer Ultimatum Games (MUG). We show that a single topological feature of social networks, which we call structural power, has a profound impact on the tendency of individuals to take decisions that favor each party equally. Increased fair outcomes are attained whenever structural power is high, such that the networks that tie individuals allow them to meet the same partners in different groups, thus providing the opportunity to strongly influence each other. On the other hand, the absence of such close peer-influence relationships dismisses any positive effect created by the network. Interestingly, we show that increasing the structural power of a network leads to the appearance of well-defined modules, as found in human social networks that often exhibit community structure, providing an interaction environment that maximizes collective fairness. This research was supported by Fundação para a Ciência e a Tecnologia (FCT) through grants SFRH/BD/94736/2013, PTDC/EEI-SII/5081/2014, and PTDC/MAT/STA/3358/2014, and by multi-annual funding of CBMA and INESC-ID (under projects UID/BIA/04050/2013 and UID/CEC/50021/2013) provided by FCT. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
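
    A rough reading of the structural-power idea, as described above, is that linked individuals are powerful when they keep meeting in the same groups. The Python sketch below takes groups to be the closed neighbourhoods of nodes and scores each tie by group co-occurrence; this operationalization is an assumption for illustration, and the paper's exact definition may differ.

        import networkx as nx

        def structural_power(G):
            # One group per node: the node plus its neighbours (an assumed
            # group definition, mirroring neighbourhood-based group games).
            groups = [set(G[v]) | {v} for v in G]
            scores = []
            for i, j in G.edges():
                shared = sum(1 for g in groups if i in g and j in g)
                total = sum(1 for g in groups if j in g)
                scores.append(shared / total)  # fraction of j's groups containing i
            return sum(scores) / len(scores)

        print(structural_power(nx.complete_graph(4)))  # 1.0: maximal overlap
        print(structural_power(nx.cycle_graph(12)))    # ~0.67: weaker overlap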

    Fact Checking in Knowledge Graphs with Ontological Subgraph Patterns


    The Fake News Vaccine - A Content-Agnostic System for Preventing Fake News from Becoming Viral.

    While spreading fake news is an old phenomenon, today social media enables misinformation to instantaneously reach millions of people. Content-based approaches to detecting fake news, typically based on automatic text checking, are limited: it is difficult to come up with general checking criteria, and once the criteria are known to an adversary, the checking can be easily bypassed. On the other hand, it is practically impossible for humans to check every news item, let alone prevent items from becoming viral. We present Credulix, the first content-agnostic system to prevent fake news from going viral. Credulix is implemented as a plugin on top of a social media platform and acts as a vaccine. Human fact-checkers review a small number of popular news items, which helps us estimate the inclination of each user to share fake news. Using the resulting information, we automatically estimate the probability that an unchecked news item is fake. We use a Bayesian approach that resembles Condorcet's Theorem to compute this probability, and we show how the computation can be performed in an incremental, and hence fast, manner.
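
    The Bayesian idea can be sketched as a naive log-odds accumulation over the users who shared an item, with per-user share rates estimated from their behaviour on fact-checked items. The variable names, the smoothing, and the prior below are assumptions; Credulix's exact formulas may differ.

        from math import exp, log

        def share_rates(user, smoothing=1.0):
            # Laplace-smoothed share rates on reviewed fake vs. true items
            # (the counts are assumed per-user bookkeeping).
            p_fake = (user["shared_fake"] + smoothing) / (user["seen_fake"] + 2 * smoothing)
            p_true = (user["shared_true"] + smoothing) / (user["seen_true"] + 2 * smoothing)
            return p_fake, p_true

        def prob_fake(sharers, prior=0.01):
            # Each sharer shifts the log-odds that the unchecked item is fake;
            # the sum can be updated incrementally as new shares arrive.
            log_odds = log(prior / (1 - prior))
            for user in sharers:
                p_fake, p_true = share_rates(user)
                log_odds += log(p_fake / p_true)
            return 1 / (1 + exp(-log_odds))

        gullible = {"shared_fake": 40, "seen_fake": 50, "shared_true": 5, "seen_true": 50}
        skeptic = {"shared_fake": 1, "seen_fake": 50, "shared_true": 20, "seen_true": 50}
        print(prob_fake([gullible] * 3))  # ~0.76: gullible sharers raise the odds
        print(prob_fake([skeptic] * 3))   # ~1e-5: skeptical sharers lower them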

    ClaimsKG: A Knowledge Graph of Fact-Checked Claims

    Various research areas at the intersection of computer and social sciences require a ground truth of contextualized claims labelled with their truth values in order to facilitate supervision, validation, or reproducibility of approaches dealing, for example, with fact-checking or analysis of societal debates. So far, no reasonably large, up-to-date, and queryable corpus of structured information about claims and related metadata has been publicly available. In an attempt to fill this gap, we introduce ClaimsKG, a knowledge graph of fact-checked claims, which facilitates structured queries about their truth values, authors, dates, journalistic reviews, and other kinds of metadata. ClaimsKG is generated through a semi-automated pipeline, which harvests data from popular fact-checking websites on a regular basis, annotates claims with related entities from DBpedia, and lifts the data to RDF using an RDF/S model that makes use of established vocabularies. In order to harmonise data originating from diverse fact-checking sites, we introduce normalised ratings as well as a simple claims coreference resolution strategy. The current knowledge graph, extensible to new information, consists of 28,383 claims published since 1996, amounting to 6,606,032 triples.
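
    Since ClaimsKG is exposed as RDF, it can be queried with SPARQL, for instance from Python via SPARQLWrapper. The endpoint URL and the exact vocabulary below (a schema.org-style ClaimReview modelling, consistent with the abstract's mention of established vocabularies) are assumptions; consult the ClaimsKG documentation for the authoritative endpoint and schema.

        from SPARQLWrapper import JSON, SPARQLWrapper

        sparql = SPARQLWrapper("https://data.gesis.org/claimskg/sparql")  # assumed URL
        sparql.setQuery("""
            PREFIX schema: <http://schema.org/>
            SELECT ?text ?rating WHERE {
                ?review a schema:ClaimReview ;
                        schema:claimReviewed ?text ;
                        schema:reviewRating/schema:alternateName ?rating .
            } LIMIT 10
        """)
        sparql.setReturnFormat(JSON)
        for row in sparql.query().convert()["results"]["bindings"]:
            print(row["rating"]["value"], "-", row["text"]["value"][:80])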