12,273 research outputs found

    Network segregation in a model of misinformation and fact checking

    Misinformation in the form of rumors, hoaxes, and conspiracy theories spreads on social media at alarming rates. One hypothesis is that, since social media are shaped by homophily, belief in misinformation may be more likely to thrive in social circles that are segregated from the rest of the network. One possible antidote is fact checking, which, in some cases, is known to stop rumors from spreading further. However, fact checking may also backfire and reinforce belief in a hoax. Here we take into account the combination of network segregation, finite memory and attention, and fact-checking efforts. We consider a compartmental model of two interacting epidemic processes over a network that is segregated between gullible and skeptic users. Extensive simulations and mean-field analysis show that a more segregated network facilitates the spread of a hoax only at low forgetting rates, but has no effect when agents forget at faster rates. This finding may inform the development of mitigation techniques and, more broadly, our understanding of the risks of uncontrolled misinformation online.
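The abstract does not give the model's exact parameters or update rules; a minimal Monte Carlo sketch of a compartmental believer/fact-checker model on a two-community network, with assumed parameter names (`seg`, `spread`, `check`, `forget`) and arbitrary defaults, might look like this:

```python
import random

def make_segregated_graph(n, seg, rng):
    """Two equal communities; 'seg' is the assumed fraction of each
    node's links that stay inside its own community."""
    half = n // 2
    adj = {v: set() for v in range(n)}
    for v in range(n):
        own = range(0, half) if v < half else range(half, n)
        other = range(half, n) if v < half else range(0, half)
        for _ in range(6):  # ~6 links per node (arbitrary choice)
            pool = own if rng.random() < seg else other
            u = rng.choice(list(pool))
            if u != v:
                adj[v].add(u)
                adj[u].add(v)
    return adj

def simulate(n=200, seg=0.9, spread=0.3, check=0.05, forget=0.1,
             steps=100, seed=0):
    """States: 'S' susceptible, 'B' believer, 'F' fact-checker.
    Believers infect susceptible neighbors, may start fact-checking,
    and both B and F forget (revert to S) at rate 'forget'."""
    rng = random.Random(seed)
    adj = make_segregated_graph(n, seg, rng)
    state = {v: 'S' for v in range(n)}
    state[0] = 'B'  # seed the hoax at one node
    for _ in range(steps):
        nxt = dict(state)
        for v in range(n):
            if state[v] == 'B':
                for u in adj[v]:
                    if state[u] == 'S' and rng.random() < spread:
                        nxt[u] = 'B'
                if rng.random() < check:
                    nxt[v] = 'F'
                elif rng.random() < forget:
                    nxt[v] = 'S'
            elif state[v] == 'F' and rng.random() < forget:
                nxt[v] = 'S'
        state = nxt
    return {s: sum(1 for st in state.values() if st == s) for s in 'SBF'}
```

Sweeping `seg` at a low versus a high `forget` value would reproduce the kind of comparison the abstract describes; the specific dynamics above are a simplification, not the paper's model.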

    $1.00 per RT #BostonMarathon #PrayForBoston: analyzing fake content on Twitter

    Online social media has emerged as one of the prominent channels for the dissemination of information during real-world events. Malicious content is posted online during such events and can result in damage, chaos, and monetary losses in the real world. We analyzed one such medium, Twitter, for content generated during the Boston Marathon blasts of April 15, 2013. A large amount of fake content and many malicious profiles originated on the Twitter network during this event. The aim of this work is to perform an in-depth characterization of the factors that influenced malicious content and profiles becoming viral. Our results show that 29% of the most viral content on Twitter during the Boston crisis consisted of rumors and fake content, while 51% was generic opinions and comments and the rest was true information. We found that a large number of users with high social reputation and verified accounts were responsible for spreading the fake content. Next, we used a regression prediction model to verify that the overall impact of all users who propagate fake content at a given time can be used to estimate the growth of that content in the future. Many malicious accounts created on Twitter during the Boston event were later suspended by Twitter. We identified over six thousand such user profiles and observed that the creation of such profiles surged considerably right after the blasts occurred. We also identified closed community structures and star formations in the interaction network of these suspended profiles.
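The abstract does not specify the regression model used. Purely as an illustration of the idea of estimating a content's future growth from the aggregate impact of its current spreaders, a one-variable least-squares fit could be sketched as follows; the function name, variable names, and all data values here are hypothetical:

```python
def fit_line(x, y):
    """Ordinary least squares for y ~ slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Made-up data: aggregate impact of spreaders at time t (x) versus
# the fake content's volume at time t+1 (y).
impact = [10.0, 25.0, 40.0, 80.0]
growth = [120.0, 300.0, 470.0, 950.0]
slope, intercept = fit_line(impact, growth)
forecast = slope * 100.0 + intercept  # predicted growth at impact 100
```

The paper's actual model likely uses richer user-level features; this sketch only captures the stated relationship between aggregate spreader impact and future growth.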

    The celestial city brought down to earth: Dmitrij Cernákov's interpretation of Rimskij-Korsakov's opera The Invisible City of Kitež and the Maiden Fevronia

    Rimsky-Korsakov's penultimate opera is tied to the symbolist movement in fin de siècle Russia. Although the composer did not intend it as a mystical work in the wake of Wagner's Parsifal, his symbolist collaborators invested much symbolist thinking in the production. In contemporary reception, however, the eschatological focus of the work is hard to reconcile with the expectations of modern audiences. This paper discusses the solutions elaborated by the Russian stage director Dimitry Tcherniakov in his production for the National Opera in Amsterdam. Tcherniakov succeeded in steering the work in the direction of a social and psychological drama. This paper analyses the strategies he employed to arrive at such a significant change of meaning without violating the work's spiritual tone.

    Who Started This Rumor? Quantifying the Natural Differential Privacy of Gossip Protocols

    Gossip protocols (also called rumor spreading or epidemic protocols) are widely used to disseminate information in massive peer-to-peer networks. These protocols are often claimed to guarantee privacy because of the uncertainty they introduce about which node started the dissemination. But is that claim really true? Can the source of a gossip safely hide in the crowd? This paper examines, for the first time, gossip protocols through a rigorous mathematical framework based on differential privacy to determine the extent to which the source of a gossip can be traced. Considering the case of a complete graph in which a subset of the nodes are curious, we study a family of gossip protocols parameterized by a "muting" parameter s: nodes stop emitting after each communication with a fixed probability 1-s. We first prove that the standard push protocol, corresponding to the case s = 1, does not satisfy differential privacy for large graphs. In contrast, the protocol with s = 0 (nodes forward only once) achieves optimal privacy guarantees, but at the cost of a drastic increase in spreading time compared to standard push, revealing an interesting tension between privacy and spreading time. Yet, surprisingly, we show that some choices of the muting parameter s lead to protocols that achieve an optimal order of magnitude in both privacy and speed. Privacy guarantees are obtained by showing that only a small fraction of the possible observations by curious nodes have different probabilities when two different nodes start the gossip, since the source node rapidly stops emitting when s is small. The speed is established by analyzing the mean dynamics of the protocol and leveraging concentration inequalities to bound the deviations from this mean behavior. We also confirm empirically that, with appropriate choices of s, we indeed obtain protocols that are very robust against concrete source-location attacks (such as maximum a posteriori estimates) while spreading the information almost as fast as the standard (and non-private) push protocol.
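A simplified simulation of the muted push protocol described above (complete graph; each active node pushes to one uniformly random other node per round, then keeps emitting with probability s) can be sketched as follows. The function name and defaults are assumptions, and the paper's exact protocol may differ in its details:

```python
import random

def muted_push(n=100, s=0.5, source=0, max_rounds=10_000, seed=0):
    """Push gossip on a complete graph of n >= 2 nodes with muting
    parameter s: after each communication a node keeps emitting with
    probability s (so it mutes with probability 1 - s).
    Returns (number of informed nodes, rounds elapsed)."""
    rng = random.Random(seed)
    informed = {source}
    active = {source}
    rounds = 0
    while active and len(informed) < n and rounds < max_rounds:
        rounds += 1
        next_active = set()
        for v in active:
            u = rng.randrange(n)          # pick a uniform random target
            while u == v:                 # excluding the sender itself
                u = rng.randrange(n)
            if u not in informed:
                informed.add(u)
                next_active.add(u)        # newly informed nodes emit
            if rng.random() < s:          # keep emitting w.p. s
                next_active.add(v)
        active = next_active
    return len(informed), rounds
```

With s = 1 this reduces to the standard push protocol; with s = 0 each node forwards at most once and dissemination can stall before reaching everyone, illustrating the privacy/speed tension the paper analyzes.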

    Online Misinformation: Challenges and Future Directions

    Misinformation has become a common part of our digital media environments, and it is compromising the ability of our societies to form informed opinions. It generates misperceptions, which have affected decision-making processes in many domains, including the economy, health, the environment, and elections, among others. Misinformation, including its generation, propagation, impact, and management, is being studied through a variety of lenses (computer science, social science, journalism, psychology, etc.), since it widely affects multiple aspects of society. In this paper we analyse the phenomenon of misinformation from a technological point of view. We study the current socio-technical advancements towards addressing the problem, identify some of the key limitations of current technologies, and propose some ideas to target those limitations. The goal of this position paper is to reflect on the current state of the art and to stimulate discussion on the future design and development of algorithms, methodologies, and applications.