
    Reduction of the Misinformation Effect by Arousal Induced After Learning

    Misinformation introduced after events have already occurred causes errors in later retrieval. Based on literature showing that arousal induced after learning enhances delayed retrieval, we investigated whether post-learning arousal can reduce the misinformation effect. A total of 251 participants viewed four short film clips, each followed by a retention test, which for some participants included misinformation. Afterward, participants viewed another film clip that was either arousing or neutral. One week later, the arousal group recognized significantly more veridical details and endorsed significantly fewer misinformation items than the neutral group. The findings suggest that arousal induced after learning reduced source confusion, allowing participants to better retrieve accurate details and to better reject misinformation.

    Regulating Misinformation

    The government has responded to misleading advertising by banning it, engaging in counter-advertising, and taxing the product. In this paper, we consider the social welfare effects of these different responses to misinformation. While misinformation lowers consumer surplus, its effect on social welfare is ambiguous: misleading advertising leads to over-consumption, but that may offset the under-consumption associated with monopoly prices. If all advertising is misinformation, then a tax or quantity restriction on advertising maximizes social welfare; other policy interventions are inferior and cannot improve on a pure advertising tax. If it is impossible to tax misleading information without also taxing utility-increasing advertising, then combining taxes or bans on advertising with other policies can increase welfare.
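
    A minimal sketch of the welfare trade-off this abstract describes, under textbook linear-demand monopoly assumptions (the notation is ours, not the paper's): misleading advertising inflates perceived demand and pushes the monopolist's output toward, and eventually past, the efficient level.

    % Sketch (our notation): true inverse demand p = a - q, marginal cost c.
    % Misleading advertising shifts PERCEIVED demand up by m, so the
    % monopolist equates perceived marginal revenue with cost and sells
    \[
      q(m) = \frac{a + m - c}{2},
      \qquad q(m) = q^{*} = a - c \iff m = a - c .
    \]
    % Welfare is evaluated against true willingness to pay:
    \[
      W(m) = \int_0^{q(m)} (a - x)\,dx - c\,q(m),
      \qquad \frac{dW}{dm} = \frac{a - c - m}{4} .
    \]
    % A little misinformation (m < a - c) offsets monopoly under-consumption
    % and raises welfare; beyond m = a - c it causes welfare-reducing
    % over-consumption, which is the ambiguity the abstract notes.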

    False memory ≠ false memory: DRM errors are unrelated to the misinformation effect

    The DRM method has proved to be a popular and powerful, if controversial, way to study 'false memories'. One reason for the controversy is that the extent to which the DRM effect generalises to other kinds of memory error has been neither satisfactorily established nor subject to much empirical attention. In the present paper we contribute data to this ongoing debate. One hundred and twenty participants took part in a standard misinformation effect experiment, in which they watched CCTV footage, were exposed to misleading post-event information about events depicted in the footage, and then completed free recall and recognition tests. Participants also completed a DRM test as an ostensibly unrelated filler task. Despite obtaining robust misinformation and DRM effects, there were no correlations between a broad range of misinformation and DRM effect measures (mean r = -.01), and this was not due to reliability issues with our measures or a lack of power. Thus DRM 'false memories' and misinformation effect 'false memories' do not appear to be equivalent.
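
    A minimal sketch of the kind of check behind the "not due to reliability issues" claim (the variable names and data are hypothetical, not the authors' measures): compute the observed correlation between a misinformation measure and a DRM measure, then apply the classical correction for attenuation to see whether measurement unreliability could be masking a true relationship.

    import numpy as np

    def attenuation_corrected_r(x, y, rel_x, rel_y):
        # Pearson r between x and y, plus the classical correction for
        # attenuation: r_true ~= r_observed / sqrt(rel_x * rel_y).
        r = np.corrcoef(x, y)[0, 1]
        return r, r / np.sqrt(rel_x * rel_y)

    # Hypothetical per-participant scores: misinformation-item endorsements
    # and DRM critical-lure false alarms (illustrative data only).
    rng = np.random.default_rng(0)
    misinfo_score = rng.normal(size=120)
    drm_false_alarms = rng.normal(size=120)

    r_obs, r_true = attenuation_corrected_r(
        misinfo_score, drm_false_alarms, rel_x=0.80, rel_y=0.75)
    print(f"observed r = {r_obs:.2f}, disattenuated r = {r_true:.2f}")

    If even the disattenuated correlation stays near zero, low reliability cannot explain the null result, which is the logic the abstract invokes.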

    When Government Misleads US: Sending Misinformation as Protectionist Devices

    In this paper, we examine the incentive of the home government to mislead home consumers by sending misinformation. We find that positive misinformation about home products and negative misinformation about foreign products always increase the profit of the home firm, while when the marginal costs of the home and foreign firms are the same, even a small amount of positive misinformation decreases consumer surplus. Moreover, when the home government maximizes home welfare, it chooses to send positive misinformation about the home product and negative misinformation about the foreign product. The stronger the competition faced by the home firm, the greater the amount of negative misinformation about the foreign product. By contrast, the optimal amount of misinformation on each product used to maximize world welfare is positive. We also demonstrate that trade liberalization can increase the incentive of the home government to send misinformation.
    Keywords: strategic misleading, misinformation, non-tariff trade policies
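
    A minimal sketch of how such misinformation can enter a standard differentiated-duopoly model (the notation is ours, not the paper's): the government's messages shift the qualities consumers perceive.

    % Representative consumer with quasilinear utility (our notation):
    % s_h, s_f are true qualities; m_h >= 0 is positive misinformation about
    % the home good, m_f >= 0 is negative misinformation about the foreign
    % good; gamma in (0,1) indexes the intensity of competition.
    \[
      U = (s_h + m_h)\,q_h + (s_f - m_f)\,q_f
          - \tfrac{1}{2}\bigl(q_h^2 + 2\gamma\,q_h q_f + q_f^2\bigr) + y ,
    \]
    % giving perceived inverse demands
    \[
      p_h = s_h + m_h - q_h - \gamma\,q_f ,
      \qquad
      p_f = s_f - m_f - q_f - \gamma\,q_h .
    \]
    % Both m_h and m_f raise demand for the home good (the latter through
    % substitution away from the foreign good), consistent with both kinds
    % of misinformation raising home profit; the substitution channel
    % strengthens as gamma grows, matching the competition result above.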

    The Fake News Spreading Plague: Was it Preventable?

    In 2010, a paper entitled "From Obscurity to Prominence in Minutes: Political Speech and Real-time Search" won the Best Paper Prize of the Web Science 2010 Conference. Among its findings were the discovery and documentation of what was termed a "Twitter-bomb": an organized effort to spread misinformation about the Democratic candidate Martha Coakley through anonymous Twitter accounts. In this paper, after summarizing the details of that event, we outline the recipe by which social networks are used to spread misinformation. One of the most important steps in this recipe is the "infiltration" of a community of users who are already engaged in conversations about a topic, so that they can serve as organic spreaders of misinformation in their extended subnetworks. We then show how this misinformation-spreading recipe was successfully used to spread fake news during the 2016 U.S. Presidential Election. The main differences between the two scenarios are the platform (Facebook instead of Twitter) and the motivation (political influence in 2010; financial benefit through online advertising in 2016). After situating these events in the broader context of exploiting the Web, we seize this opportunity to address the limited reach of research findings and to start a conversation about how communities of researchers can increase their impact on real-world societal issues.
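
    A minimal sketch of the "infiltration" step as an independent-cascade diffusion on a toy follower graph (the graph, names, and probability are hypothetical, not the paper's data): seeding one account inside an engaged community lets that community's own follower links spread the message organically.

    import random

    def independent_cascade(followers, seeds, p, rng=None):
        # One independent-cascade diffusion: every newly activated account
        # passes the message to each of its followers with probability p.
        rng = rng or random.Random(42)
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for user in frontier:
                for follower in followers.get(user, []):
                    if follower not in active and rng.random() < p:
                        active.add(follower)
                        nxt.append(follower)
            frontier = nxt
        return active

    # Hypothetical follower graph: the seed account has "infiltrated" a
    # community already talking about the topic.
    followers = {
        "seed_account": ["alice", "bob"],
        "alice": ["carol", "dan"],
        "bob": ["erin"],
        "carol": ["dan", "frank"],
    }
    reached = independent_cascade(followers, seeds=["seed_account"], p=0.5)
    print(sorted(reached))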

    Hoaxy: A Platform for Tracking Online Misinformation

    Massive amounts of misinformation have been observed to spread in uncontrolled fashion across social media. Examples include rumors, hoaxes, fake news, and conspiracy theories. At the same time, several journalistic organizations devote significant efforts to high-quality fact checking of online claims. The resulting information cascades contain instances of both accurate and inaccurate information, unfold over multiple time scales, and often reach audiences of considerable size. All these factors pose challenges for the study of the social dynamics of online news sharing. Here we introduce Hoaxy, a platform for the collection, detection, and analysis of online misinformation and its related fact-checking efforts. We discuss the design of the platform and present a preliminary analysis of a sample of public tweets containing both fake news and fact checking. We find that, in the aggregate, the sharing of fact-checking content typically lags that of misinformation by 10--20 hours. Moreover, fake news are dominated by very active users, while fact checking is a more grass-roots activity. With the increasing risks connected to massive online misinformation, social news observatories have the potential to help researchers, journalists, and the general public understand the dynamics of real and fake news sharing.
    Comment: 6 pages, 6 figures, submitted to Third Workshop on Social News on the Web
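
    A minimal sketch of how a 10--20 hour lag like the one reported could be estimated from hourly sharing counts (the data and function are illustrative, not Hoaxy's API): pick the shift that maximizes the correlation between the misinformation series and the fact-checking series.

    import numpy as np

    def best_lag(misinfo_counts, factcheck_counts, max_lag=48):
        # Return the lag (in hours) at which the fact-checking series best
        # matches the misinformation series shifted forward in time.
        best, best_r = 0, -np.inf
        for lag in range(max_lag + 1):
            a = misinfo_counts[: len(misinfo_counts) - lag]
            b = factcheck_counts[lag:]
            r = np.corrcoef(a, b)[0, 1]
            if r > best_r:
                best, best_r = lag, r
        return best, best_r

    # Hypothetical hourly tweet counts: fact checks echo the misinformation
    # signal with a ~12 hour delay plus noise.
    rng = np.random.default_rng(1)
    misinfo = rng.poisson(20, size=240).astype(float)
    factcheck = np.roll(misinfo, 12) * 0.3 + rng.normal(0, 1, 240)
    print(best_lag(misinfo, factcheck))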