
    Misinformation interventions decay rapidly without an immediate posttest

    In recent years, many kinds of interventions have been developed that seek to reduce susceptibility to misinformation. In two preregistered longitudinal studies (N1 = 503, N2 = 673), we leverage two previously validated “inoculation” interventions (a video and a game) to address two important questions in misinformation intervention research: (1) whether displaying additional stimuli (such as videos unrelated to misinformation) alongside an intervention interferes with its effectiveness, and (2) whether administering an immediate posttest (in the form of a social media post evaluation task after the intervention) plays a role in the longevity of the intervention. We find no evidence that other stimuli interfere with intervention efficacy, but strong evidence that immediate posttests reinforce what is learned from the intervention. In Study 1, we find that 48 h after watching a video, participants who received an immediate posttest continued to be significantly better at discerning untrustworthy social media posts from neutral ones than the control group (d = 0.416, p = .007), whereas participants who only received a posttest 48 h later showed no difference from the control group (d = 0.010, p = .854). In Study 2, we observe highly similar results for a gamified intervention and provide evidence for a causal mechanism: immediate posttests help strengthen people’s memory of the lessons learned in the intervention. We argue that active rehearsal and application of the relevant information are therefore requirements for the longevity of learning-based misinformation interventions, which has substantial implications for their scalability.
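    The studies quantify the intervention effect as discernment (how much better participants rate neutral posts than untrustworthy ones) and compare conditions with Cohen’s d. The abstract does not include the analysis code; the following is a minimal sketch, assuming per-participant trustworthiness ratings for each post type are available, of how such a discernment score and a pooled-SD Cohen’s d could be computed (all variable names and sample values are hypothetical).

        import numpy as np

        def discernment_score(ratings_neutral, ratings_untrustworthy):
            # Discernment as the gap between mean trustworthiness ratings given to
            # neutral posts and to untrustworthy posts (higher = better discernment).
            return np.mean(ratings_neutral) - np.mean(ratings_untrustworthy)

        def cohens_d(group_a, group_b):
            # Cohen's d for two independent groups, using a pooled standard deviation.
            a = np.asarray(group_a, dtype=float)
            b = np.asarray(group_b, dtype=float)
            pooled_var = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) / (len(a) + len(b) - 2)
            return (a.mean() - b.mean()) / np.sqrt(pooled_var)

        # Hypothetical per-participant discernment scores for two conditions.
        rng = np.random.default_rng(0)
        immediate_posttest = rng.normal(1.2, 0.8, 250)
        delayed_posttest = rng.normal(0.9, 0.8, 250)
        print(f"d = {cohens_d(immediate_posttest, delayed_posttest):.3f}")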

    Misleading But Not Fake: Measuring the Difference Between Manipulativeness Discernment and Veracity Discernment Using Psychometrically Validated Tests

    Misinformation continues to pose a substantial societal problem, but misinformation susceptibility has often been measured with non-validated tests. Furthermore, research shows that misleading content (implied misinformation) is much more common than outright false content (explicit misinformation). However, there is very little research on the predictors of belief in implied misinformation, and it is unknown whether susceptibility to explicit and implied misinformation is psychologically similar. To address these questions, we ran three preregistered studies (N1 = 487, N2 = 547, N3 = 490) in which we developed and validated the 24-item and 12-item versions of the Manipulative Online Content Recognition Inventory (MOCRI), a test that measures a person’s ability to distinguish between misleading and neutral content. This test substantially outperforms other known predictors of misinformation susceptibility in predicting people’s ability to correctly identify many kinds of misleading content. We also show that susceptibility to misleading content and susceptibility to false content are related but psychologically distinct. Finally, we show that people who score high on the MOCRI are much better than low scorers at discerning manipulative from non-manipulative statements (i.e., they have better “discernment”), but this ability does not necessarily translate into better discernment in their sharing decisions or in their willingness to reply to manipulative vs. non-manipulative messages. Instead, people who are more resilient to manipulation are less likely to share and respond to both manipulative and non-manipulative content.
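    The MOCRI measures how well a person distinguishes misleading from neutral content (manipulativeness discernment). The abstract does not specify the scoring rule, so the following is only an illustrative sketch of one common way to operationalize discernment: the hit rate on manipulative items minus the false-alarm rate on neutral items (the item split and responses below are hypothetical).

        def manipulativeness_discernment(flagged_manipulative, flagged_neutral):
            # Hit rate (manipulative items correctly flagged) minus false-alarm rate
            # (neutral items incorrectly flagged); ranges from -1 to +1.
            hit_rate = sum(flagged_manipulative) / len(flagged_manipulative)
            false_alarm_rate = sum(flagged_neutral) / len(flagged_neutral)
            return hit_rate - false_alarm_rate

        # One hypothetical participant's responses (1 = judged manipulative, 0 = not),
        # assuming 12 manipulative and 12 neutral items on the 24-item version.
        flagged_manipulative = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1]  # 10/12 hits
        flagged_neutral = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0]      # 2/12 false alarms
        print(round(manipulativeness_discernment(flagged_manipulative, flagged_neutral), 3))  # 0.667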