    Processing political misinformation: comprehending the Trump phenomenon

    This study investigated the cognitive processing of true and false political information. Specifically, it examined the impact of source credibility on the assessment of veracity when information comes from a polarizing source (Experiment 1), and the effectiveness of explanations when they come from one's own political party or an opposition party (Experiment 2). These experiments were conducted prior to the 2016 Presidential election. Participants rated their belief in factual and incorrect statements that Trump made on the campaign trail; facts were subsequently affirmed and misinformation retracted. Participants then re-rated their belief immediately or after a delay. Experiment 1 found that (i) if information was attributed to Trump, Republican supporters of Trump believed it more than if it was presented without attribution, whereas the opposite was true for Democrats, and (ii) although Trump supporters reduced their belief in misinformation items following a correction, they did not change their voting preferences. Experiment 2 revealed that the explanation's source had relatively little impact, and belief updating was more influenced by the perceived credibility of the individual initially purporting the information. These findings suggest that people use political figures as a heuristic to guide evaluation of what is true or false, yet do not necessarily insist on veracity as a prerequisite for supporting political candidates.

    The role of familiarity in correcting inaccurate information

    Correction format has a limited role when debunking misinformation

    No full text

    Discrediting disinformation sources: Experiment 2

    No full text
    This study aims to examine how disreputable sources can be discredited; it is a follow-up experiment to “Discrediting disinformation sources”. We will examine how people evaluate misinformation from a source after finding out that the source has previously spread misinformation, is not qualified to give accurate information, or both. All participants will read a description of a self-proclaimed cancer expert. This first description should leave participants with the impression that the individual is a reputable source. Participants will be asked to rate this individual's credibility (general credibility, trustworthiness, and expertise) and their feelings towards this person. Next, participants will be exposed to one of four conditions: (1) misinformation said by the source will be corrected, (2) the source's expertise will be revealed to be insufficient to give accurate information, (3) a combined misinformation-correction and low-expertise condition, or (4) a no-intervention control. We will then re-test participants' perceived source credibility and feelings towards the source. We will also test belief in new misinformation from this source, and how this interacts with participants' complementary and alternative medicine beliefs, relationship with cancer, and health literacy.

    Examining the replicability of backfire effects after standalone corrections

    No full text
    Corrections are a frequently used and effective tool for countering misinformation. However, concerns have been raised that corrections may introduce false claims to new audiences when the misinformation is novel. This is because boosting the familiarity of a claim can increase belief in that claim, so exposing new audiences to novel misinformation, even as part of a correction, may inadvertently increase misinformation belief. Such an outcome could be conceptualized as a familiarity backfire effect, whereby a familiarity boost increases false-claim endorsement above a control-condition or pre-correction baseline. Here, we examined whether standalone corrections (that is, corrections presented without initial misinformation exposure) can backfire and increase participants' reliance on the misinformation in their subsequent inferential reasoning, relative to a no-misinformation, no-correction control condition. Across three experiments (total N = 1156), we found that standalone corrections did not backfire immediately (Experiment 1) or after a one-week delay (Experiment 2). However, there was some mixed evidence that corrections may backfire when there is skepticism regarding the correction (Experiment 3): the standalone correction backfired in open-ended responses, but only when there was skepticism towards the correction, and this result did not replicate with the rating-scales measure. Future research should further examine whether skepticism towards the correction is a replicable mechanism by which backfire effects occur.

    Memory failure predicts belief regression after the correction of misinformation

    No full text
    After misinformation has been corrected, people initially update their belief extremely well. However, this change is rarely sustained over time, with belief returning towards pre-correction levels; this is called belief regression. The current study aimed to examine the association between memory for the correction and belief regression, and whether corrected misinformation suffers from belief regression more than affirmed facts. Participants from Prolific Academic (N = 612) rated the veracity of 16 misinformation and 16 factual items and were randomly assigned to a correction condition or a test-retest control. Immediately after misinformation was corrected and facts were affirmed, participants re-rated their belief and were asked whether they could remember the items' presented veracity. Participants repeated this post-test one month later. We found that belief and memory were highly associated, both immediately (ρ = .51) and after one month (ρ = .82), and memory explained 66% of the variance in belief regression after correcting for measurement reliability. The rate of dissenting (accurately remembering that misinformation was presented as false but still believing it) remained stable between the immediate and delayed post-tests, while the rate of forgetting quadrupled. After one month, 57% of participants who believed the misinformation thought that the items had been presented to them as true. Belief regression was more pronounced for misinformation than for facts, but this difference was greatly attenuated once pre-test belief was equated. Together, these results indicate that memory plays a fundamental role in belief regression, and that repeated corrections could be an effective method to counteract this phenomenon.
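    A brief sketch of the arithmetic behind the "66% of the variance" figure, assuming the reliability correction was Spearman's classic correction for attenuation (the abstract does not name the method, and the reliability terms below are placeholders rather than reported values):

        % Spearman's correction for attenuation (assumed here, for illustration):
        % r_xy is the observed correlation between memory for the correction and
        % belief regression; r_xx and r_yy are the reliabilities of the two measures.
        \[ \rho_{\mathrm{corrected}} = \frac{r_{xy}}{\sqrt{r_{xx}\, r_{yy}}} \]
        % The variance explained is the square of the corrected correlation;
        % a corrected correlation of about .81 recovers the reported figure:
        \[ R^{2} = \rho_{\mathrm{corrected}}^{2} \approx 0.81^{2} \approx 0.66 \]

    On this reading, the 66% figure implies a reliability-corrected memory-belief-regression correlation of roughly .81 (since the square root of .66 is about .81).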

    Correcting vaccine misinformation: A failure to replicate familiarity or fear-driven backfire effects

    No full text
    Individuals often continue to rely on misinformation in their reasoning and decision making even after it has been corrected. This is known as the continued influence effect, and one of its presumed drivers is misinformation familiarity. As continued influence can promote misguided or unsafe behaviours, it is important to find ways to minimize the effect by designing more effective corrections. It has been argued that correction effectiveness is reduced if the correction repeats the to-be-debunked misinformation, thereby boosting its familiarity. Some have even suggested that this familiarity boost may cause a correction to inadvertently increase subsequent misinformation reliance, a phenomenon termed the familiarity backfire effect. A study by Pluviano et al. (2017) found evidence for this phenomenon using vaccine-related stimuli: repeating vaccine "myths" and contrasting them with corresponding facts backfired relative to a control condition, ironically increasing false vaccine beliefs. The present study sought to replicate and extend that work. We included four conditions from the original Pluviano et al. study: myths vs. facts, a visual infographic, a fear appeal, and a control condition. We also added a "myths-only" condition, which simply repeated false claims and labelled them as false; theoretically, this condition should be the most likely to produce familiarity backfire. Participants received vaccine-myth corrections and were tested immediately post-correction, and again after a seven-day delay. We found that the myths vs. facts condition reduced vaccine misconceptions. None of the conditions increased vaccine misconceptions relative to control at either timepoint, or relative to a pre-intervention baseline; thus, no backfire effects were observed. This failure to replicate adds to the mounting evidence against familiarity backfire effects and has implications for vaccination communications and the design of debunking interventions.