Can corrections spread misinformation to new audiences? Testing for the elusive familiarity backfire effect
Misinformation often continues to influence inferential reasoning after clear and credible corrections are provided; this effect is known as the continued influence effect. It has been theorized that this effect is partly driven by misinformation familiarity. Some researchers have even argued that a correction should avoid repeating the misinformation, as the correction itself could serve to inadvertently enhance misinformation familiarity and may thus backfire, ironically strengthening the very misconception it aims to correct. While previous research has found little evidence of such familiarity backfire effects, there remains one situation where they may yet arise: when correcting entirely novel misinformation, where corrections could serve to spread misinformation to new audiences. The present paper presents three experiments (total N = 1,718) investigating the possibility of familiarity backfire within the context of correcting novel misinformation claims. While there was variation across experiments, overall there was substantial evidence against familiarity backfire. Corrections that repeated novel misinformation claims did not lead to stronger misconceptions compared to a control group never exposed to the false claims or corrections. This suggests that it is safe to repeat misinformation when correcting it, even when the audience might be unfamiliar with the misinformation.
Using the president’s tweets to understand political diversion in the age of social media
(This paper is in press, Nature Communications). Social media has arguably shifted political agenda-setting power away from mainstream media onto politicians. Current U.S. President Trump's reliance on Twitter is unprecedented, but the underlying implications for agenda setting are poorly understood. Using the president as a case study, we present evidence suggesting that President Trump's use of Twitter diverts crucial media (The New York Times and ABC News) from topics that are potentially harmful to him. We find that increased media coverage of the Mueller investigation is immediately followed by Trump tweeting increasingly about unrelated issues. This increased activity, in turn, is followed by a reduction in coverage of the Mueller investigation, a finding that is consistent with the hypothesis that President Trump's tweets may also successfully divert the media from topics that he considers threatening. The pattern is absent in placebo analyses involving Brexit coverage and several other topics that do not present a political risk to the president. Our results are robust to the inclusion of numerous control variables and examination of several alternative explanations, although the generality of the successful diversion must be established by further investigation.
Processing political misinformation: comprehending the Trump phenomenon
This study investigated the cognitive processing of true and false political information. Specifically, it examined the impact of source credibility on the assessment of veracity when information comes from a polarizing source (Experiment 1), and the effectiveness of explanations when they come from one's own political party or an opposition party (Experiment 2). These experiments were conducted prior to the 2016 Presidential election. Participants rated their belief in factual and incorrect statements that President Trump made on the campaign trail; facts were subsequently affirmed and misinformation retracted. Participants then re-rated their belief immediately or after a delay. Experiment 1 found that (i) if information was attributed to Trump, Republican supporters of Trump believed it more than if it was presented without attribution, whereas the opposite was true for Democrats, and (ii) although Trump supporters reduced their belief in misinformation items following a correction, they did not change their voting preferences. Experiment 2 revealed that the explanation's source had relatively little impact, and belief updating was more influenced by the perceived credibility of the individual initially purporting the information. These findings suggest that people use political figures as a heuristic to guide evaluation of what is true or false, yet do not necessarily insist on veracity as a prerequisite for supporting political candidates.
Attention and working memory capacity: insights from blocking, highlighting, and knowledge restructuring
The concept of attention is central to theorizing in learning as well as in working memory. However, research to date has yet to establish how attention as construed in one domain maps onto the other. We investigate two manifestations of attention in category- and cue-learning to examine whether they might provide common ground between learning and working memory. Experiment 1 examined blocking and highlighting effects in an associative learning paradigm, which are widely thought to be attentionally mediated. No relationship between attentional performance indicators and working memory capacity (WMC) was observed, despite the fact that WMC was strongly associated with overall learning performance. Experiment 2 used a knowledge restructuring paradigm, which is known to require recoordination of partial category knowledge using representational attention. We found that the extent to which people successfully recoordinated their knowledge was related to WMC. The results illustrate a link between WMC and representational, but not dimensional, attention in category learning.
Influence and seepage: An evidence-resistant minority can affect public opinion and scientific belief formation
Some well-established scientific findings may be rejected by vocal minorities because the evidence is in conflict with political views or economic interests. For example, the tobacco industry denied the medical consensus on the harms of smoking for decades, and the clear evidence about human-caused climate change is currently being rejected by many politicians and think tanks that oppose regulatory action. We present an agent-based model of the processes by which denial of climate change can occur, how opinions that run counter to the evidence can affect the scientific community, and how denial can alter the public discourse. The model involves an ensemble of Bayesian agents, representing the scientific community, that are presented with the emerging historical evidence of climate change and that also communicate the evidence to each other. Over time, the scientific community comes to agreement that the climate is changing. When a minority of agents is introduced that is resistant to the evidence, but that enters into the scientific discussion, the simulated scientific community still acquires firm knowledge but consensus formation is delayed. When both types of agents communicate with the general public, the public remains ambivalent about the reality of climate change. The model captures essential aspects of the actual evolution of scientific and public opinion during the last 4 decades.
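The abstract describes the model's mechanism only in prose. The sketch below is a minimal illustrative simulation in that spirit, assuming a toy binary hypothesis ("the climate is changing"), a simple Bayesian update on noisy yearly evidence, pairwise belief pooling, and an evidence-resistant minority that discounts supporting evidence; all names, parameter values, and update rules here are hypothetical and are not taken from the published model.

import random

# Toy agent-based model: Bayesian agents plus an evidence-resistant minority.
# All rules and parameters are illustrative assumptions, not the published model.

N_AGENTS = 50        # size of the simulated scientific community
N_RESISTANT = 5      # evidence-resistant minority
N_YEARS = 40         # years of emerging evidence
TREND = 0.02         # hypothetical signal strength per year
NOISE = 0.15         # observational noise

class Agent:
    def __init__(self, resistant=False):
        self.belief = 0.5        # subjective P(climate is changing)
        self.resistant = resistant

    def observe(self, evidence):
        # Map the noisy evidence onto a likelihood for the hypothesis;
        # resistant agents heavily discount evidence that supports it.
        likelihood = min(max(0.5 + evidence, 0.001), 0.999)
        if self.resistant:
            likelihood = 0.5 + 0.1 * (likelihood - 0.5)
        self.belief = (likelihood * self.belief) / (
            likelihood * self.belief + (1 - likelihood) * (1 - self.belief))

    def pool_with(self, other):
        # Agents partially pool beliefs when they communicate.
        mean = (self.belief + other.belief) / 2
        if not self.resistant:
            self.belief = 0.8 * self.belief + 0.2 * mean
        if not other.resistant:
            other.belief = 0.8 * other.belief + 0.2 * mean

agents = [Agent(resistant=i < N_RESISTANT) for i in range(N_AGENTS)]

for year in range(N_YEARS):
    evidence = TREND * year + random.gauss(0, NOISE)
    for agent in agents:
        agent.observe(evidence)
    a, b = random.sample(agents, 2)
    a.pool_with(b)

mainstream = [a.belief for a in agents if not a.resistant]
print(f"Mean mainstream belief after {N_YEARS} years: "
      f"{sum(mainstream) / len(mainstream):.2f}")

In this toy version, the communication step is where the resistant minority can slow consensus: mainstream agents move toward the pooled mean when they interact, whereas resistant agents never do, so pairings with them pull mainstream beliefs back toward ambivalence even as the evidence accumulates.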
- …