No evidence for systematic voter fraud: A guide to statistical claims about the 2020 election
After the 2020 US presidential election, Donald Trump refused to concede, alleging widespread and unparalleled voter fraud. Trump's supporters deployed several statistical arguments in an attempt to cast doubt on the result. Reviewing the most prominent of these statistical claims, we conclude that none of them is even remotely convincing. The common logic behind these claims is that, if the election had been fairly conducted, some feature of the observed 2020 election result would be unlikely or impossible. In each case, we find that the purportedly anomalous fact is either not a fact or not anomalous. © 2021 National Academy of Sciences. All rights reserved.
Replication Data for: Quid Pro Quo? Corporate Returns to Campaign Contributions
These files replicate the results in "Quid Pro Quo? Corporate Returns to Campaign Contributions" by Anthony Fowler, Haritz Garro, and Jörg L. Spenkuch.
Reducing misinformation sharing at scale using digital accuracy prompt ads
Interventions to reduce misinformation sharing have been a major focus in recent years. Developing "content-neutral" interventions that do not require specific fact-checks or warnings tied to individual false claims is particularly important for scalable solutions. Here, we provide the first evaluations of a content-neutral intervention to reduce misinformation sharing conducted at scale in the field. Specifically, across two on-platform randomized controlled trials, one on Meta's Facebook (N=33,043,471) and the other on Twitter (N=75,763), we find that simple messages reminding people to think about accuracy, delivered to large numbers of users via digital advertisements, reduce misinformation sharing, with effect sizes on par with those typically observed in digital advertising experiments. On Facebook, in the hour after receiving an accuracy prompt ad, we found a 2.6% reduction in the probability of being a misinformation sharer among users who had shared misinformation in the week prior to the experiment. On Twitter, over more than a week of receiving three accuracy prompt ads per day, we similarly found a 3.7% to 6.3% decrease in the probability of sharing low-quality content among active users who shared misinformation pre-treatment. These findings suggest that content-neutral interventions prompting users to consider accuracy have the potential to complement existing content-specific interventions in reducing the spread of misinformation online.