    Power Posing: P-Curving the Evidence

    In a well-known article, Carney, Cuddy, and Yap (2010) documented the benefits of “power posing”. In their study, participants (N = 42) who were randomly assigned to briefly adopt expansive, powerful postures sought more risk, had higher testosterone levels, and had lower cortisol levels than those assigned to adopt contractive, powerless postures. In their response to a failed replication by Ranehill et al. (2015), Carney, Cuddy, and Yap (2015) reviewed 33 successful studies investigating the effects of expansive vs. contractive posing, focusing on differences between these studies and the failed replication, to identify possible moderators that future studies could explore. But before spending valuable resources on that search, it is useful to establish whether the literature that Carney et al. (2015) cited actually suggests that power posing is effective. In this paper we rely on p-curve analysis to answer the following question: Does the literature reviewed by Carney et al. (2015) suggest the existence of an effect once we account for selective reporting? We conclude that it does not. The distribution of p values from those 33 studies is indistinguishable from what would be expected if (1) the average effect size were zero and (2) selective reporting (of studies and/or analyses) were solely responsible for the significant effects that are published. Although more highly powered future research may find replicable evidence for the purported benefits of power posing (or unexpected detriments), the existing evidence is too weak to justify a search for moderators or to advocate that people engage in power posing to better their lives.

    Moniker Maladies: When Names Sabotage Success

    In five studies, we found that people like their names enough to unconsciously pursue consciously avoided outcomes that resemble their names. Baseball players avoid strikeouts, but players whose names begin with the strikeout-signifying letter K strike out more than others (Study 1). All students want As, but students whose names begin with letters associated with poorer performance (C and D) achieve lower grade point averages (GPAs) than do students whose names begin with A and B (Study 2), especially if they like their initials (Study 3). Because lower GPAs lead to lesser graduate schools, students whose names begin with the letters C and D attend lower-ranked law schools than students whose names begin with A and B (Study 4). Finally, in an experimental study, we manipulated congruence between participants' initials and the labels of prizes and found that participants solve fewer anagrams when a consolation prize shares their first initial than when it does not (Study 5). These findings provide striking evidence that unconsciously desiring negative name-resembling performance outcomes can insidiously undermine the more conscious pursuit of positive outcomes.

    Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

    Overcoming Algorithm Aversion: People will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them

    Although evidence-based algorithms consistently outperform human forecasters, people often fail to use them after learning that they are imperfect, a phenomenon known as algorithm aversion. In this paper, we present three studies investigating how to reduce algorithm aversion. In incentivized forecasting tasks, participants chose between using their own forecasts or those of an algorithm that was built by experts. Participants were considerably more likely to choose to use an imperfect algorithm when they could modify its forecasts, and they performed better as a result. Notably, the preference for modifiable algorithms held even when participants were severely restricted in the modifications they could make (Studies 1-3). In fact, our results suggest that participants' preference for modifiable algorithms reflected a desire for some control over the forecasting outcome rather than a desire for greater control, as that preference was relatively insensitive to the magnitude of the modifications participants were able to make (Study 2). Additionally, we found that giving participants the freedom to modify an imperfect algorithm made them feel more satisfied with the forecasting process, more likely to believe that the algorithm was superior, and more likely to choose to use an algorithm to make subsequent forecasts (Study 3). This research suggests that one can reduce algorithm aversion by giving people some control, even a slight amount, over an imperfect algorithm's forecast.

    P-curve: A Key to The File Drawer

    Because scientists tend to report only studies (publication bias) or analyses (p-hacking) that “work,” readers must ask, “Are these effects true, or do they merely reflect selective reporting?” We introduce p-curve as a way to answer this question. P-curve is the distribution of statistically significant p values for a set of studies (ps < .05). Because only true effects are expected to generate right-skewed p-curves, containing more low (.01s) than high (.04s) significant p values, only right-skewed p-curves are diagnostic of evidential value. By telling us whether we can rule out selective reporting as the sole explanation for a set of findings, p-curve offers a solution to the age-old inferential problems caused by file drawers of failed studies and analyses.
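
    To make the diagnostic concrete, the sketch below computes a p-curve in Python. It is a minimal illustration, not the authors' implementation: the function name is hypothetical, and it uses only the simple binomial test for right skew (the published method also includes continuous tests).

```python
# Minimal p-curve sketch (hypothetical helper, not the authors' code).
# Under a true null, significant p values are uniform on (0, .05), so
# about half should fall below .025; a reliable excess of low p values
# (right skew) is the signature of evidential value.
import numpy as np
from scipy.stats import binomtest

def p_curve(p_values, alpha=0.05):
    sig = np.asarray([p for p in p_values if p < alpha])
    counts, _ = np.histogram(sig, bins=np.linspace(0, alpha, 6))  # five .01-wide bins
    low = int((sig < alpha / 2).sum())                            # p values below .025
    skew = binomtest(low, n=len(sig), p=0.5, alternative="greater")
    return counts, skew.pvalue

# Hypothetical example: a right-skewed set of significant results
counts, p_skew = p_curve([0.002, 0.004, 0.008, 0.011, 0.019, 0.032, 0.049])
print(counts)   # most mass sits in the lowest bins
print(p_skew)   # a small value indicates right skew
```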

    Better P-curves: Making P-Curve Analysis More Robust to Errors, Fraud, and Ambitious P-Hacking, a Reply to Ulrich and Miller

    When studies examine true effects, they generate right-skewed p-curves, distributions of statistically significant results with more low (.01s) than high (.04s) p values. What else can cause a right-skewed p-curve? First, we consider the possibility that researchers report only the smallest significant p value (as conjectured by Ulrich & Miller, 2015), concluding that it is a very uncommon problem. We then consider more common problems, including (a) p-curvers selecting the wrong p values, (b) fake data, (c) honest errors, and (d) ambitiously p-hacked (beyond p < .05) results. We evaluate the impact of these common problems on the validity of p-curve analysis and provide practical solutions that substantially increase its robustness.

    p-Curve and Effect Size: Correcting for Publication Bias Using Only Significant Results

    Journals tend to publish only statistically significant evidence, creating a scientific record that markedly overstates the size of effects. We provide a new tool that corrects for this bias without requiring access to nonsignificant results. It capitalizes on the fact that the distribution of significant p values, p-curve, is a function of the true underlying effect. Researchers armed only with sample sizes and test results of the published findings can correct for publication bias. We validate the technique with simulations and by reanalyzing data from the Many-Labs Replication project. We demonstrate that p-curve can arrive at conclusions opposite those of existing tools by reanalyzing the meta-analysis of the “choice overload” literature.
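
    The estimation idea can be sketched as follows (a hedged illustration, not the authors' code; it assumes every study is a two-sample t test with the same per-cell n, a restriction the actual method does not impose). A candidate true effect d implies a conditional-on-significance distribution of p values; converting each observed significant p value to its conditional CDF under d yields a "pp value" that is uniform when d is right, so we pick the d that makes the pp values most uniform.

```python
# Sketch of p-curve-based effect-size estimation under the assumptions
# above. Function and parameter names are hypothetical.
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

def pp_values(p_obs, n_per_cell, d):
    df = 2 * n_per_cell - 2
    ncp = d * np.sqrt(n_per_cell / 2)        # noncentrality of the t statistic
    t_crit = stats.t.isf(0.025, df)          # two-sided alpha = .05 cutoff
    t_obs = stats.t.isf(np.asarray(p_obs) / 2, df)
    power = stats.nct.sf(t_crit, df, ncp)    # P(significant | d)
    return stats.nct.sf(t_obs, df, ncp) / power

def estimate_d(p_obs, n_per_cell):
    # Minimize the Kolmogorov-Smirnov distance of the pp values from uniform
    loss = lambda d: stats.kstest(pp_values(p_obs, n_per_cell, d), "uniform").statistic
    return minimize_scalar(loss, bounds=(0.01, 2.0), method="bounded").x

# Hypothetical example: five significant p values from studies with n = 20 per cell
print(estimate_d([0.002, 0.004, 0.011, 0.021, 0.038], n_per_cell=20))
```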

    Hope Over Experience: Desirability and the Persistence of Optimism

    Many important decisions hinge on expectations of future outcomes. Decisions about health, investments, and relationships all depend on predictions of the future. These expectations are often optimistic: People frequently believe that their preferred outcomes are more likely than is merited. Yet it is unclear whether optimism persists with experience and, surprisingly, whether optimism is truly caused by desire. These are important questions because life’s most consequential decisions often feature both strong preferences and the opportunity to learn. We investigated these questions by collecting football predictions from National Football League fans during each week of the 2008 season. Despite accuracy incentives and extensive feedback, predictions about preferred teams remained optimistically biased through the entire season. Optimism was as strong after 4 months as it was after 4 weeks. We exploited variation in preferences and matchups to show that desirability fueled this optimistic bias.

    Specification Curve: Descriptive and Inferential Statistics on All Reasonable Specifications

    Empirical results often hinge on data analytic decisions that are simultaneously defensible, arbitrary, and motivated. To mitigate this problem we introduce Specification-Curve Analysis, which consists of three steps: (i) identifying the set of theoretically justified, statistically valid, and non-redundant analytic specifications; (ii) displaying alternative results graphically, allowing the identification of decisions that produce different results; and (iii) conducting statistical tests to determine whether the results as a whole are inconsistent with the null hypothesis. We illustrate its use by applying it to three published findings. One proves robust, one weak, and one not robust at all.
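
    As a concrete illustration of the three steps, here is a minimal Python sketch. The setup is hypothetical (a randomized binary "treatment" column in a pandas DataFrame, with specifications varying only the outcome and control set), and the inferential step uses a simple treatment shuffle, one instance of the under-the-null resampling the paper describes more generally.

```python
# Specification-curve sketch (hypothetical names and setup).
# Step (i): enumerate specifications; step (ii): sort estimates into a
# curve for plotting; step (iii): permutation test under the null.
import itertools
import numpy as np
import statsmodels.formula.api as smf

def spec_curve(df, outcomes, control_sets):
    estimates = []
    for y, ctrls in itertools.product(outcomes, control_sets):
        rhs = " + ".join(["treatment"] + list(ctrls))
        fit = smf.ols(f"{y} ~ {rhs}", data=df).fit()
        estimates.append(fit.params["treatment"])
    return np.sort(estimates)  # the curve, ready to display

def curve_test(df, outcomes, control_sets, n_perm=500, seed=0):
    rng = np.random.default_rng(seed)
    obs = np.median(spec_curve(df, outcomes, control_sets))
    null = [np.median(spec_curve(
                df.assign(treatment=rng.permutation(df["treatment"].values)),
                outcomes, control_sets))
            for _ in range(n_perm)]
    return np.mean(np.abs(null) >= abs(obs))  # two-sided permutation p value

# Hypothetical usage: two outcomes, with and without a covariate
# spec_curve(df, outcomes=["y1", "y2"], control_sets=[(), ("age",)])
```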