
    Moniker Maladies: When Names Sabotage Success

    In five studies, we found that people like their names enough to unconsciously pursue consciously avoided outcomes that resemble their names. Baseball players avoid strikeouts, but players whose names begin with the strikeout-signifying letter K strike out more than others (Study 1). All students want As, but students whose names begin with letters associated with poorer performance (C and D) achieve lower grade point averages (GPAs) than do students whose names begin with A and B (Study 2), especially if they like their initials (Study 3). Because lower GPAs lead to lesser graduate schools, students whose names begin with the letters C and D attend lower-ranked law schools than students whose names begin with A and B (Study 4). Finally, in an experimental study, we manipulated congruence between participants' initials and the labels of prizes and found that participants solve fewer anagrams when a consolation prize shares their first initial than when it does not (Study 5). These findings provide striking evidence that unconsciously desiring negative name-resembling performance outcomes can insidiously undermine the more conscious pursuit of positive outcomes.

    Better P-curves: Making P-Curve Analysis More Robust to Errors, Fraud, and Ambitious P-Hacking, a Reply to Ulrich and Miller

    When studies examine true effects, they generate right-skewed p-curves, distributions of statistically significant results with more low (.01s) than high (.04s) p values. What else can cause a right-skewed p-curve? First, we consider the possibility that researchers report only the smallest significant p value (as conjectured by Ulrich & Miller, 2015), concluding that it is a very uncommon problem. We then consider more common problems, including (a) p-curvers selecting the wrong p values, (b) fake data, (c) honest errors, and (d) ambitiously p-hacked (beyond p < .05) results. We evaluate the impact of these common problems on the validity of p-curve analysis, and provide practical solutions that substantially increase its robustness.

    P-curve: A Key to The File Drawer

    Because scientists tend to report only studies (publication bias) or analyses (p-hacking) that “work,” readers must ask, “Are these effects true, or do they merely reflect selective reporting?” We introduce p-curve as a way to answer this question. P-curve is the distribution of statistically significant p values for a set of studies (ps < .05). Because only true effects are expected to generate right-skewed p-curves—containing more low (.01s) than high (.04s) significant p values—only right-skewed p-curves are diagnostic of evidential value. By telling us whether we can rule out selective reporting as the sole explanation for a set of findings, p-curve offers a solution to the age-old inferential problems caused by file-drawers of failed studies and analyses.
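    The right-skew property the abstract relies on is easy to verify by simulation. The sketch below (illustrative only; sample sizes, effect size, and study counts are arbitrary choices, not values from the paper) runs many two-sample t-tests and shows that under a true effect the significant p values pile up near zero, while under the null they are uniform on (0, .05):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def significant_pvalues(effect_size, n_per_group=20, n_studies=20000):
        """Simulate many two-sample t-tests; keep only the significant p values."""
        a = rng.normal(0.0, 1.0, size=(n_studies, n_per_group))
        b = rng.normal(effect_size, 1.0, size=(n_studies, n_per_group))
        p = stats.ttest_ind(b, a, axis=1).pvalue
        return p[p < .05]

    # A true effect yields a right-skewed p-curve; the null yields a flat one.
    true_ps = significant_pvalues(effect_size=0.5)
    null_ps = significant_pvalues(effect_size=0.0)

    low_share_true = np.mean(true_ps < .01)  # share of "lows" (.01s), well above .20
    low_share_null = np.mean(null_ps < .01)  # ~ .20: p is uniform under the null
    ```

    The comparison of the two shares is the intuition behind the p-curve test: only a genuine effect concentrates significant p values at the low end.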

    Specification Curve: Descriptive and Inferential Statistics on All Reasonable Specifications

    Empirical results often hinge on data analytic decisions that are simultaneously defensible, arbitrary, and motivated. To mitigate this problem we introduce Specification-Curve Analysis, which consists of three steps: (i) identifying the set of theoretically justified, statistically valid, and non-redundant analytic specifications, (ii) displaying alternative results graphically, allowing the identification of decisions producing different results, and (iii) conducting statistical tests to determine whether as a whole results are inconsistent with the null hypothesis. We illustrate its use by applying it to three published findings. One proves robust, one weak, one not robust at all.
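    The three steps can be sketched in miniature. In the toy example below (all variable names and the covariate-choice space are hypothetical; real specification curves vary many more decisions, such as exclusions, operationalizations, and model forms), the "specifications" are simply which covariates enter a regression of `y` on a focal predictor `x`:

    ```python
    import itertools
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical dataset: outcome y, focal predictor x, two candidate covariates.
    n = 500
    x = rng.normal(size=n)
    c1 = rng.normal(size=n)
    c2 = rng.normal(size=n)
    y = 0.3 * x + 0.5 * c1 + rng.normal(size=n)
    covs = {"c1": c1, "c2": c2}

    # Step (i): enumerate the defensible, non-redundant specifications --
    # here, every subset of covariates that could enter the model.
    specs = [combo for r in range(3) for combo in itertools.combinations(covs, r)]

    estimates = []
    for spec in specs:
        X = np.column_stack([np.ones(n), x] + [covs[c] for c in spec])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        estimates.append(beta[1])  # coefficient on x under this specification

    # Step (ii): the sorted estimates form the specification curve; plotting them
    # reveals which decisions move the result. Step (iii) would compare the
    # observed curve against a permutation-based null distribution.
    curve = sorted(estimates)
    ```

    Here all four estimates sit near the true coefficient, so the finding would count as robust; a curve that crossed zero depending on covariate choice would not.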

    p-Curve and Effect Size: Correcting for Publication Bias Using Only Significant Results

    Journals tend to publish only statistically significant evidence, creating a scientific record that markedly overstates the size of effects. We provide a new tool that corrects for this bias without requiring access to nonsignificant results. It capitalizes on the fact that the distribution of significant p values, p-curve, is a function of the true underlying effect. Researchers armed only with sample sizes and test results of the published findings can correct for publication bias. We validate the technique with simulations and by reanalyzing data from the Many-Labs Replication project. We demonstrate that p-curve can arrive at conclusions opposite that of existing tools by reanalyzing the meta-analysis of the “choice overload” literature.
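    The core idea, that the shape of the significant-p-value distribution pins down the true effect, can be sketched as a grid search: for each candidate effect size, transform the observed significant t values into their conditional probabilities and pick the candidate that makes those probabilities most uniform. This is a simplified illustration under assumed study parameters (two-group designs, n = 30 per group, true d = 0.5), not the paper's exact estimator:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    # Simulate a "published literature": two-group studies with true d = 0.5,
    # n = 30 per group, where only significant results survive (publication bias).
    d_true, n = 0.5, 30
    df, ncp_true = 2 * n - 2, d_true * np.sqrt(n / 2)
    t_obs = stats.nct.rvs(df, ncp_true, size=2000, random_state=rng)
    t_crit = stats.t.ppf(0.975, df)       # two-sided .05 critical value
    t_sig = t_obs[t_obs > t_crit]         # the significant results we get to see

    def ks_loss(d):
        """Non-uniformity of the conditional p values under candidate effect d."""
        ncp = d * np.sqrt(n / 2)
        # P(result at least this extreme | result significant) under candidate d:
        pp = stats.nct.sf(t_sig, df, ncp) / stats.nct.sf(t_crit, df, ncp)
        return stats.kstest(pp, "uniform").statistic

    # The candidate effect that makes the conditional p values uniform is the
    # estimate; publication bias is irrelevant because we condition on p < .05.
    grid = np.linspace(0.0, 1.0, 41)
    d_hat = grid[np.argmin([ks_loss(d) for d in grid])]
    ```

    Because the estimator conditions on significance, discarding nonsignificant studies does not bias it, which is exactly what lets it work from the published record alone.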


    The Effect of Accuracy Motivation on Anchoring and Adjustment: Do People Adjust from Provided Anchors?

    Increasing accuracy motivation (e.g., by providing monetary incentives for accuracy) often fails to increase adjustment away from provided anchors, a result that has led researchers to conclude that people do not effortfully adjust away from such anchors. We challenge this conclusion. First, we show that people are typically uncertain about which way to adjust from provided anchors and that this uncertainty often causes people to believe that they have initially adjusted too far away from such anchors (Studies 1a and 1b). Then, we show that although accuracy motivation fails to increase the gap between anchors and final estimates when people are uncertain about the direction of adjustment, accuracy motivation does increase anchor–estimate gaps when people are certain about the direction of adjustment, and that this is true regardless of whether the anchors are provided or self-generated (Studies 2, 3a, 3b, and 5). These results suggest that people do effortfully adjust away from provided anchors but that uncertainty about the direction of adjustment makes that adjustment harder to detect than previously assumed. This conclusion has important theoretical implications, suggesting that currently emphasized distinctions between anchor types (self-generated vs. provided) are not fundamental and that ostensibly competing theories of anchoring (selective accessibility and anchoring-and-adjustment) are complementary.

    Correcting the Past: Failures to Replicate Psi

    Across 7 experiments (N = 3,289), we replicate the procedure of Experiments 8 and 9 from Bem (2011), which had originally demonstrated retroactive facilitation of recall. We fail to replicate that finding. We further conduct a meta-analysis of all replication attempts of these experiments and find that the average effect size (d = 0.04) is no different from 0. We discuss some reasons for differences between the results in this article and those presented in Bem (2011).

    When Advertisements Improve Television

    Though they have trouble predicting it, people adapt to most positive experiences. Consequently, an experience with a marvelous start can have a mild ending. If the experience is disrupted, however, the intensity can be prolonged, making the experience more enjoyable. Four studies found support for the hypothesis that disrupting television programs can make those programs more enjoyable. Although consumers thought that advertising disruptions would be aversive, the disruptions actually made the program more enjoyable to watch (Study 1). Subsequent studies showed that this was not due to evaluative contrast effects (Study 2) or the mere presence of advertisements (Study 3), and that the effect can emerge with non-advertising disruptions (Study 4).

    Is It About Giving or Receiving? The Determinants of Kindness and Happiness in Paying It Forward

    Three studies examined two forces behind paying it forward: reciprocation and generosity. In the absence of direct social pressure, generosity had a stronger influence on behavior than reciprocation. However, giving did not make people feel happier than receiving a kind act. Gift-givers and receivers displayed asymmetric beliefs about their own and others' happiness.