
    Spurious Also? Name-Similarity Effects (Implicit Egotism) in Employment Decisions

    Implicit egotism is the notion that major life decisions are influenced by name similarity. This paper revisits the evidence from the most systematic test of this hypothesis. Anseel & Duyck (2008) analyzed data from one third of all Belgian employees and found that a disproportionate fraction of them shared their initial with their employer. Using a data set of American employees, I replicate the finding, but new analyses strongly suggest it is due to reverse causality: the documented effect seems to be driven by people naming companies they start after themselves rather than by employees seeking out companies with which they share an initial. Walt Disney, for example, worked for a company starting with D (Disney World) not because of an unconscious attraction to that letter, but because the company was named after him.

    Direct Risk Aversion Evidence From Risky Prospects Valued Below Their Worst Outcome

    Why would people pay more for a $50 gift certificate than for the opportunity to receive a gift certificate worth either $50 or $100, with equal probability? This article examines three possible mechanisms for this recently documented uncertainty effect (UE): First, awareness of the better outcome may devalue the worse one. Second, the UE may have arisen in the original demonstration of this effect because participants misunderstood the instructions. Third, the UE may be due to direct risk aversion, that is, actual distaste for uncertainty. In Experiment 1, the UE was observed even though participants in the certainty condition were also aware of the better outcome; this result eliminates the first explanation. Experiment 2 shows that most participants understand the instructions used in the original study and that the UE is not caused by the few who do not. Overall, the experiments demonstrate that the UE is robust, large (prospects are valued at 65% of the value of the worse outcome), and widespread (at least 62% of participants exhibit it).

    Clouds Make Nerds Look Good: Field Evidence of the Impact of Incidental Factors on Decision Making

    Abundant experimental research has documented that incidental primes and emotions are capable of influencing people's judgments and choices. This paper examines whether the influence of such incidental factors is large enough to be observable in the field, by analyzing 682 actual university admission decisions. As predicted, applicants' academic attributes are weighted more heavily on cloudier days and non-academic attributes on sunnier days. The documented effects are of both statistical and practical significance: changes in cloud cover can increase a candidate's predicted probability of admission by an average of up to 11.9%. These results also shed light on the causes behind the long-demonstrated unreliability of experts making repeated judgments from the same data.

    Spurious? Name Similarity Effects (Implicit Egotism) in Marriage, Job, and Moving Decisions

    Three articles published in the Journal of Personality and Social Psychology have shown that a disproportionate share of people choose spouses, places to live, and occupations with names similar to their own. These findings, interpreted as evidence of implicit egotism, are included in most modern social psychology textbooks and many university courses. The current article successfully replicates the original findings but shows that they are most likely caused by a combination of cohort, geographic, and ethnic confounds as well as reverse causality.

    eBay's Crowded Evenings: Competition Neglect in Market Entry Decisions

    Do firms neglect competition when making entry decisions? This paper addresses this question by analyzing the time of day at which eBay sellers set their auctions to end. Consistent with competition neglect, it is found that (i) a disproportionate share of auctions end during peak bidding hours, (ii) such hours exhibit lower selling rates and prices, and (iii) peak listing is more prevalent among sellers likely to have chosen their ending time strategically, suggesting that disproportionate entry is a mistake driven by bounded rationality rather than mindlessness. The results highlight the importance for marketing researchers of assessing rather than assuming the rationality of firm behavior.

    Power Posing: P-Curving the Evidence

    In a well-known article, Carney, Cuddy, and Yap (2010) documented the benefits of “power posing”. In their study, participants (N=42) who were randomly assigned to briefly adopt expansive, powerful postures sought more risk, had higher testosterone levels, and had lower cortisol levels than those assigned to adopt contractive, powerless postures. In their response to a failed replication by Ranehill et al. (2015), Carney, Cuddy, and Yap (2015) reviewed 33 successful studies investigating the effects of expansive vs. contractive posing, focusing on differences between these studies and the failed replication, to identify possible moderators that future studies could explore. But before spending valuable resources on that, it is useful to establish whether the literature that Carney et al. (2015) cited actually suggests that power posing is effective. In this paper we rely on p-curve analysis to answer the following question: Does the literature reviewed by Carney et al. (2015) suggest the existence of an effect once we account for selective reporting? We conclude that it does not. The distribution of p-values from those 33 studies is indistinguishable from what is expected if (1) the average effect size were zero and (2) selective reporting (of studies and/or analyses) were solely responsible for the significant effects that are published. Although more highly powered future research may find replicable evidence for the purported benefits of power posing (or unexpected detriments), the existing evidence is too weak to justify a search for moderators or to advocate for people to engage in power posing to better their lives.

    Specification Curve: Descriptive and Inferential Statistics on All Reasonable Specifications

    Empirical results often hinge on data analytic decisions that are simultaneously defensible, arbitrary, and motivated. To mitigate this problem, we introduce Specification-Curve Analysis, which consists of three steps: (i) identifying the set of theoretically justified, statistically valid, and non-redundant analytic specifications, (ii) displaying alternative results graphically, allowing the identification of decisions producing different results, and (iii) conducting statistical tests to determine whether, as a whole, the results are inconsistent with the null hypothesis. We illustrate its use by applying it to three published findings. One proves robust, one weak, one not robust at all.
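Steps (i) and (ii) of the procedure can be illustrated with a toy sketch: estimate the effect of interest under every combination of analytic decisions, then sort the estimates so they can be plotted as a curve. This is my own minimal illustration, not the authors' implementation; the specification choices (a sample filter and an outcome transformation) and all names are hypothetical.

```python
from itertools import product
from statistics import mean

def ols_slope(x, y):
    """OLS slope of y on x (with intercept)."""
    mx, my = mean(x), mean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def specification_curve(x, y, filters, transforms):
    """Toy specification curve: estimate the slope under every
    combination of sample filter and outcome transformation
    (the set of 'reasonable specifications'), sorted for plotting.

    filters    -- dict: name -> predicate keep(xi, yi)
    transforms -- dict: name -> function applied to each yi
    """
    results = []
    for (fname, keep), (tname, f) in product(filters.items(), transforms.items()):
        pairs = [(xi, f(yi)) for xi, yi in zip(x, y) if keep(xi, yi)]
        xs, ys = zip(*pairs)
        results.append(((fname, tname), ols_slope(xs, ys)))
    return sorted(results, key=lambda t: t[1])
```

With data generated as y = 2x exactly, every specification recovers a slope of 2, so the curve is flat; in a real application the spread and sign changes across specifications are what the graphical display reveals.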

    p-Curve and Effect Size: Correcting for Publication Bias Using Only Significant Results

    Journals tend to publish only statistically significant evidence, creating a scientific record that markedly overstates the size of effects. We provide a new tool that corrects for this bias without requiring access to nonsignificant results. It capitalizes on the fact that the distribution of significant p-values, p-curve, is a function of the true underlying effect. Researchers armed only with the sample sizes and test results of the published findings can correct for publication bias. We validate the technique with simulations and by reanalyzing data from the Many-Labs Replication project. We demonstrate that p-curve can arrive at conclusions opposite those of existing tools by reanalyzing the meta-analysis of the “choice overload” literature.
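The core intuition behind p-curve can be sketched in a few lines: if the true effect is zero, significant p-values are uniformly distributed below the significance threshold, so roughly half should fall below half the threshold; an excess of very small p-values (right skew) signals a real effect. The following toy test is my own simplification of that logic using a binomial tail, not the authors' implementation.

```python
import math

def p_curve_binomial(p_values, alpha=0.05):
    """Toy p-curve test of right skew.

    Under a true null with selective reporting, significant p-values
    are uniform on (0, alpha), so each falls below alpha/2 with
    probability 0.5. Returns the observed share below alpha/2 and a
    one-sided binomial p-value for an excess of small p-values.
    Requires at least one significant p-value in the input.
    """
    sig = [p for p in p_values if p < alpha]
    n = len(sig)
    k = sum(p < alpha / 2 for p in sig)
    # one-sided tail P(X >= k) for X ~ Binomial(n, 0.5)
    tail = sum(math.comb(n, i) * 0.5**n for i in range(k, n + 1))
    return k / n, tail
```

A set of mostly tiny p-values yields a share well above one half and a small binomial tail, consistent with a true effect; a set spread evenly across (0, 0.05) yields a share near one half, consistent with a null effect plus selective reporting.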
