    Why Did It Take So Many Decades for the Behavioral Sciences to Develop a Sense of Crisis Around Methodology and Replication?

    For several decades, leading behavioral scientists have offered strong criticisms of the common practice of null hypothesis significance testing as producing spurious findings without strong theoretical or empirical support. But only in the past decade has this manifested as a full-scale replication crisis. We consider some possible reasons why, on or about December 2010, the behavioral sciences changed.

    Perceiver Effects as Projective Tests: What Your Perceptions of Others Say about You

    In three studies, we document various properties of perceiver effects—or how an individual generally tends to describe other people in a population. First, we document that perceiver effects have consistent relationships with dispositional characteristics of the perceiver, ranging from self-reported personality traits and academic performance to well-being and measures of personality disorders, to how liked the person is by peers. Second, we document that the covariation in perceiver effects among trait dimensions can be adequately captured by a single factor consisting of how positively others are seen across a wide range of traits (e.g., how nice, interesting, trustworthy, happy, and stable others are generally seen). Third, we estimate the one-year stability of perceiver effects and show that individual differences in the typical perception of others have a level of stability comparable to that of personality traits. The results provide compelling evidence that how individuals generally perceive others is a stable individual difference that reveals much about the perceiver's own personality.

    In Search of Our True Selves: Feedback as a Path to Self-Knowledge

    How can self-knowledge of personality be improved? What path is the most fruitful source for learning about our true selves? Previous research has noted two main avenues for learning about the self: looking inward (e.g., introspection) and looking outward (e.g., feedback). Although most of the literature on these topics does not directly measure the accuracy of self-perceptions (i.e., self-knowledge), we review these paths and their potential for improving self-knowledge. We conclude that explicit feedback, a largely unexamined path, is likely a fruitful avenue for learning about one's own personality. Specifically, we suggest that self-knowledge might be fully realized through the use of explicit feedback from close, knowledgeable others. As such, the road to self-knowledge likely cannot be traveled alone but must be traveled with close others who can help shed light on our blind spots.

    The transparency of quantitative empirical legal research published in highly ranked law journals (2018–2020): an observational study [version 1; peer review: 2 approved]

    Background: Scientists are increasingly concerned with making their work easy to verify and build upon. Associated practices include sharing data, materials, and analytic scripts, and preregistering protocols. This shift towards increased transparency and rigor has been referred to as a "credibility revolution." The credibility of empirical legal research has been questioned in the past due to its distinctive peer review system and because the legal background of its researchers means that many are not formally trained in study design or statistics. Still, there has been no systematic study of transparency and credibility-related characteristics of published empirical legal research. Methods: To fill this gap and provide an estimate of current practices that can be tracked as the field evolves, we assessed 300 empirical articles from highly ranked law journals, including both faculty-edited and student-edited journals. Results: We found high levels of article accessibility, especially among student-edited journals. Few articles stated that a study's data are available. Preregistration and availability of analytic scripts were very uncommon. Conclusion: We suggest that empirical legal researchers and the journals that publish their work cultivate norms and practices to encourage research credibility. Our estimates may be revisited to track the field's progress in the coming years.

    Use caution when applying behavioural science to policy

    Social and behavioural scientists have attempted to speak to the COVID-19 crisis. But is behavioural research on COVID-19 suitable for making policy decisions? We offer a taxonomy of 'evidence readiness levels' through which our science can advance to become suitable for policy. We caution practitioners to take extreme care when translating our findings into applications.

    The Transparency of Quantitative Empirical Legal Research (2018–2020)

    Scientists are increasingly concerned with making their work easy to verify and build upon. Associated practices include sharing data, materials, and analytic scripts, and preregistering protocols. This has been referred to as a "credibility revolution". The credibility of empirical legal research has been questioned in the past due to its distinctive peer review system and because the legal background of its researchers means that many are not formally trained in study design or statistics. Still, there has been no systematic study of transparency and credibility-related characteristics of published empirical legal research. To fill this gap and provide an estimate of current practices that can be tracked as the field evolves, we assessed 300 empirical articles from highly ranked law journals, including both faculty-edited and student-edited journals. We found high levels of article accessibility (86% could be accessed without a subscription, 95% CI = [82%, 90%]), especially among student-edited journals (100% accessibility). Few articles stated that a study's data are available (19%, 95% CI = [15%, 23%]), and only about half of those datasets are reportedly available without contacting the author. Preregistration (3%, 95% CI = [1%, 5%]) and availability of analytic scripts (6%, 95% CI = [4%, 9%]) were very uncommon. We suggest that empirical legal researchers and the journals that publish their work cultivate norms and practices to encourage research credibility.
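    The intervals reported above are confidence intervals for simple proportions out of the 300 sampled articles. As a rough illustration of how such an interval can be computed, the sketch below uses a Wilson score interval for the data-availability estimate; the exact article counts and the interval method the authors used are assumptions for this example, not details taken from the paper.

    from math import sqrt

    def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
        # Wilson score confidence interval for a binomial proportion.
        p = successes / n
        denom = 1 + z**2 / n
        centre = (p + z**2 / (2 * n)) / denom
        half_width = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return centre - half_width, centre + half_width

    # Illustrative only: roughly 19% of 300 articles (about 57) stated that
    # data are available; the exact count is assumed here for the example.
    low, high = wilson_ci(57, 300)
    print(f"estimate 19%, 95% CI ~ [{low:.0%}, {high:.0%}]")

    Run as written, this prints an interval of roughly [15%, 24%], close to the [15%, 23%] reported in the abstract; a plain normal-approximation interval gives nearly the same range at this sample size.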

    Replicability, Robustness, and Reproducibility in Psychological Science

    Replication—an important, uncommon, and misunderstood practice—is gaining appreciation in psychology. Achieving replicability is important for making research progress. If findings are not replicable, then prediction and theory development are stifled. If findings are replicable, then interrogation of their meaning and validity can advance knowledge. Assessing replicability can be productive for generating and testing hypotheses by actively confronting current understandings to identify weaknesses and spur innovation. For psychology, the 2010s might be characterized as a decade of active confrontation. Systematic and multi-site replication projects assessed current understandings and observed surprising failures to replicate many published findings. Replication efforts highlighted sociocultural challenges such as disincentives to conduct replications and a tendency to frame replication as a personal attack rather than a healthy scientific practice, and they raised awareness that replication contributes to self-correction. Nevertheless, innovation in doing and understanding replication and its cousins, reproducibility and robustness, has positioned psychology to improve research practices and accelerate progress.