4 research outputs found

    Many Labs 5: Testing pre-data-collection peer review as an intervention to increase replicability

    Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3–9; median total sample = 1,279.5, range = 276–3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00–.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19–.50).
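    A quick arithmetic check of the headline reduction, using only the medians reported in the abstract (note that the 78% figure is the average reduction across the 10 individual studies, so it need not equal the ratio of the medians):

        reduction = 1 - (median r, replications) / (median r, originals)
                  = 1 - .07 / .37
                  ≈ .81 (about 81% smaller; the reported per-study average is 78%)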

    The trustworthiness of the cumulative knowledge in industrial/organizational psychology: The current state of affairs and a path forward

    The goal of industrial/organizational (IO) psychology is to build and organize trustworthy knowledge about people-related phenomena in the workplace. Unfortunately, as with other scientific disciplines, our discipline may be experiencing a “crisis of confidence” stemming from the lack of reproducibility and replicability of many of our field's research findings, which would suggest that much of our research may be untrustworthy. If a scientific discipline's research is deemed untrustworthy, it can have dire consequences, including the withdrawal of funding for future research. In this focal article, we review the current state of reproducibility and replicability in IO psychology and related fields. As part of this review, we discuss factors that make it less likely that research findings will be trustworthy, including the prevalence of scientific misconduct, questionable research practices (QRPs), and errors. We then identify some root causes of these issues and provide several potential remedies. In particular, we highlight the need for improved research methods and statistics training as well as a realignment of the incentive structure in academia. To accomplish this, we advocate for changes in the reward structure, improvements to the peer review process, and the implementation of open science practices. Overall, addressing the current “crisis of confidence” in IO psychology requires individual researchers, academic institutions, and publishers to embrace system-wide change.

    How well are open science practices implemented in industrial and organizational psychology and management?

    To address the low reproducibility and replicability of research, Open Science Practices (OSPs) have been developed. Yet, despite increasing awareness of their potential benefits, implementation has been limited. As journals can act as gatekeepers for scientific discoveries, a tendency not to mention OSPs on their websites may help to explain this implementation gap. Therefore, we examined the implementation of OSPs and potential barriers in industrial and organizational psychology and management (IOP/management) journals. Study 1 examined whether and how N = 257 journal websites referred to OSPs. We found that most journals did not mention OSPs. Specifically, only two (1.0%), five (2.5%), and 14 (6.9%) IOP/management journals mentioned preregistration, mentioned registered reports, and explicitly welcomed replications, respectively. Study 2 investigated perceived barriers to implementing OSPs with a survey among editors of the IOP/management journals from Study 1. Among the 40 responding editors, 14, 10, and five attributed the lack of implementation of OSPs to the lower suitability of OSPs for qualitative research, a lack of authority, and a lack of familiarity with OSPs, respectively. Based on our findings, the implementation gap could be mitigated by developing new and refining extant OSPs, starting bottom-up initiatives (e.g., researchers directly contacting publishers), and increasing the availability of information on OSPs.

    Many Labs 5: Registered Replication of Vohs and Schooler (2008), Experiment 1

    Does convincing people that free will is an illusion reduce their sense of personal responsibility? Vohs and Schooler (2008) found that participants reading from a passage "debunking" free will cheated more on experimental tasks than did those reading from a control passage, an effect mediated by decreased belief in free will. However, this finding was not replicated by Embley, Johnson, and Giner-Sorolla (2015), who found that reading arguments against free will had no effect on cheating in their sample. The present study investigated whether hard-to-understand arguments against free will and a low-reliability measure of free-will beliefs account for Embley et al.'s failure to replicate Vohs and Schooler's results. Participants (N = 621) were randomly assigned to participate in either a close replication of Vohs and Schooler's Experiment 1 based on the materials of Embley et al. or a revised protocol, which used an easier-to-understand free-will-belief manipulation and an improved instrument to measure free-will beliefs. We found that the revisions did not matter. Although the revised measure of belief in free will had better reliability than the original measure, an analysis of the data from the two protocols combined indicated that free-will beliefs were unchanged by the manipulations, d = 0.064, 95% confidence interval = [−0.087, 0.22], and in the focal test, there were no differences in cheating behavior between conditions, d = 0.076, 95% CI = [−0.082, 0.22]. We found that expressed free-will beliefs did not mediate the link between the free-will-belief manipulation and cheating, and in exploratory follow-up analyses, we found that participants expressing lower beliefs in free will were not more likely to cheat in our task.
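    To make the reported statistic concrete, the sketch below shows one standard way to compute Cohen's d with an approximate 95% confidence interval for a two-group comparison such as the focal cheating test. This is an illustrative reconstruction, not the authors' analysis code; the data, group sizes, and function name are hypothetical placeholders.

        # Minimal sketch (hypothetical data, not the study's): Cohen's d with an
        # approximate 95% CI for two independent groups.
        import numpy as np
        from scipy import stats

        def cohens_d_ci(x, y, alpha=0.05):
            """Cohen's d and a normal-approximation (1 - alpha) CI."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            nx, ny = len(x), len(y)
            # Pooled standard deviation from the two sample variances.
            sp = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                         / (nx + ny - 2))
            d = (x.mean() - y.mean()) / sp
            # Large-sample standard error of d.
            se = np.sqrt((nx + ny) / (nx * ny) + d**2 / (2 * (nx + ny)))
            z = stats.norm.ppf(1 - alpha / 2)
            return d, (d - z * se, d + z * se)

        # Hypothetical example with two conditions totaling roughly N = 621,
        # mirroring the study's sample size.
        rng = np.random.default_rng(0)
        anti_free_will = rng.normal(0.50, 1.0, 310)  # "debunking" condition
        control = rng.normal(0.45, 1.0, 311)         # control condition
        d, (lo, hi) = cohens_d_ci(anti_free_will, control)
        print(f"d = {d:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")

    For scale, a d of about 0.08 with a confidence interval spanning zero, as reported above, corresponds to a between-condition difference of well under a tenth of a pooled standard deviation.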