38 research outputs found

    Data from a pre-publication independent replication initiative examining ten moral judgement effects

    We present the data from a crowdsourced project seeking to replicate findings in independent laboratories before (rather than after) they are published. In this Pre-Publication Independent Replication (PPIR) initiative, 25 research groups attempted to replicate 10 moral judgment effects from a single laboratory's research pipeline of unpublished findings. The 10 effects were investigated using online/lab surveys containing psychological manipulations (vignettes) followed by questionnaires. Results revealed a mix of reliable, unreliable, and culturally moderated findings. Unlike any previous replication project, this dataset includes data not only from the replications but also from the original studies, creating a unique corpus that researchers can use to better understand reproducibility and irreproducibility in science.

    The pipeline project: Pre-publication independent replications of a single laboratory's research pipeline

    This crowdsourced project introduces a collaborative approach to improving the reproducibility of scientific research, in which findings are replicated in qualified independent laboratories before (rather than after) they are published. Our goal is to establish a non-adversarial replication process with highly informative final results. To illustrate the Pre-Publication Independent Replication (PPIR) approach, 25 research groups conducted replications of all ten moral judgment effects that the last author and his collaborators had “in the pipeline” as of August 2014. Six findings replicated according to all replication criteria, one finding replicated but with a significantly smaller effect size than the original, one finding replicated consistently in the original culture but not outside of it, and two findings did not replicate. In total, 40% of the original findings failed at least one major replication criterion. Potential ways to implement and incentivize pre-publication independent replication on a large scale are discussed.

    The Psychological Science Accelerator: Advancing Psychology Through a Distributed Collaborative Network

    Source at https://doi.org/10.1177/2515245918797607.
    Concerns about the veracity of psychological research have been growing. Many findings in psychological science are based on studies with insufficient statistical power and nonrepresentative samples, or may otherwise be limited to specific, ungeneralizable settings or populations. Crowdsourced research, a type of large-scale collaboration in which one or more research projects are conducted across multiple lab sites, offers a pragmatic solution to these and other current methodological challenges. The Psychological Science Accelerator (PSA) is a distributed network of laboratories designed to enable and support crowdsourced research projects. These projects can focus on novel research questions or replicate prior research in large, diverse samples. The PSA’s mission is to accelerate the accumulation of reliable and generalizable evidence in psychological science. Here, we describe the background, structure, principles, procedures, benefits, and challenges of the PSA. In contrast to other crowdsourced research networks, the PSA is ongoing (as opposed to time limited), efficient (in that structures and principles are reused for different projects), decentralized, diverse (in both subjects and researchers), and inclusive (of proposals, contributions, and other relevant input from anyone inside or outside the network). The PSA and other approaches to crowdsourced psychological science will advance understanding of mental processes and behaviors by enabling rigorous research and systematic examination of its generalizability.

    Many Labs 5: Testing pre-data-collection peer review as an intervention to increase replicability

    Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3–9; median total sample = 1,279.5, range = 276–3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00–.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19–.50).
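
    As a quick arithmetic check on the shrinkage figure above, here is a minimal Python sketch using only the medians quoted in this abstract; note that the paper's 78% figure averages shrinkage across the ten individual effects, so it differs slightly from this median-based value.

        # Effect-size shrinkage computed from the medians reported above.
        r_original = 0.37     # median original effect size
        r_replication = 0.07  # median across the three replication attempts
        shrinkage = 1 - r_replication / r_original
        print(f"shrinkage from medians: {shrinkage:.0%}")  # prints 81%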

    Crowdsourcing hypothesis tests: Making transparent how design choices shape research results

    To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from two separate large samples (total N > 15,000) were then randomly assigned to complete one version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: materials from different teams rendered statistically significant effects in opposite directions for four out of five hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for two hypotheses, and a lack of support for three hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, while considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim.
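
    To make the variance-attribution claim concrete, here is a toy Python sketch (the numbers are hypothetical, not the study's data): effect-size estimates d[team, hypothesis] are simulated so that hypotheses differ substantially while teams barely do, and the share of variance carried by each factor is read off the group means.

        import numpy as np

        rng = np.random.default_rng(0)
        n_teams, n_hyps = 15, 5                        # mirrors the study's layout
        hyp_effect = rng.normal(0.0, 0.30, n_hyps)     # hypotheses vary a lot (hypothetical)
        team_skill = rng.normal(0.0, 0.02, n_teams)    # teams vary very little (hypothetical)
        d = hyp_effect + team_skill[:, None] + rng.normal(0.0, 0.10, (n_teams, n_hyps))

        total_var = d.var()
        hyp_share = d.mean(axis=0).var() / total_var   # variance of per-hypothesis means
        team_share = d.mean(axis=1).var() / total_var  # variance of per-team means
        print(f"hypothesis share: {hyp_share:.0%}, team share: {team_share:.0%}")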

    A global experiment on motivating social distancing during the COVID-19 pandemic

    Get PDF
    Finding communication strategies that effectively motivate social distancing continues to be a global public health priority during the COVID-19 pandemic. This cross-country, preregistered experiment (n = 25,718 from 89 countries) tested hypotheses concerning generalizable positive and negative outcomes of social distancing messages that promoted personal agency and reflective choices (i.e., an autonomy-supportive message) or were restrictive and shaming (i.e., a controlling message) compared with no message at all. Results partially supported experimental hypotheses in that the controlling message increased controlled motivation (a poorly internalized form of motivation relying on shame, guilt, and fear of social consequences) relative to no message. On the other hand, the autonomy-supportive message lowered feelings of defiance compared with the controlling message, but the controlling message did not differ from receiving no message at all. Unexpectedly, messages did not influence autonomous motivation (a highly internalized form of motivation relying on one’s core values) or behavioral intentions. Results supported hypothesized associations between people’s existing autonomous and controlled motivations and self-reported behavioral intentions to engage in social distancing. Controlled motivation was associated with more defiance and less long-term behavioral intention to engage in social distancing, whereas autonomous motivation was associated with less defiance and more short- and long-term intentions to social distance. Overall, this work highlights the potential harm of using shaming and pressuring language in public health communication, with implications for the current and future global health challenges.

    The Psychological Science Accelerator’s COVID-19 rapid-response dataset

    Get PDF
    In response to the COVID-19 pandemic, the Psychological Science Accelerator coordinated three large-scale psychological studies to examine the effects of loss-gain framing, cognitive reappraisals, and autonomy framing manipulations on behavioral intentions and affective measures. The data collected (April to October 2020) included specific measures for each experimental study, a general questionnaire examining health prevention behaviors and COVID-19 experience, geographical and cultural context characterization, and demographic information for each participant. Each participant started the study with the same general questions and then was randomized to complete either one longer experiment or two shorter experiments. Data were provided by 73,223 participants with varying completion rates. Participants completed the survey from 111 geopolitical regions in 44 unique languages/dialects. The anonymized dataset described here is provided in both raw and processed formats to facilitate re-use and further analyses. The dataset offers secondary analytic opportunities to explore coping, framing, and self-determination across a diverse, global sample obtained at the onset of the COVID-19 pandemic, which can be merged with other time-sampled or geographic data.
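
    Because the data are released in both raw and processed formats, re-use can be as simple as the pandas sketch below; the file and column names here are hypothetical placeholders, and the real ones are documented in the dataset's codebook.

        import pandas as pd

        # Load the processed per-participant table (hypothetical filename).
        responses = pd.read_csv("psacr_processed.csv")
        # Country-level, time-sampled context to merge on (hypothetical filename).
        context = pd.read_csv("country_context_by_date.csv")

        # Attach regional context to each participant for secondary analyses.
        merged = responses.merge(context, on=["country", "date"], how="left")
        print(merged.shape)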

    To which world regions does the valence–dominance model of social perception apply?

    Over the past 10 years, Oosterhof and Todorov’s valence–dominance model has emerged as the most prominent account of how people evaluate faces on social dimensions. In this model, two dimensions (valence and dominance) underpin social judgements of faces. Because this model has primarily been developed and tested in Western regions, it is unclear whether these findings apply to other regions. We addressed this question by replicating Oosterhof and Todorov’s methodology across 11 world regions, 41 countries and 11,570 participants. When we used Oosterhof and Todorov’s original analysis strategy, the valence–dominance model generalized across regions. When we used an alternative methodology to allow for correlated dimensions, we observed much less generalization. Collectively, these results suggest that, while the valence–dominance model generalizes very well across regions when dimensions are forced to be orthogonal, regional differences are revealed when we use different extraction methods and correlate and rotate the dimension reduction solution.

    Funding and acknowledgements: C.L. was supported by the Vienna Science and Technology Fund (WWTF VRG13-007); L.M.D. was supported by ERC 647910 (KINSHIP); D.I.B. and N.I. received funding from CONICET, Argentina; L.K., F.K. and Á. Putz were supported by the European Social Fund (EFOP-3.6.1.-16-2016-00004; ‘Comprehensive Development for Implementing Smart Specialization Strategies at the University of Pécs’). K.U. and E. Vergauwe were supported by a grant from the Swiss National Science Foundation (PZ00P1_154911 to E. Vergauwe). T.G. is supported by the Social Sciences and Humanities Research Council of Canada (SSHRC). M.A.V. was supported by grants 2016-T1/SOC-1395 (Comunidad de Madrid) and PSI2017-85159-P (AEI/FEDER UE). K.B. was supported by a grant from the National Science Centre, Poland (number 2015/19/D/HS6/00641). J. Bonick and J.W.L. were supported by the Joep Lange Institute. G.B. was supported by the Slovak Research and Development Agency (APVV-17-0418). H.I.J. and E.S. were supported by a French National Research Agency ‘Investissements d’Avenir’ programme grant (ANR-15-IDEX-02). T.D.G. was supported by an Australian Government Research Training Program Scholarship. The Raipur Group is thankful to: (1) the University Grants Commission, New Delhi, India for the research grants received through its SAP-DRS (Phase-III) scheme sanctioned to the School of Studies in Life Science; and (2) the Center for Translational Chronobiology at the School of Studies in Life Science, PRSU, Raipur, India for providing logistical support. K. Ask was supported by a small grant from the Department of Psychology, University of Gothenburg. Y.Q. was supported by grants from the Beijing Natural Science Foundation (5184035) and CAS Key Laboratory of Behavioral Science, Institute of Psychology. N.A.C. was supported by the National Science Foundation Graduate Research Fellowship (R010138018). We acknowledge the following research assistants: J. Muriithi and J. Ngugi (United States International University Africa); E. Adamo, D. Cafaro, V. Ciambrone, F. Dolce and E. Tolomeo (Magna Græcia University of Catanzaro); E. De Stefano (University of Padova); S. A. Escobar Abadia (University of Lincoln); L. E. Grimstad (Norwegian School of Economics (NHH)); L. C. Zamora (Franklin and Marshall College); R. E. Liang and R. C. Lo (Universiti Tunku Abdul Rahman); A. Short and L. Allen (Massey University, New Zealand); A. Ateş, E. Güneş and S. Can Özdemir (Boğaziçi University); I. Pedersen and T. Roos (Åbo Akademi University); N. Paetz (Escuela de Comunicación Mónica Herrera); J. Green (University of Gothenburg); M. Krainz (University of Vienna, Austria); and B. Todorova (University of Vienna, Austria). The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript.
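
    A minimal Python sketch of the two analysis strategies contrasted above, assuming a participants-by-traits matrix of mean face ratings (the matrix below is a random placeholder, not the study's data, so the printed correlation will be near zero; with real ratings the oblique solution can reveal correlated dimensions):

        import numpy as np
        from sklearn.decomposition import PCA
        from factor_analyzer import FactorAnalyzer

        X = np.random.default_rng(1).normal(size=(300, 13))   # placeholder ratings matrix

        pca = PCA(n_components=2).fit(X)                      # dimensions forced orthogonal
        fa = FactorAnalyzer(n_factors=2, rotation="oblimin")  # oblique: dimensions may correlate
        fa.fit(X)

        scores = fa.transform(X)                              # per-participant factor scores
        print("factor correlation:", np.corrcoef(scores.T)[0, 1])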