
    Many Labs 5: Testing Pre-Data-Collection Peer Review as an Intervention to Increase Replicability

    Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3–9; median total sample = 1,279.5, range = 276–3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00–.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19–.50).
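    The headline comparison above is plain arithmetic on correlations, and the distinction between the average per-study reduction and the ratio of medians matters, since the two need not agree. The Python sketch below illustrates the calculation with made-up (original r, replication r) pairs; the numbers are placeholders, not the Many Labs 5 data, and the paper's preregistered analyses are considerably more involved.

```python
import math

# Hypothetical (original_r, replication_r) pairs -- placeholder values only,
# NOT the Many Labs 5 data; they only illustrate the arithmetic behind
# statements like "replication effects were 78% smaller on average".
pairs = [(0.50, 0.10), (0.37, 0.07), (0.19, 0.05)]

def fisher_z(r):
    """Fisher z transform; variance-stabilizes correlations before averaging."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inv_fisher_z(z):
    """Back-transform a Fisher z value to a correlation."""
    return math.tanh(z)

# Average each column on the z scale, then back-transform.
orig_pooled = inv_fisher_z(sum(fisher_z(r) for r, _ in pairs) / len(pairs))
repl_pooled = inv_fisher_z(sum(fisher_z(r) for _, r in pairs) / len(pairs))

# Per-study percent reduction, then the mean of those reductions --
# in general NOT the same quantity as a comparison of pooled or median values.
reductions = [1 - repl / orig for orig, repl in pairs]

print(f"pooled original r = {orig_pooled:.2f}, pooled replication r = {repl_pooled:.2f}")
print(f"mean per-study reduction = {100 * sum(reductions) / len(reductions):.0f}%")
```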

    Unwanted effects of X-rays in surface grafted copper(II) organometallics and copper exchanged zeolites, how they manifest, and what can be done about them

    Copper(II)-containing materials are widely studied for a very diverse array of applications, from biology through catalysis to many other materials-chemistry applications. We show that, for grafted copper compounds at the surface of silica, and for the study of the selective conversion of methane to methanol using copper ion-exchanged zeolites, the application of focused X-ray beams for spectroscopic investigations is subject to significant challenges. We demonstrate how unwanted effects due to the X-rays manifest, which can prevent the study of certain types of reactive systems and/or lead to the derivation of results that are not at all representative of the behavior of the materials in question. With reference to identical studies conducted at a beamline that does not focus its X-rays, we then delineate how the total photon throughput and the brilliance of the applied X-rays affect the apparent behavior of copper in zeolites during the stepwise, high-temperature, aerobic activation approach to the selective conversion of methane to methanol. We show that the use of increasingly brilliant X-ray sources for X-ray spectroscopy can bring with it significant caveats to obtaining valid and quantitative structure–reactivity relationships (QSARs) and kinetics for this class of material. Lastly, through a systematic study of these effects, we suggest ways to ensure that valuable allocations of X-ray beam time result in measurements that reflect the real nature of the chemistry under study and not that due to other, extraneous, factors.
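    To make the link between photon throughput, focusing, and radiation load concrete: a rough absorbed-dose-rate estimate divides the deposited power by the irradiated mass, so shrinking the spot at fixed flux raises the dose rate quadratically with the inverse spot size. The sketch below uses illustrative placeholder values throughout (flux, spot size, density, and attenuation length are assumptions, not figures from the paper).

```python
import math

# Back-of-the-envelope absorbed-dose-rate estimate for a focused X-ray spot
# on a powder sample. All numbers are illustrative placeholders, not taken
# from the paper or any specific beamline.
flux = 1e12                        # photons / s delivered into the spot
energy_keV = 9.0                   # photon energy near the Cu K-edge
spot_w, spot_h = 100e-6, 100e-6    # focused spot dimensions in metres
thickness = 100e-6                 # illuminated sample thickness in metres
density = 1.5e3                    # effective sample density, kg / m^3
attenuation_len = 150e-6           # assumed 1/e attenuation length in metres

# Fraction of incident photons absorbed within the illuminated thickness.
absorbed_frac = 1 - math.exp(-thickness / attenuation_len)

energy_J = energy_keV * 1e3 * 1.602e-19           # photon energy in joules
power_absorbed = flux * energy_J * absorbed_frac  # watts deposited in sample
mass = spot_w * spot_h * thickness * density      # irradiated mass in kg

dose_rate = power_absorbed / mass                 # gray per second (J / kg / s)
print(f"absorbed fraction ~ {absorbed_frac:.2f}")
print(f"dose rate ~ {dose_rate:.2e} Gy/s")
```

Under these assumptions, defocusing the same flux into a spot ten times larger in each dimension cuts the estimated dose rate a hundredfold, which is roughly the kind of comparison the abstract draws against an unfocused beamline.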

    Many Labs 5: Registered Multisite Replication of the Tempting-Fate Effects in Risen and Gilovich (2008)

    Risen and Gilovich (2008) found that subjects believed that “tempting fate” would be punished with ironic bad outcomes (a main effect), and that this effect was magnified when subjects were under cognitive load (an interaction). A previous replication study (Frank & Mathur, 2016) that used an online implementation of the protocol on Amazon Mechanical Turk failed to replicate both the main effect and the interaction. Before this replication was run, the authors of the original study expressed concern that the cognitive-load manipulation may be less effective when implemented online than when implemented in the lab and that subjects recruited online may also respond differently to the specific experimental scenario chosen for the replication. A later, large replication project, Many Labs 2 (Klein et al., 2018), replicated the main effect (though the effect size was smaller than in the original study), but the interaction was not assessed. Attempting to replicate the interaction while addressing the original authors’ concerns regarding the protocol for the first replication study, we developed a new protocol in collaboration with the original authors. We used four university sites (N = 754) chosen for similarity to the site of the original study to conduct a high-powered, preregistered replication focused primarily on the interaction effect. Results from these sites did not support the interaction or the main effect and were comparable to results obtained at six additional universities that were less similar to the original site. Post hoc analyses did not provide strong evidence for statistical inconsistency between the original study’s estimates and our estimates; that is, the original study’s results would not have been extremely unlikely in the estimated distribution of population effects in our sites. We also collected data from a new Mechanical Turk sample under the first replication study’s protocol, and results were not meaningfully different from those obtained with the new protocol at universities similar to the original site. Secondary analyses failed to support proposed substantive mechanisms for the failure to replicate.
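    The post hoc consistency check mentioned above asks whether the original estimate would be surprising if it came from the distribution of effects implied by the replications. Below is a minimal Python sketch of that idea under a simple normal model, with placeholder numbers; these are not the paper's estimates, and this is not necessarily the exact metric the authors used.

```python
from math import sqrt
from statistics import NormalDist

# Placeholder inputs, not the paper's estimates: pooled replication effect
# (mu_hat), estimated between-site SD (tau), and the original study's point
# estimate and standard error, all on a common (e.g., Fisher z) scale.
mu_hat, tau = 0.02, 0.10
orig_est, orig_se = 0.30, 0.15

# Under a simple normal model, an original estimate consistent with the
# replications is a draw from N(mu_hat, tau^2 + orig_se^2).
sd = sqrt(tau**2 + orig_se**2)
z = (orig_est - mu_hat) / sd

# Two-tailed probability of an estimate at least this extreme; a value that
# is not tiny means the original result is not strongly inconsistent with
# the replication evidence.
p_orig = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"z = {z:.2f}, P(estimate at least this extreme) = {p_orig:.3f}")
```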

    One-pot access to a privileged library of six-membered nitrogenous heterocycles through a multi-component cascade approach


    Saturated, divalent, monobasic acids (oxyacids) [original title in German: Gesättigte, zweiwertige, einbasische Säuren (Oxysäuren)]
