12 research outputs found

    Web-CDI: A system for online administration of the MacArthur-Bates Communicative Development Inventories

    Understanding the mechanisms that drive variation in children’s language acquisition requires large, population-representative datasets of children’s word learning across development. Parent report measures such as the MacArthur-Bates Communicative Development Inventories (CDI) are commonly used to collect such data, but the traditional paper-based forms make the curation of large datasets logistically challenging. Many CDI datasets are thus gathered using convenience samples, often recruited from communities in proximity to major research institutions. Here, we introduce Web-CDI, a web-based tool which allows researchers to collect CDI data online. Web-CDI contains functionality to collect and manage longitudinal data, share links to test administrations, and download vocabulary scores. To date, over 3,500 valid Web-CDI administrations have been completed. General trends found in past norming studies of the CDI are present in data collected from Web-CDI: scores of children’s productive vocabulary grow with age, female children show a slightly faster rate of vocabulary growth, and participants with higher levels of educational attainment report slightly higher vocabulary production scores than those with lower levels of educational attainment. We also report results from an effort to oversample non-white, lower-education participants via online recruitment (N = 241). These data showed similar demographic trends to the full sample, but this effort resulted in a high exclusion rate. We conclude by discussing implications and challenges for the collection of large, population-representative datasets.

    Quantifying Sources of Variability in Infancy Research Using the Infant-Directed-Speech Preference

    Psychological scientists have become increasingly concerned with issues related to methodology and replicability, and infancy researchers in particular face specific challenges related to replicability: For example, high-powered studies are difficult to conduct, testing conditions vary across labs, and different labs have access to different infant populations. Addressing these concerns, we report on a large-scale, multisite study aimed at (a) assessing the overall replicability of a single theoretically important phenomenon and (b) examining methodological, cultural, and developmental moderators. We focus on infants’ preference for infant-directed speech (IDS) over adult-directed speech (ADS). Stimuli of mothers speaking to their infants and to an adult in North American English were created using seminaturalistic laboratory-based audio recordings. Infants’ relative preference for IDS and ADS was assessed across 67 laboratories in North America, Europe, Australia, and Asia using the three common methods for measuring infants’ discrimination (head-turn preference, central fixation, and eye tracking). The overall meta-analytic effect size (Cohen’s d) was 0.35, 95% confidence interval = [0.29, 0.42], which was reliably above zero but smaller than the meta-analytic mean computed from previous literature (0.67). The IDS preference was significantly stronger in older children, in those children for whom the stimuli matched their native language and dialect, and in data from labs using the head-turn preference procedure. Together, these findings replicate the IDS preference but suggest that its magnitude is modulated by development, native-language experience, and testing procedure.

    Many Labs 5: Testing Pre-Data-Collection Peer Review as an Intervention to Increase Replicability

    Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3–9; median total sample = 1,279.5, range = 276–3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00–.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19–.50).
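The effect-size aggregation reported above can be sketched as a simple fixed-effect meta-analysis of correlations using the standard Fisher z transform. The function and the input values below are illustrative assumptions for exposition, not the study's actual data or code.

```python
import math

def meta_analytic_r(rs, ns):
    """Fixed-effect meta-analytic mean of correlations via Fisher's z.

    Each correlation r is z-transformed, weighted by n - 3 (the inverse
    variance of z), averaged, and the result is back-transformed to r.
    """
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]  # Fisher z transform
    ws = [n - 3 for n in ns]                              # inverse-variance weights
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)  # weighted mean on z scale
    return math.tanh(z_bar)                               # back-transform z -> r

# Hypothetical lab-level correlations and sample sizes (not the study's data):
print(round(meta_analytic_r([0.04, 0.05, 0.11], [300, 500, 400]), 3))
```

Because the z transform is nearly linear for small correlations, the pooled estimate for effects of this magnitude is close to a simple n-weighted average of the raw correlations.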

    Preliminary genetic analysis of wool-shedding ability in sheep

    Self-shedding breeds are an appealing choice for prime lamb producers who want to eliminate the need for shearing. Shedding occurs naturally in spring, when the whole fleece or a significant portion of it is shed. Assessed through visual shedding scores, wool-shedding ability has exhibited moderate to strong genetic variation in UK and American flocks (Pollott 2011; Matika et al. 2013; Vargas Jurado et al. 2020). This study aims to conduct a preliminary genetic analysis of wool-shedding ability in an Australian composite flock.
