11 research outputs found

    Early metacognitive abilities: The interplay of monitoring and control processes in 5- to 7-year-old children

    No full text
    The goal of the current investigation was to compare two monitoring processes (judgments of learning [JOLs] and confidence judgments [CJs]) and their corresponding control processes (allocation of study time and selection of answers to maximize accuracy, respectively) in 5- to 7-year-old children (N = 101). Children learned the meaning of Japanese characters and provided JOLs after a study phase and CJs after a memory test. They were given the opportunity to control their learning in self-paced study phases, and to control their accuracy by placing correct answers into a treasure chest and incorrect answers into a trash can. All three age groups gave significantly higher CJs for correct than for incorrect answers, with no age-related differences in the magnitude of this difference, suggesting robust metacognitive monitoring skills in children as young as 5. Furthermore, a link between JOLs and study time was found in the 6- and 7-year-olds, such that children spent more time studying items with low JOLs than items with high JOLs. Also, 6- and 7-year-olds, but not 5-year-olds, spent more time studying difficult items than easier items. Moreover, age-related improvements were found in children's use of CJs to guide their selection of answers: although children as young as 5 placed their most confident answers in the treasure chest and their least confident answers in the trash can, this pattern was more robust in older children. Overall, the results support the view that some metacognitive judgments may be acted upon with greater ease than others among young children.

    Analytic reproducibility in articles receiving open data badges at the journal Psychological Science: An observational study

    Get PDF
    For any scientific report, repeating the original analyses upon the original data should yield the original outcomes. We evaluated analytic reproducibility in 25 Psychological Science articles awarded open data badges between 2014 and 2015. Initially, 16 (64%, 95% confidence interval [43,81]) articles contained at least one ‘major numerical discrepancy’ (>10% difference), prompting us to request input from the original authors. Ultimately, target values were reproducible without author involvement for 9 (36% [20,59]) articles; reproducible with author involvement for 6 (24% [8,47]) articles; not fully reproducible with no substantive author response for 3 (12% [0,35]) articles; and not fully reproducible despite author involvement for 7 (28% [12,51]) articles. Overall, 37 major numerical discrepancies remained out of 789 checked values (5% [3,6]), but the original conclusions did not appear to be affected. Non-reproducibility was primarily caused by unclear reporting of analytic procedures. These results highlight that open data alone is not sufficient to ensure analytic reproducibility. Funding: T.E.H.'s contribution was enabled by a general support grant awarded to the Meta-Research Innovation Center at Stanford (METRICS) from the Laura and John Arnold Foundation and a grant from the Einstein Foundation and Stiftung Charité awarded to the Meta-Research Innovation Center Berlin (METRIC-B).
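    The bracketed values in this abstract are 95% confidence intervals for simple proportions (for example, 16 of 25 articles is reported as 64% [43,81], and 37 of 789 checked values as 5% [3,6]). The abstract does not say which interval method was used; the short Python sketch below is only an illustration, using a Clopper-Pearson exact binomial interval (via scipy.stats.beta), which happens to give intervals close to those reported. The helper function name and the choice of method are assumptions, not taken from the article.

    # Illustrative only: exact (Clopper-Pearson) 95% CI for a binomial proportion.
    # The interval method is an assumption; the original article may have used another.
    from scipy.stats import beta

    def clopper_pearson(successes, n, alpha=0.05):
        """Exact two-sided confidence interval for a proportion (hypothetical helper)."""
        lower = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
        upper = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
        return lower, upper

    # 16 of 25 articles with at least one major discrepancy, reported as 64% [43,81]
    lo, hi = clopper_pearson(16, 25)
    print(f"{16 / 25:.0%} [{lo:.0%}, {hi:.0%}]")   # roughly 64% [43%, 81%]

    # 37 remaining major discrepancies out of 789 checked values, reported as 5% [3,6]
    lo, hi = clopper_pearson(37, 789)
    print(f"{37 / 789:.0%} [{lo:.0%}, {hi:.0%}]")  # roughly 5% [3%, 6%]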
