Early metacognitive abilities: The interplay of monitoring and control processes in 5- to 7-year-old children
The goal of the current investigation was to compare two monitoring processes (judgments of learning [JOLs] and confidence judgments [CJs]) and their corresponding control processes (allocation of study time and selection of answers to maximize accuracy, respectively) in 5- to 7-year-old children (N = 101). Children learned the meaning of Japanese characters and provided JOLs after a study phase and CJs after a memory test. They were given the opportunity to control their learning in self-paced study phases, and to control their accuracy by placing correct answers into a treasure chest and incorrect answers into a trash can. All three age groups gave significantly higher CJs for correct compared to incorrect answers, with no age-related differences in the magnitude of this difference, suggesting robust metacognitive monitoring skills in children as young as 5. Furthermore, a link between JOLs and study time was found in the 6- and 7-year-olds, such that children spent more time studying items with low JOLs compared to items with high JOLs. Also, 6- and 7-year-olds but not 5-year-olds spent more time studying difficult items compared to easier items. Moreover, age-related improvements were found in children's use of CJs to guide their selection of answers: although children as young as 5 placed their most confident answers in the treasure chest and least confident answers in the trash can, this pattern was more robust in older children. Overall, results support the view that some metacognitive judgments may be acted upon with greater ease than others among young children.
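The central monitoring finding above (higher CJs for correct than for incorrect answers) is often quantified as the mean confidence difference between correct and incorrect responses. A minimal sketch, using hypothetical data rather than the study's, might look like this:

```python
# Illustrative sketch with made-up data (not the study's dataset).
# Monitoring accuracy is summarized as the mean confidence judgment (CJ)
# for correct answers minus the mean CJ for incorrect answers; a positive
# value means confidence tracks accuracy.
def cj_discrimination(cjs, correct):
    """cjs: list of confidence ratings; correct: parallel list of bools."""
    hit = [c for c, ok in zip(cjs, correct) if ok]
    miss = [c for c, ok in zip(cjs, correct) if not ok]
    return sum(hit) / len(hit) - sum(miss) / len(miss)

# Hypothetical child on a 1-5 confidence scale:
cjs = [5, 4, 2, 5, 1, 3]
correct = [True, True, False, True, False, False]
print(cj_discrimination(cjs, correct))  # positive difference: CJs track accuracy
```

The same difference score can be computed per age group to test whether the correct-incorrect gap changes with age, as the study does.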
Abnormalities of Visual Processing and Frontostriatal Systems in Body Dysmorphic Disorder
Effects of cranial electrotherapy stimulation on resting state brain activity.
Cranial electrotherapy stimulation (CES) is a U.S. Food and Drug Administration (FDA)-approved treatment for insomnia, depression, and anxiety consisting of pulsed, low-intensity current applied to the earlobes or scalp. Despite empirical evidence of clinical efficacy, its mechanism of action is largely unknown. The goal was to characterize the acute effects of CES on resting state brain activity. Our primary hypothesis was that CES would result in deactivation in cortical and subcortical regions. Eleven healthy controls were administered CES applied to the earlobes at subsensory thresholds while being scanned with functional magnetic resonance imaging in the resting state. We tested 0.5- and 100-Hz stimulation, using blocks of 22 sec "on" alternating with 22 sec of baseline (device was "off"). The primary outcome measure was differences in blood oxygen level dependent data associated with the device being on versus baseline. The secondary outcome measures were the effects of stimulation on connectivity within the default mode, sensorimotor, and fronto-parietal networks. Both 0.5- and 100-Hz stimulation resulted in significant deactivation in midline frontal and parietal regions. 100-Hz stimulation was associated with both increases and decreases in connectivity within the default mode network (DMN). Results suggest that CES causes cortical brain deactivation, with a similar pattern for high- and low-frequency stimulation, and alters connectivity in the DMN. These effects may result from interference from high- or low-frequency noise. Small perturbations of brain oscillations may therefore have significant effects on normal resting state brain activity. These results provide insight into the mechanism of action of CES, and may assist in the future development of optimal parameters for effective treatment.
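The on-versus-baseline contrast described above rests on a block design: 22 s of stimulation alternating with 22 s of baseline, modeled as a boxcar regressor at the scanner's sampling rate. A minimal sketch of that timing (the block count and TR are illustrative assumptions; the abstract does not report them, and convolution with a hemodynamic response function is omitted):

```python
# Sketch of the 22 s on / 22 s off block timing from the study design.
# n_blocks and tr are assumed values for illustration only; in a real
# analysis this boxcar would be convolved with an HRF before entering
# the GLM that contrasts "device on" against baseline.
def boxcar(n_blocks=5, block_s=22, tr=2.0):
    """Return a per-TR regressor: 1.0 while the device is on, 0.0 at baseline."""
    per_block = int(block_s / tr)  # TRs per 22 s block
    cycle = [1.0] * per_block + [0.0] * per_block
    return cycle * n_blocks

reg = boxcar()
print(len(reg), sum(reg))  # total TRs, and how many fall in "on" blocks
```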
Analytic reproducibility in articles receiving open data badges at the journal Psychological Science: An observational study
For any scientific report, repeating the original analyses upon the original data should yield the original outcomes. We evaluated analytic reproducibility in 25 Psychological Science articles awarded open data badges between 2014 and 2015. Initially, 16 (64%, 95% confidence interval [43,81]) articles contained at least one 'major numerical discrepancy' (>10% difference) prompting us to request input from original authors. Ultimately, target values were reproducible without author involvement for 9 (36% [20,59]) articles; reproducible with author involvement for 6 (24% [8,47]) articles; not fully reproducible with no substantive author response for 3 (12% [0,35]) articles; and not fully reproducible despite author involvement for 7 (28% [12,51]) articles. Overall, 37 major numerical discrepancies remained out of 789 checked values (5% [3,6]), but original conclusions did not appear affected. Non-reproducibility was primarily caused by unclear reporting of analytic procedures. These results highlight that open data alone is not sufficient to ensure analytic reproducibility.
Funding: T.E.H.'s contribution was enabled by a general support grant awarded to the Meta-Research Innovation Center at Stanford (METRICS) from the Laura and John Arnold Foundation and a grant from the Einstein Foundation and Stiftung Charité awarded to the Meta-Research Innovation Center Berlin (METRIC-B).
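The abstract above attaches 95% confidence intervals to binomial proportions such as 16 of 25 articles (64% [43,81]). The abstract does not state which interval method the authors used; as an illustration only, here is the Wilson score interval, one common choice for proportions at small n:

```python
import math

# Wilson score 95% CI for a binomial proportion -- an illustrative method,
# not necessarily the one used in the article. Applied here to the
# "16 of 25 articles with a major discrepancy" figure from the abstract.
def wilson_ci(k, n, z=1.959963985):
    """Return (low, high) bounds of the Wilson score interval for k successes in n trials."""
    p = k / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

lo, hi = wilson_ci(16, 25)
print(f"{lo:.0%} to {hi:.0%}")
```

The Wilson interval is preferred over the naive Wald interval at small samples because it never extends outside [0, 1] and has better coverage near the boundaries.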