Group characteristics of musical experience in adult musicians and musically trained children.
Whole brain activation during task-switching (bivalent switches and reconfigurations > univalent switches) in (A) musically trained (p<0.05 corrected), (B) musically untrained (p<0.05 corrected), and (C) two-sample comparison of musically trained over untrained children (p<0.005 uncorrected).
<p>Note: activation is displayed with the FSL radiological convention.</p>
Cross-modal shifting task (fMRI).
<p>In each trial, a cue (arrow, circle, or triangle) representing a rule was followed by a sound. Children responded with a left or right button press (arrow: horse = right, dog = left; circle: frog = right, bird = left; triangle: bird = right, frog = left). Critically, the arrow rule maps consistently onto single auditory stimuli (univalent rule), whereas for the circle and triangle rules the auditory stimulus-response mapping changes with the visual cue (bivalent rules).</p>
Whole brain activation during rule representation (all bivalent > all univalent rule trials) in (A) musically trained (p<0.05 corrected), (B) musically untrained (p<0.05 corrected), and (C) two-sample comparison of musically trained over untrained children (p<0.005 uncorrected).
<p>Note: activation is displayed with the FSL radiological convention.</p>
Whole brain activation for musically trained and untrained children separately (one sample t-test) and two-sample t-test comparison (musically trained > untrained) during rule representation (contrast: all bivalent > all univalent rule trials).
<p>Coordinates in MNI space; gray matter activations significant at p<0.05 with a cluster threshold of >50 voxels for the musically trained and untrained groups separately; p<0.005 uncorrected threshold for the two-sample t-test.</p>
Whole brain activation for musically trained and untrained children separately (one sample t-test) and two-sample t-test comparison (musically trained > untrained) during task-switching (contrast: bivalent switches and reconfigurations > univalent switches).
<p>Coordinates in MNI space; gray matter activations significant at p<0.05 with a cluster threshold of >50 voxels for the musically trained and untrained groups separately; p<0.005 uncorrected threshold for the two-sample t-test.</p>
Mean contrast of parameter estimates (COPE) values extracted from the ROI analyses of musically trained compared with untrained children in bilateral SMA (BiSMA) and right VLPFC (RVLPFC) for the rule representation contrast (bivalent > univalent rule trials); * indicates significance at the p<0.05 threshold.
sj-docx-1-ldx-10.1177_00222194231157722 – Supplemental material for Sequence Processing in Music Predicts Reading Skills in Young Readers: A Longitudinal Study
<p>Supplemental material, sj-docx-1-ldx-10.1177_00222194231157722 for Sequence Processing in Music Predicts Reading Skills in Young Readers: A Longitudinal Study by Paulo E. Andrade, Daniel Müllensiefen, Olga V. C. A. Andrade, Jade Dunstan, Jennifer Zuk and Nadine Gaab in Journal of Learning Disabilities</p>
Weighted mean parameter estimates in right and left STS during voice or speech-sound directed information processing.
<p>Weighted mean parameter estimates extracted from regions of interest (<i>in blue</i>: right and <i>in orange</i>: left anterior STS) when focusing on speaker voice (‘VM>Rest’), on speech sounds (‘FSM>Rest’), when focusing more on the initial speech sounds of spoken object words than on speaker voice (‘FSM>VM’), and when focusing more on speaker voice than on the initial speech sounds of spoken object words (‘VM>FSM’; significantly stronger activation in right than left anterior STS, p = 0.036). The weighted mean parameter estimates extracted from the right (rSTS) and left anterior STS (lSTS) regions of interest are summarized below the bar graph.</p>
Neuronal activation patterns during voice or speech-sound directed information processing.
<p>Cerebral regions activated when attending to (<b>A</b>) the speaker’s voice (‘VM>Rest’) or (<b>B</b>) speech sounds (‘FSM>Rest’). Brain regions activated when attending more to the speech sounds of spoken object words than to the speaker’s voice (<b>C</b>; ‘FSM>VM’) and more to the speaker’s voice than to the speech sounds (<b>D</b>; ‘VM>FSM’) (p<0.005; k = 10).</p>