84 research outputs found
Sensitivity of human auditory cortex to rapid frequency modulation revealed by multivariate representational similarity analysis.
Functional magnetic resonance imaging (fMRI) was used to investigate the extent, magnitude, and pattern of brain activity in response to rapid frequency-modulated sounds. We examined this by manipulating the direction (rise vs. fall) and the rate (fast vs. slow) of the apparent pitch of iterated rippled noise (IRN) bursts. Acoustic parameters were selected to capture features used in phoneme contrasts; however, the stimuli themselves were not perceived as speech per se. Participants were scanned as they passively listened to sounds in an event-related paradigm. Univariate analyses revealed a greater level and extent of activation in bilateral auditory cortex in response to frequency-modulated sweeps compared to steady-state sounds. This effect was stronger in the left hemisphere. However, no regions showed selectivity for either rate or direction of frequency modulation. In contrast, multivoxel pattern analysis (MVPA) revealed feature-specific encoding for direction of modulation in auditory cortex bilaterally. Moreover, this effect was strongest when analyses were restricted to anatomical regions lying outside Heschl's gyrus. We found no support for feature-specific encoding of frequency modulation rate. The differential findings for modulation rate and direction of modulation are discussed with respect to their relevance to phonetic discrimination.
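A minimal sketch of the kind of pattern-classification analysis an MVPA of this sort involves, under stated assumptions: `patterns` stands in for trial-wise voxel responses from auditory cortex and `direction` for the rising/falling sweep labels; the data are placeholders, and the classifier and cross-validation scheme are illustrative choices, not the authors' pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200
patterns = rng.normal(size=(n_trials, n_voxels))   # placeholder trial-by-voxel response estimates
direction = rng.integers(0, 2, size=n_trials)      # 0 = falling sweep, 1 = rising sweep (placeholder labels)

# Decode FM direction from the spatial pattern of activity; above-chance
# cross-validated accuracy indicates feature-specific encoding even when the
# mean (univariate) response does not differ between conditions.
clf = LinearSVC(max_iter=10000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, patterns, direction, cv=cv)
print(f"Mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```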
Eyetracking of coarticulatory cue responses in children and adults
Prior work suggests that listeners are sensitive to coarticulatory cues during spoken word recognition; however, little is known about how this ability develops in children. In the present study, children and adults listened to words containing congruent and incongruent coarticulatory cues while looking at a two-picture display. We manipulated the congruency of the auditory coarticulatory information such that the initial phoneme of the auditory stimulus either matched the target or instead matched the distractor picture. Accordingly, we observed both slower rates of looks to the target and higher rates of looks to the distractor on incongruent trials, indicating that both children and adults were sensitive to coarticulatory congruency. These findings suggest that children maintain detailed phonological representations of words, and may use coarticulatory information to facilitate spoken word recognition.
Specific language impairment in children: Phonology, semantics and the English past tense
ABSTRACT: Theories of specific language impairment (SLI) in children turn on whether this deficit stems from a grammar-specific impairment or a more general speech-processing deficit. This issue parallels a more general question in cognitive neuroscience concerning the brain bases of linguistic rules. This more general debate frequently focuses on past-tense verbs, specifically whether regular verbs (bake-baked) are encoded as rules and whether irregular forms (take-took) are processed differently. Children with SLI have difficulties with past tenses, so SLI could represent an impairment to rules. An alternative theory explains past-tense deficits in SLI as resulting from a phonological deficit. Evidence for this theory has been obtained from connectionist models of past-tense impairments and from behavioral studies of language- and reading-impaired children. The data suggest that SLI is not an impairment to linguistic rules, that past-tense impairments can be explained as resulting from a perceptual deficit, and that a single processing mechanism is ideally suited to account for these children's difficulties.
KEYWORDS: specific language impairment; connectionism; English past tense; speech perception
A key question in cognitive neuroscience concerns the neural mechanism by which humans encode the rules of language. The English past tense represents an interesting case of rule-like processes: although regular patterns (bake-baked, step-stepped) appear to be rule-like, English also has a number of irregular forms (take-took, sleep-slept) that conflict with the rule that the past tense is formed by adding -ed to the present tense. Irregular forms are problematic for a rule-based approach because they call into question whether rules alone are sufficient for explaining linguistic phenomena and whether a secondary mechanism is required for encoding these irregular forms. In 1986, Rumelhart and McClelland proposed a connectionist model in which both regular past tenses and exceptions were encoded within a single type of neural mechanism. The connectionist approach to cognitive neuroscience explains cognitive processes as arising from …
Learning unfamiliar words and perceiving non-native vowels in a second language: Insights from eye tracking
One of the challenges in second-language learning is learning unfamiliar word forms, especially when this involves novel phoneme contrasts. The present study examines how real-time processing of newly learned words and phonemes in a second language is impacted by the structure of learning (discrimination training), and whether completing the same task again after a 16–21 h delay favours subsequent word recognition. Specifically, using a visual world eye tracking paradigm, we assessed how English listeners processed newly learned words containing the non-native French front-rounded vowel [y] compared to native-sounding vowels, both immediately after training and the following day. Some learners were forced to discriminate between vowels that are perceptually similar for English listeners, [y]-[u], while others were not. We found significantly better word-level processing on a variety of indices after an overnight delay. We also found that training [y] words paired with [u] words (vs. [y]-Control pairs) led to a greater decrease in reaction times during the word recognition task over the two testing sessions. Discrimination training using perceptually similar sounds had facilitative effects on second language word learning with novel phonemic information, and real-time processing measures such as eye tracking provided valuable insights into how individuals learn words and phonemes in a second language.
Language dominance modulates the perception of Spanish approximants in late bilinguals
The ability to discriminate phonetically similar first language (L1) and second language (L2) sounds has significant consequences for achieving target-like proficiency in second-language learners. This study examines the L2 perception of Spanish approximants [β, ð, ɣ] in comparison with their voiced stop counterparts [b, d, g] by adult English-Spanish bilinguals. Of interest is how perceptual effects are modulated by factors related to language dominance, including proficiency, language history, attitudes, and L1/L2 use, as measured by the Bilingual Language Profile questionnaire. Perception of target phones was assessed in adult native Spanish speakers (n = 10) and Spanish learners (n = 23) of varying proficiency levels, via vowel-consonant-vowel (VCV) sequences featuring both Spanish approximants and voiced stops during an AX discrimination task. Results indicate a significant positive correlation between perceptual accuracy and a language dominance score. Findings further demonstrate a significant hierarchy of increasing perceptual difficulty: β < ð < ɣ. Through an examination of bilingual language dominance, composed of the combined effects of language history, use, proficiency, and attitudes, the present study contributes a more nuanced and complete account of the individual variables that affect L2 perception, reaching beyond proficiency and experience alone.
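A minimal sketch of how the reported accuracy-dominance relation could be computed, assuming invented placeholder values in place of the per-listener Bilingual Language Profile (BLP) scores and AX discrimination accuracies; it only illustrates the correlation step, not the study's full analysis.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder per-listener values (not the study's data): a BLP-style language
# dominance score and the proportion of correct approximant/stop discriminations.
dominance = np.array([-120, -80, -40, -10, 5, 30, 60, 90, 130, 170])
accuracy = np.array([0.52, 0.55, 0.61, 0.63, 0.66, 0.70, 0.74, 0.78, 0.85, 0.90])

# Positive r would indicate that more Spanish-dominant listeners discriminate
# the approximant/stop contrasts more accurately.
r, p = pearsonr(dominance, accuracy)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```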
Neural representations of phonology in temporal cortex scaffold longitudinal reading gains in 5- to 7-year-old children
The objective of this study was to investigate whether phonological processes measured through brain activation are crucial for the development of reading skill (i.e. scaffolding hypothesis) and/or whether learning to read words fine-tunes phonology in the brain (i.e. refinement hypothesis). We specifically looked at how different grain sizes in two brain regions implicated in phonological processing played a role in this bidirectional relation. According to the dual-stream model of speech processing and previous empirical studies, the posterior superior temporal gyrus (STG) appears to be a perceptual region associated with phonological representations, whereas the dorsal inferior frontal gyrus (IFG) appears to be an articulatory region that accesses phonological representations in STG during more difficult tasks. Thirty-six children completed a reading test outside the scanner and an auditory phonological task which included both small (i.e. onset) and large (i.e. rhyme) grain size conditions inside the scanner when they were 5.5–6.5 years old (Time 1) and once again approximately 1.5 years later (Time 2). To study the scaffolding hypothesis, a regression analysis was carried out by entering brain activation in either STG or IFG for either small (onset > perceptual) or large (rhyme > perceptual) grain size phonological processing at Time 1 as the predictors and reading skill at Time 2 as the dependent measure, with several covariates of no interest included. To study the refinement hypothesis, the regression analysis included reading skill at Time 1 as the predictor and brain activation in either STG or IFG for either small or large grain size phonological processing at Time 2 as the dependent measures, with several covariates of no interest included. We found that only posterior STG, regardless of grain size, was predictive of reading gains. Parallel models with only behavioral accuracy were not significant. Taken together, our results suggest that the representational quality of phonology in temporal cortex is crucial for reading development. Moreover, our study provides neural evidence supporting the scaffolding hypothesis, suggesting that brain measures of phonology could be helpful in early identification of reading difficulties.
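A minimal sketch of the scaffolding-style regression described above, assuming a hypothetical data frame with one row per child: Time-2 reading skill regressed on Time-1 posterior STG activation for one grain size, with a reduced, illustrative set of covariates. Column names and values are invented; the study's full covariate set and the parallel refinement model are not reproduced.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame with one row per child; all values are placeholders.
df = pd.DataFrame({
    "reading_T2":   [95, 102, 110, 88, 105, 99, 120, 91],
    "reading_T1":   [90, 100, 108, 85, 101, 97, 115, 89],
    "stg_rhyme_T1": [0.4, 0.9, 1.2, 0.1, 0.8, 0.5, 1.5, 0.2],  # rhyme > perceptual betas at Time 1
    "age_T1":       [5.6, 6.1, 6.3, 5.8, 6.0, 5.9, 6.4, 5.7],
})

# Scaffolding model: does Time-1 STG activation predict Time-2 reading skill
# over and above covariates of no interest? (Only two example covariates here.)
scaffold = smf.ols("reading_T2 ~ stg_rhyme_T1 + reading_T1 + age_T1", data=df).fit()
print(scaffold.summary())
```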
Motor control and nonword repetition in specific working memory impairment and SLI
PURPOSE: Debate around the underlying cognitive factors leading to poor performance in the repetition of nonwords by children with developmental impairments in language has centered on phonological short-term memory, lexical knowledge, and other factors. This study examines the impact of motor control demands on nonword repetition in groups of school children with specific impairments in language, working memory, or both. METHOD: Children repeated two lists of nonwords matched for motoric complexity either without constraint or with a gummi bear bite block held between their teeth. The bite block required motoric compensation to reorganize the motor plan for speech production. RESULTS: Overall, the effect of the biomechanical constraint was very small for all groups. When analyses focused only on the most complex nonwords, children with language impairment were found to be significantly more impaired in the motorically constrained nonword repetition task than the typically developing group. In contrast, working memory difficulties were not differentially linked to motor condition. CONCLUSIONS: These findings add to the growing evidence that there is a motoric component to developmental language disorders. The results also suggest that the role of speech motor skill in nonword repetition is relatively modest.
Letter fluency in 7-8-year-old children is related to the anterior, but not posterior, ventral occipito-temporal cortex during an auditory phonological task
Previous studies have shown that reading skill in 3- to 6-year-old children is related to the automatic activation of the posterior left ventral occipitotemporal cortex (vOT) during spoken language processing, whereas 8- to 15-year-old children and adult readers activate the anterior vOT. However, it is unknown how children between these two age groups automatically activate orthographic representations in vOT for spoken language. In the current study, we recruited 153 7- to 8-year-old children to fill the age gap from previous studies. Using functional magnetic resonance imaging (fMRI), we measured children's reading-related skills and brain activity during an auditory phonological task with both a small (i.e. onset) and a large (i.e. rhyme) grain size condition. We found that letter fluency, but not reading accuracy, was correlated with activation in the anterior vOT for the rhyme condition. There were no reading-related skill correlations for the posterior vOT or for activation during the onset condition in this age group. Our findings reveal that automatic activation in the anterior vOT during spoken language processing already occurs in higher skilled 7- to 8-year-old children. In addition, they suggest that increases in naming automaticity are the primary determinant of the engagement of vOT during phonological awareness tasks.
Reading skill related to left ventral occipitotemporal cortex during a phonological awareness task in 5- to 6-year-old children
The left ventral occipitotemporal cortex (vOT) is important in visual word recognition. Studies have shown that the left vOT is also engaged during spoken language processing in skilled readers, suggesting automatic access to corresponding orthographic information. However, little is known about where and how the left vOT is involved in the spoken language processing of young children with emerging reading ability. In order to answer this question, we examined the relation of reading ability in 5- to 6-year-old kindergarteners to the activation of vOT during an auditory phonological awareness task. Two experimental conditions, onset word pairs that shared the first phoneme and rhyme word pairs that shared the final biphone/triphone, were compared to allow a measurement of vOT's activation to small (i.e., onsets) and large (i.e., rhymes) grain sizes. We found that higher reading ability was associated with better accuracy in the onset, but not the rhyme, condition. In addition, higher reading ability was only associated with greater sensitivity in the posterior left vOT for the contrast of the onset versus rhyme condition. These results suggest that the acquisition of reading leads to greater specialization of the posterior vOT to smaller rather than larger grain sizes in young children.
Sign language ability in young deaf signers predicts comprehension of written sentences in English.
We investigated the robust correlation between American Sign Language (ASL) and English reading ability in 51 young deaf signers ages 7;3 to 19;0. Signers were divided into 'skilled' and 'less-skilled' signer groups based on their performance on three measures of ASL. We next assessed reading comprehension of four English sentence structures (actives, passives, pronouns, reflexive pronouns) using a sentence-to-picture-matching task. Of interest was the extent to which ASL proficiency provided a foundation for lexical and syntactic processes of English. Skilled signers outperformed less-skilled signers overall. Error analyses further indicated greater single-word recognition difficulties in less-skilled signers, marked by a higher rate of errors reflecting an inability to identify the actors and actions described in the sentence. Our findings provide evidence that increased ASL ability supports English sentence comprehension both at the level of individual words and at the level of syntax. This is consistent with the theory that first-language learning promotes second-language learning through the transfer of linguistic elements, irrespective of the transparency of the mapping of grammatical structures between the two languages.