
    Inferior Frontal Cortex Contributions to the Recognition of Spoken Words and Their Constituent Speech Sounds

    Speech perception and comprehension are often challenged by the need to recognize speech sounds that are degraded or ambiguous. Here, we explore the cognitive and neural mechanisms involved in resolving ambiguity in the identity of speech sounds using syllables that contain ambiguous phonetic segments (e.g., intermediate sounds between /b/ and /g/ as in "blade" and "glade"). We used an audio-morphing procedure to create a large set of natural sounding minimal pairs that contain phonetically ambiguous onset or offset consonants (differing in place, manner, or voicing). These ambiguous segments occurred in different lexical contexts (i.e., in words or pseudowords, such as blade-glade or blem-glem) and in different phonological environments (i.e., with neighboring syllables that differed in lexical status, such as blouse-glouse). These stimuli allowed us to explore the impact of phonetic ambiguity on the speed and accuracy of lexical decision responses (Experiment 1), semantic categorization responses (Experiment 2), and the magnitude of BOLD fMRI responses during attentive comprehension (Experiment 3). For both behavioral and neural measures, observed effects of phonetic ambiguity were influenced by lexical context, leading to slower responses and increased activity in the left inferior frontal gyrus for high-ambiguity syllables that distinguish pairs of words, but not for equivalent pseudowords. These findings suggest lexical involvement in the resolution of phonetic ambiguity. Implications for speech perception and the role of inferior frontal regions are discussed.

    Neurobiological systems for lexical representation and analysis in English.

    Current research suggests that language comprehension engages two joint but functionally distinguishable neurobiological processes: a distributed bilateral system, which supports general perceptual and interpretative processes underpinning speech comprehension, and a left hemisphere (LH) frontotemporal system, selectively tuned to the processing of combinatorial grammatical sequences, such as regularly inflected verbs in English [Marslen-Wilson, W. D., & Tyler, L. K. Morphology, language and the brain: The decompositional substrate for language comprehension. Philosophical Transactions of the Royal Society: Biological Sciences, 362, 823-836, 2007]. Here we investigated how English derivationally complex words engage these systems, asking whether they selectively activate the LH system in the same way as inflections or whether they primarily engage the bilateral system that supports nondecompositional access. In an fMRI study, we saw no evidence for selective activation of the LH frontotemporal system, even for highly transparent forms like bravely. Instead, a combination of univariate and multivariate analyses revealed the engagement of a distributed bilateral system, modulated by factors of perceptual complexity and semantic transparency. We discuss the implications for theories of the processing and representation of English derivational morphology and highlight the importance of neurobiological constraints in understanding these processes.

    Neural substrates of subphonemic variation and lexical competition in spoken word recognition

    In spoken word recognition, subphonemic variation influences lexical activation, with sounds near a category boundary increasing phonetic competition as well as lexical competition. The current study investigated the interplay of these factors using a visual world task in which participants were instructed to look at a picture of an auditory target (e.g. peacock). Eyetracking data indicated that participants were slowed when a voiced onset competitor (e.g. beaker) was also displayed, and this effect was amplified when acoustic-phonetic competition was increased. Simultaneously collected fMRI data showed that several brain regions were sensitive to the presence of the onset competitor, including the supramarginal, middle temporal, and inferior frontal gyri, and functional connectivity analyses revealed that the coordinated activity of left frontal regions depends on both acoustic-phonetic and lexical factors. Taken together, results suggest a role for frontal brain structures in resolving lexical competition, particularly as atypical acoustic-phonetic information maps on to the lexicon. Research was supported by National Institutes of Health (NIH) [grant number: R01 DC013064] to EBM and NIH NIDCD [grant number R01 DC006220] to SEB. SG was supported by the Spanish Ministry of Economy and Competitiveness through the Severo Ochoa Programme for Centres/Units of Excellence in R&D [SEV‐2015‐490]. The contents of this paper reflect the views of the authors and not those of the funding agencies.

    Written sentence context effects on acoustic-phonetic perception: fMRI reveals cross-modal semantic-perceptual interactions

    Available online 3 October 2019. This study examines cross-modality effects of a semantically-biased written sentence context on the perception of an acoustically-ambiguous word target, identifying neural areas sensitive to interactions between sentential bias and phonetic ambiguity. Of interest is whether the locus or nature of the interactions resembles those previously demonstrated for auditory-only effects. fMRI results show significant interaction effects in right mid-middle temporal gyrus (RmMTG) and bilateral anterior superior temporal gyri (aSTG), regions along the ventral language comprehension stream that map sound onto meaning. These regions are more anterior than those previously identified for auditory-only effects; however, the same cross-over interaction pattern emerged, implying similar underlying computations at play. The findings suggest that the mechanisms that integrate information across modality and across sentence and phonetic levels of processing recruit amodal areas where reading and spoken lexical and semantic access converge. Taken together, results support interactive accounts of speech and language processing. This work was supported in part by the National Institutes of Health, NIDCD grant RO1 DC006220.

    Semantic radical consistency and character transparency effects in Chinese: an ERP study

    BACKGROUND: This event-related potential (ERP) study aims to investigate the representation and temporal dynamics of Chinese orthography-to-semantics mappings by simultaneously manipulating character transparency and semantic radical consistency. Character components, referred to as radicals, make up the building blocks used dur...

    Neural representations used by brain regions underlying speech production

    Thesis (Ph.D.)--Boston University. Speech utterances are phoneme sequences but may not always be represented as such in the brain. For instance, electropalatography evidence indicates that as speaking rate increases, gestures within syllables are manipulated separately but those within consonant clusters act as one motor unit. Moreover, speech error data suggest that a syllable's phonological content is, at some stage, represented separately from its syllabic frame structure. These observations indicate that speech is neurally represented in multiple forms. This dissertation describes three studies exploring representations of speech used in different brain regions to produce speech. The first study investigated the motor units used to learn novel speech sequences. Subjects learned to produce a set of sequences with illegal consonant clusters (e.g. GVAZF) faster and more accurately than a similar novel set. Subjects then produced novel sequences that retained varying phonemic subsequences of previously learned sequences. Novel sequences were performed as quickly and accurately as learned sequences if they contained no novel consonant clusters, regardless of other phonemic content, implicating consonant clusters as important speech motor representations. The second study investigated the neural correlates of speech motor sequence learning. Functional magnetic resonance imaging (fMRI) revealed increased activity during novel sequence productions in brain regions traditionally associated with non-speech motor sequence learning - including the basal ganglia and premotor cortex - as well as regions associated with learning and updating speech motor representations based on sensory input - including the bilateral frontal operculum and left posterior superior temporal sulcus (pSTs). 
Behavioral learning measures correlated with increased response for novel sequences in the frontal operculum and with white matter integrity under the pSTs, implicating functional and structural connectivity of these regions in learning success.

    Affective iconic words benefit from additional sound–meaning integration in the left amygdala

    Recent studies have shown that a similarity between the sound and meaning of a word (i.e., iconicity) can help more readily access the meaning of that word, but the neural mechanisms underlying this beneficial role of iconicity in semantic processing remain largely unknown. In an fMRI study, we focused on the affective domain and examined whether affective iconic words (e.g., high arousal in both sound and meaning) activate additional brain regions that integrate emotional information from different domains (i.e., sound and meaning). In line with our hypothesis, affective iconic words, compared to their non‐iconic counterparts, elicited additional BOLD responses in the left amygdala, known for its role in multimodal representation of emotions. Functional connectivity analyses revealed that the observed amygdalar activity was modulated by an interaction of iconic condition and activations in two hubs representative for processing sound (left superior temporal gyrus) and meaning (left inferior frontal gyrus) of words. These results provide a neural explanation for the facilitative role of iconicity in language processing and indicate that language users are sensitive to the interaction between the sound and meaning aspects of words, suggesting the existence of iconicity as a general property of human language.

    Multimodal imaging of language perception

    This Thesis draws together several lines of research by examining language perception in the same individuals using three neuroimaging methods: magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), and electroencephalography (EEG). The MEG experiments conducted in this Thesis demonstrated that neural processing of written and spoken words converges to the superior temporal cortex following initial modality-specific analysis. In both reading and speech perception, the superior temporal cortex is involved in processing word meaning at ∼250-450 ms in the left hemisphere and after ∼450 ms bilaterally. The data thus support a view of a largely shared semantic system in auditory and visual language perception, in line with the assumption that reading acquisition makes use of the neural systems originally developed for speech perception during evolution and in individual language development. The MEG experiments on reading morphologically complex words showed that the left superior temporal activation was enhanced for the morphologically complex words at ∼200-700 ms. The results suggest that the majority of inflected words in the highly inflected Finnish language are represented in a decomposed form and that the decomposition process requires additional neural resources. Only very high-frequency inflected words may acquire full-form representations. The MEG results on parafoveal preview in reading indicated that neural processing of written words in the left hemisphere is affected by a preview of words in the right visual field. The underlying neural mechanism may facilitate reading of connected text in natural conditions. In a direct comparison, MEG and fMRI showed diverging activation patterns in a reading task although the same individuals were performing the same task. Based on the similarity of the EEG responses recorded simultaneously with both MEG and fMRI, the participants were performing the task similarly during the two recordings. 
The divergent MEG and fMRI results cannot be attributed to differences in the experimental procedures or language, since these factors were controlled. Rather, they are likely to reflect actual dissimilarities in the way neural activity in a high-level cognitive task is picked up by MEG evoked responses and fMRI signals.

    Emotional Prosody Processing in the Schizophrenia Spectrum.

    THESIS ABSTRACT: Emotional prosody processing impairment is proposed to be a main contributing factor in the formation of auditory verbal hallucinations in patients with schizophrenia. To evaluate this assumption, five experiments in healthy, highly schizotypal, and schizophrenia populations are presented. The first part of the thesis seeks to reveal the neural underpinnings of emotional prosody comprehension (EPC) in a non-clinical population as well as the modulation of prosodic abilities by hallucination traits. By revealing the brain representation of EPC, an overlap at the neural level between EPC and auditory verbal hallucinations (AVH) was strongly suggested. By assessing the influence of hallucinatory traits on EPC abilities, a continuum in the schizophrenia spectrum in which the high schizotypal population mirrors the neurocognitive profile of schizophrenia patients was established. Moreover, by studying the relation between AVH and EPC in a non-clinical population, potential confounding effects of medication influencing the findings were minimized. The second part of the thesis assessed two EPC-related abilities in schizophrenia patients with and without hallucinations. First, voice identity recognition, a skill which relies on the analysis of some of the same acoustical features as EPC, was evaluated in patients and controls. Finally, the last study presented in the current thesis assessed the influence that implicit processing of emotional prosody has on selective attention in patients and controls. Both patient studies demonstrate that voice identity recognition deficits, as well as abnormal modulation of selective attention by implicit emotional prosody, are related to hallucinations exclusively and not to schizophrenia in general. In the final discussion, a model in which EPC deficits are a crucial factor in the formation of AVH is evaluated. 
Experimental findings presented in the previous chapters strongly suggest that the perception of prosodic features is impaired in patients with AVH, resulting in aberrant perception of irrelevant auditory objects with emotional prosodic salience that capture the attention of the hearer and whose sources (speaker identity) cannot be recognized. Such impairments may be due to structural and functional abnormalities in a network which comprises the superior temporal gyrus as a central element.