    Functional and spatial segregation within the inferior frontal and superior temporal cortices during listening, articulation imagery, and production of vowels

    Classical models of language localize speech perception in the left superior temporal cortex and production in the inferior frontal cortex. Nonetheless, neuropsychological, structural and functional studies have questioned this subdivision, suggesting an interwoven organization of speech function within these cortices. We tested whether sub-regions within frontal and temporal speech-related areas retain specific phonological representations during both perception and production. Using functional magnetic resonance imaging and multivoxel pattern analysis, we showed functional and spatial segregation across the left fronto-temporal cortex during listening, imagery and production of vowels. In accordance with classical models of language and evidence from functional studies, the inferior frontal and superior temporal cortices discriminated among perceived and produced vowels, respectively, while also engaging in the non-classical, alternative function, i.e. perception in the inferior frontal and production in the superior temporal cortex. Crucially, though, contiguous and non-overlapping sub-regions within these hubs performed either the classical or the non-classical function, the latter also representing non-linguistic sounds (i.e., pure tones). Extending previous results and in line with integration theories, our findings not only demonstrate that sensitivity to speech listening exists in production-related regions and vice versa, but also suggest that this interwoven organization is built upon low-level perception.
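    The multivoxel pattern analysis step described here lends itself to a short illustration. Below is a minimal sketch of cross-validated decoding of vowel identity from trial-by-voxel response patterns; all names, dimensions, and data are invented placeholders and do not reproduce the study's actual pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 500                    # hypothetical dimensions
X = rng.standard_normal((n_trials, n_voxels))    # trial-by-voxel response patterns
y = rng.integers(0, 3, n_trials)                 # vowel labels, e.g. /a/, /i/, /u/
runs = np.repeat(np.arange(6), 20)               # scanner run of each trial

# Leave-one-run-out cross-validation keeps training and test runs independent.
scores = cross_val_score(LinearSVC(), X, y, groups=runs, cv=LeaveOneGroupOut())
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.33)")
```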

    Multivariate pattern analysis of input and output representations of speech

    Repeating a word or nonword requires a speaker to map auditory representations of incoming sounds onto learned speech items, maintain those items in short-term memory, interface that representation with the motor output system, and articulate the target sounds. This dissertation seeks to clarify the nature and neuroanatomical localization of speech sound representations in perception and production through multivariate analysis of neuroimaging data. The major portion of this dissertation describes two experiments that used functional magnetic resonance imaging (fMRI) to measure responses to the perception and overt production of syllables, and multivariate pattern analysis to localize brain areas containing the associated phonological/phonetic information. The first experiment used a delayed repetition task to permit response estimation for auditory syllable presentation (input) and overt production (output) in individual trials. In input responses, clusters sensitive to vowel identity were found in the left inferior frontal sulcus (IFs), while clusters responsive to syllable identity were found in left ventral premotor cortex and left mid superior temporal sulcus (STs). Output-linked responses revealed clusters of vowel information bilaterally in mid/posterior STs. The second experiment was designed to dissociate the phonological content of the auditory stimulus from that of the vocal target. Subjects were visually presented with two (non)word syllables simultaneously, then aurally presented with one of them. A visual cue instructed subjects either to repeat the heard syllable (repeat trials) or to produce the unheard, visually presented syllable (change trials). Results suggest that both IFs and STs represent heard syllables; on change trials, representations in frontal areas, but not in STs, are updated to reflect the vocal target. Vowel identity covaries with formant frequencies, inviting the question of whether lower-level, auditory representations can support vowel classification in fMRI. The final portion of this work describes a simulation study in which artificial fMRI datasets were constructed to mimic the overall design of Experiment 1, with voxels assumed to contain either discrete (categorical) or analog (frequency-based) vowel representations. The accuracy of classification models was characterized by type of representation and by the density and strength of responsive voxels. Classification proved more sensitive to sparse, discrete representations than to dense analog representations.
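    The simulation study summarized at the end of this abstract can be sketched in a few lines: build artificial voxel responses that carry vowel information either discretely (category-tuned voxels) or in analog form (responses scaling with a formant frequency), then compare classifier accuracy. Everything below (voxel counts, formant values, noise level, sparsity) is an invented illustration, not the dissertation's actual simulation code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels, noise = 180, 200, 1.0
vowels = rng.integers(0, 3, n_trials)
f1 = np.array([750.0, 300.0, 350.0])[vowels] / 1000.0  # rough F1 per vowel, kHz

def simulate(kind):
    X = rng.standard_normal((n_trials, n_voxels)) * noise
    responsive = rng.random(n_voxels) < 0.1            # sparse informative voxels
    for v in np.where(responsive)[0]:
        if kind == "discrete":
            X[vowels == rng.integers(0, 3), v] += 1.0  # category-tuned response
        else:
            X[:, v] += f1                              # frequency-scaled response
    return X

for kind in ("discrete", "analog"):
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          simulate(kind), vowels, cv=5).mean()
    print(kind, f"accuracy: {acc:.2f}")
```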

    Formant Space Reconstruction From Brain Activity in Frontal and Temporal Regions Coding for Heard Vowels

    Classical studies have isolated a distributed network of temporal and frontal areas engaged in the neural representation of speech perception and production. With modern literature arguing against unique roles for these cortical regions, different theories have favored either neural code-sharing or cortical space-sharing, thus trying to explain the intertwined spatial and functional organization of motor and acoustic components across the fronto-temporal cortical network. In this context, the focus of attention has recently shifted toward specific model fitting, aimed at reconstructing motor and/or acoustic spaces from brain activity within the language network. Here, we tested a model based on acoustic properties (formants) and one based on motor properties (articulation parameters), in regions where model-free decoding of evoked fMRI activity during perception, imagery, and production of vowels had previously been successful. Results revealed that phonological information organizes around formant structure during the perception of vowels; interestingly, such a model was reconstructed in a broad temporal region outside the primary auditory cortex, but also in the pars triangularis of the left inferior frontal gyrus. Conversely, articulatory features were not associated with brain activity in these regions. Overall, our results point to a degree of interdependence, based on acoustic information, between the frontal and temporal ends of the language network.
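    As a rough illustration of the model-fitting logic, the sketch below reconstructs a vowel's position in formant space (F1, F2) from simulated voxel activity using cross-validated ridge regression and scores the fit against the true formants. The formant values and data are placeholders, and the study's actual estimation procedure may well differ.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
formants = np.array([[750, 1200], [300, 2300], [350, 800]], float)  # /a/,/i/,/u/
labels = rng.integers(0, 3, 150)
Y = formants[labels] / 1000.0                # targets: F1, F2 in kHz
W = rng.standard_normal((2, 400))            # latent voxel encoding weights
X = Y @ W + rng.standard_normal((150, 400))  # simulated voxel patterns

# Predict formant coordinates for held-out trials, then correlate with truth.
Y_hat = cross_val_predict(Ridge(alpha=10.0), X, Y, cv=5)
for i, name in enumerate(("F1", "F2")):
    r = np.corrcoef(Y[:, i], Y_hat[:, i])[0, 1]
    print(f"{name} reconstruction r = {r:.2f}")
```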

    Vocal Tract Images Reveal Neural Representations of Sensorimotor Transformation During Speech Imitation

    Imitating speech necessitates the transformation from sensory targets to vocal tract motor output, yet little is known about the representational basis of this process in the human brain. Here, we address this question by using real-time MR imaging (rtMRI) of the vocal tract and functional MRI (fMRI) of the brain in a speech imitation paradigm. Participants trained on imitating a native vowel and a similar nonnative vowel that required lip rounding. Later, participants imitated these vowels and an untrained vowel pair during separate fMRI and rtMRI runs. Univariate fMRI analyses revealed that regions including the left inferior frontal gyrus were more active during sensorimotor transformation (ST) and production of nonnative vowels than of native vowels; further, ST for nonnative vowels activated somatomotor cortex bilaterally, compared with ST for native vowels. Using representational similarity analysis (RSA) with test models constructed from participants' vocal tract images and from stimulus formant distances, we found that searchlight analyses of the fMRI data showed that either type of model could be represented in somatomotor, temporal, cerebellar, and hippocampal neural activation patterns during ST. We thus provide the first evidence of widespread and robust cortical and subcortical neural representation of vocal tract and/or formant parameters during prearticulatory ST.
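    The representational similarity logic can be illustrated compactly: correlate a model dissimilarity matrix built from stimulus formant distances with a neural dissimilarity matrix built from activation patterns. The sketch below uses invented formants and patterns and stands in for, rather than reproduces, the searchlight analysis.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
formants = np.array([[750, 1200], [300, 2300], [350, 800], [500, 1500]], float)
patterns = rng.standard_normal((4, 300))           # one mean pattern per vowel

model_rdm = pdist(formants, metric="euclidean")    # formant-distance model
neural_rdm = pdist(patterns, metric="correlation") # 1 - pattern correlation

# Rank correlation between the two condensed dissimilarity vectors.
rho, p = spearmanr(model_rdm, neural_rdm)
print(f"model-neural RDM correlation: rho = {rho:.2f}, p = {p:.2f}")
```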

    Mechanisms of auditory signal decoding in the progressive aphasias

    The primary progressive aphasias (PPA) are a diverse group of neurodegenerative disorders that selectively target the brain networks mediating language. The pathophysiology of PPA remains poorly understood, but emerging evidence suggests that deficits in auditory processing accompany, and may precede, language symptoms in these patients. In four studies, I probed the pathophysiology of auditory signal decoding in patient cohorts representing all major PPA syndromes – nonfluent variant PPA (nfvPPA), semantic variant PPA (svPPA), and logopenic variant PPA (lvPPA) – in relation to healthy age-matched controls. In my first experiment, I presented sequences of spoken syllables manipulated for temporal regularity, spectrotemporal structure and entropy. I used voxel-based morphometry to define critical brain substrates for the processing of these attributes, identifying correlates of behavioural performance within a cortico-subcortical network extending beyond canonical language areas. In my second experiment, I used activation functional magnetic resonance imaging (fMRI) with the same stimuli. I identified network signatures of particular signal attributes: nfvPPA was associated with reduced activity in anterior cingulate for processing temporal irregularity; lvPPA with reduced activation of posterior superior temporal cortex for processing spectrotemporal structure; and svPPA with reduced activation of caudate and anterior cingulate for processing signal entropy. In my third experiment, I manipulated the auditory feedback through which participants heard their own voices during speech production. Healthy control participants spoke significantly less fluently under delayed auditory feedback, whereas patients with nfvPPA and lvPPA were affected significantly less. In my final experiment, I probed residual capacity for dynamic auditory signal processing and perceptual learning in PPA using sinewave speech. Patients with nfvPPA and lvPPA showed severely attenuated perceptual learning of the degraded stimuli, while patients with svPPA showed intact early perceptual processing but deficient integration of semantic knowledge. Together, these experiments represent the most concerted and comprehensive attempt to date to define the pathophysiology of auditory signal decoding in PPA.
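    The delayed auditory feedback manipulation in the third experiment has a simple computational core: the speaker's voice is played back shifted by a fixed delay (values around 200 ms are typical in this literature). The sketch below applies such a delay offline to a NumPy signal; a real experiment would use a low-latency audio stream, and the delay value and waveform here are illustrative assumptions.

```python
import numpy as np

def delayed_feedback(signal: np.ndarray, sr: int, delay_ms: float) -> np.ndarray:
    """Return the signal shifted by delay_ms, zero-padded at the start."""
    shift = int(sr * delay_ms / 1000.0)
    return np.concatenate([np.zeros(shift), signal])[: len(signal)]

sr = 16_000
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 150 * t)  # stand-in for a recorded speech waveform
feedback = delayed_feedback(voice, sr, delay_ms=200.0)
```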

    Using non-invasive stimulation of the undamaged brain to guide the identification of lesion sites that predict language outcome after stroke

    Disrupting neural activity in the left anterior supramarginal gyrus (SMG) or the opercular part of the left inferior frontal gyrus (pOp) with repetitive transcranial magnetic stimulation (TMS) has been demonstrated to cause a transient slowing of response times during phonologically more than semantically demanding tasks. Likewise, a wealth of functional magnetic resonance imaging (fMRI) studies has shown increased activation in SMG and/or pOp for phonological relative to semantic processing. Here I set out to investigate whether, and how frequently, stroke damage to SMG and/or pOp results in persistent phonological processing impairments in a large sample of 262 right-handed English-speaking adults who were tested at least 1 year after a left-hemisphere stroke. In Experiment I, I compared the effect of damage to different parts of SMG and pOp that were defined by regions of interest from either TMS or fMRI studies of phonological processing in neurologically normal individuals. I found that the incidence of phonological processing impairments was predicted significantly better by the presence or absence of damage to SMG and pOp regions defined by TMS studies than by SMG and pOp regions defined by fMRI studies. Moreover, the discriminatory power (for segregating patients with and without phonological impairments) of the TMS sites was not improved further when combined with the fMRI sites. In Experiment II, I adapted the borders of the TMS SMG and pOp regions to include the surrounding grey and white matter where the presence or absence of stroke damage was consistently associated with the presence or absence of phonological processing impairments. The presence or absence of damage to these new TMS-guided regions explained the incidence of phonological impairments better than the original TMS regions, even in a new sample of patients that was entirely independent of the region identification process. In Experiment III, I showed that damage to the TMS-guided regions accounted for the incidence of phonological impairments substantially better than damage to an alternative set of regions derived from voxel-based lesion-deficit mapping techniques, which search the whole brain for areas that are most frequently damaged in those with phonological impairments. However, the best classification accuracy was observed when the analysis took into account a combination of regions from the TMS-guided and voxel-based lesion-deficit mapping approaches. In Experiment IV, I investigated the nature of the functional impairment caused by SMG or pOp lesions and found that damage to either region impaired covert and overt phonological processing abilities more than semantic processing abilities, as predicted by prior TMS and fMRI studies of neurologically normal subjects. Finally, the behavioural effects of damage were remarkably similar (i.e. no statistically significant differences) for the two TMS-guided sites (i.e. pOp and SMG). In conclusion, the fact that damage to the TMS-guided SMG and pOp regions impaired phonological processing abilities years after stroke onset suggests that these regions are critical for accurate phonological processing (both overt and covert) and that other brain areas cannot typically fully compensate for the contribution these regions make to language processing.
    More broadly, the results illustrate how non-invasive stimulation of the undamaged brain can be used to guide the identification of regions where brain damage is likely to cause persistent behavioural effects. By combining these regions of interest with those derived from other lesion-deficit mapping approaches, I was able to explain not only the presence but also the absence of phonological processing impairments in a large cohort of patients.
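    The region-comparison logic of Experiments I-III can be sketched as a cross-validated classification problem: score how well binary indicators of damage to each candidate region set predict the presence of a phonological impairment. Patients, lesion indicators, and outcomes below are simulated placeholders; only the cohort size is taken from the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 262                               # cohort size from the abstract
damage_tms = rng.random(n) < 0.3      # damage to TMS-guided regions (simulated)
damage_fmri = rng.random(n) < 0.3     # damage to fMRI-defined regions (simulated)
# Simulated outcome: impairment mostly follows damage to the TMS-guided sites.
impaired = (damage_tms & (rng.random(n) < 0.8)) | (rng.random(n) < 0.1)

candidates = {
    "TMS-guided": damage_tms[:, None].astype(float),
    "fMRI-defined": damage_fmri[:, None].astype(float),
    "combined": np.column_stack([damage_tms, damage_fmri]).astype(float),
}
for name, X in candidates.items():
    acc = cross_val_score(LogisticRegression(), X, impaired, cv=5).mean()
    print(f"{name}: accuracy = {acc:.2f}")
```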

    The role of language and sensorimotor information in memory for concepts

    The linguistic-simulation approach to conceptual representations has been investigated for some time, but the role of language and sensorimotor information in memory for objects and words, both short- and long-term, has not been examined in detail. In the present thesis, I look at the interplay of sensorimotor and linguistic information in conceptual knowledge and examine which aspects of concepts are represented in memory tasks. I also aim to establish the role of consciously accessing conceptual information in word recognition and memory. The thesis includes three self-contained papers which show that the conceptual system relies on linguistic or sensorimotor information according to task demands. In the paper in Chapter 4, I examined the linguistic bootstrapping hypothesis, which postulates that linguistic labels can serve as placeholders for complex sensorimotor representations. I tested the capacity of working memory for object concepts, using an articulatory suppression task to block access to language. I found that working memory capacity for contextually related object concepts, when participants rely on sensorimotor information, is higher than the traditionally reported capacity of 3-4 items for simple shapes or colours. Additionally, when linguistic labels are available to deputise for complex sensorimotor information, the capacity increases further, by up to two object concepts. In Chapters 5 and 6, I examined the content of conceptual information stored in long-term memory, and the role of sensorimotor simulation and consciously available information in word recognition and word memory. The studies revealed that consciously generated imagery is not reliably measured and, moreover, does not contribute to word recognition in a consistent manner. Some of the effects of imageability found in the literature can be explained or subsumed by sensorimotor information, which is not fully available through conscious awareness. However, conscious imagery may be a useful strategy to support word memory when trying to explicitly remember words. The thesis demonstrates that both linguistic labels and sensorimotor information contribute to memory for concepts. The way a concept is represented varies across tasks depending on task demands. Linguistic information is used to circumvent resource capacity limits, while sensorimotor information guides behaviour by providing more detailed information about the meaning of concepts and our previous experience with them.
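    Working memory capacity estimates of this kind are conventionally derived from change-detection performance, often via Cowan's K = N * (H - FA), with set size N, hit rate H, and false-alarm rate FA. Whether this thesis used exactly this estimator is an assumption; the sketch and numbers below are illustrative only.

```python
def cowans_k(set_size: int, hit_rate: float, false_alarm_rate: float) -> float:
    """Estimate working memory capacity (Cowan's K) from change detection."""
    return set_size * (hit_rate - false_alarm_rate)

# e.g. six-item displays, 80% hits, 15% false alarms -> K of roughly 3.9 items,
# in the vicinity of the traditionally reported 3-4 item capacity.
print(cowans_k(6, 0.80, 0.15))
```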