53 research outputs found

    Recognizing Speech in a Novel Accent: The Motor Theory of Speech Perception Reframed

    The motor theory of speech perception holds that we perceive the speech of another in terms of a motor representation of that speech. However, when we have learned to recognize a foreign accent, it seems plausible that recognition of a word rarely involves reconstruction of the speech gestures of the speaker rather than the listener. To better assess the motor theory and this observation, we proceed in three stages. Part 1 places the motor theory of speech perception in a larger framework based on our earlier models for the adaptive formation of mirror neurons for grasping and for viewing extensions of that mirror system as part of a larger system for neuro-linguistic processing, augmented by the present consideration of recognizing speech in a novel accent. Part 2 then offers a novel computational model of how a listener comes to understand the speech of someone speaking the listener's native language with a foreign accent. The core tenet of the model is that the listener uses hypotheses about the word the speaker is currently uttering to update probabilities linking the sound produced by the speaker to phonemes in the native language repertoire of the listener. This, on average, improves the recognition of later words. This model is neutral regarding the nature of the representations it uses (motor vs. auditory). It serves as a reference point for the discussion in Part 3, which proposes a dual-stream neuro-linguistic architecture to revisit claims for and against the motor theory of speech perception and the relevance of mirror neurons, and extracts some implications for the reframing of the motor theory.
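
    As an illustration of the model's core tenet, the Python sketch below (a toy rendition, not the authors' implementation) has a listener maintain a probability table linking incoming sounds to native phonemes, score lexical candidates with that table, and then use a contextual word hypothesis to shift the table so that later accented tokens are recognized more reliably. The phoneme inventory, lexicon, initial bias, and learning rate are all illustrative assumptions.

```python
# Toy native phoneme inventory and lexicon (illustrative assumptions, not the paper's data).
PHONEMES = ["d", "t", "e", "eI"]
LEXICON = {"day": ["d", "eI"], "deh": ["d", "e"], "tay": ["t", "eI"]}

prob = {}  # prob[sound][phoneme]: belief that an incoming sound maps to a native phoneme

def dist(sound):
    """Lazily initialise the mapping for a sound, mildly biased toward an identical phoneme."""
    if sound not in prob:
        base = {p: (4.0 if p == sound else 1.0) for p in PHONEMES}
        total = sum(base.values())
        prob[sound] = {p: v / total for p, v in base.items()}
    return prob[sound]

def _score(sounds, phones):
    score = 1.0
    for s, p in zip(sounds, phones):
        score *= dist(s)[p]
    return score

def recognize(sounds):
    """Score each same-length lexical candidate by the product of mapping probabilities."""
    scores = {w: _score(sounds, phones)
              for w, phones in LEXICON.items() if len(phones) == len(sounds)}
    return max(scores, key=scores.get), scores

def update(sounds, hypothesized_word, lr=0.3):
    """Core tenet: use the word hypothesis to pull the sound-to-phoneme table toward its phonemes."""
    for s, p in zip(sounds, LEXICON[hypothesized_word]):
        for q in PHONEMES:
            target = 1.0 if q == p else 0.0
            dist(s)[q] += lr * (target - dist(s)[q])

# An accented /eI/ that the listener hears as an unfamiliar sound "e:".
utterance = ["d", "e:"]
print(recognize(utterance)[1])   # before adaptation: "day" and "deh" tie
update(utterance, "day")         # context supplies the word hypothesis "day"
print(recognize(utterance)[1])   # after: "day" now outscores "deh" for the same input
```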

    Large-Scale Cortical Functional Organization and Speech Perception across the Lifespan

    Aging is accompanied by substantial changes in brain function, including functional reorganization of large-scale brain networks. Such differences in network architecture have been reported both at rest and during cognitive task performance, but an open question is whether these age-related differences show task-dependent effects or represent only task-independent changes attributable to a common factor (i.e., underlying physiological decline). To address this question, we used graph theoretic analysis to construct weighted cortical functional networks from hemodynamic (functional MRI) responses in 12 younger and 12 older adults during a speech perception task performed in both quiet and noisy listening conditions. Functional networks were constructed for each subject and listening condition based on inter-regional correlations of the fMRI signal among 66 cortical regions, and network measures of global and local efficiency were computed. Across listening conditions, older adult networks showed significantly decreased global (but not local) efficiency relative to younger adults after normalizing measures to surrogate random networks. Although listening condition produced no main effects on whole-cortex network organization, a significant age group × listening condition interaction was observed. Additionally, an exploratory analysis of regional effects uncovered age-related declines in both global and local efficiency concentrated exclusively in auditory areas (bilateral superior and middle temporal cortex), further suggestive of specificity to the speech perception tasks. Global efficiency also correlated positively with mean cortical thickness across all subjects, establishing gross cortical atrophy as a task-independent contributor to age-related differences in functional organization. Together, our findings provide evidence of age-related disruptions in cortical functional network organization during speech perception tasks, and suggest that although task-independent effects such as cortical atrophy clearly underlie age-related changes in cortical functional organization, age-related differences also demonstrate sensitivity to task domains.
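
    For readers unfamiliar with the graph-theoretic measures involved, the sketch below outlines an assumed version of this kind of pipeline in Python (using numpy and networkx, not the authors' code): build a weighted network from inter-regional correlations of BOLD time series, compute global efficiency from weighted shortest paths, and normalize it against surrogate networks with shuffled edge weights. Local efficiency can be computed analogously on each node's neighbourhood subgraph. The region count, positive-weight thresholding, and surrogate scheme are assumptions.

```python
import numpy as np
import networkx as nx

def global_efficiency_weighted(W):
    """Mean inverse shortest-path length, with edge distance = 1 / correlation weight."""
    n = W.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if W[i, j] > 0:
                G.add_edge(i, j, distance=1.0 / W[i, j])
    lengths = dict(nx.all_pairs_dijkstra_path_length(G, weight="distance"))
    eff = sum(1.0 / lengths[i][j] for i in range(n) for j in range(n)
              if i != j and j in lengths.get(i, {}))
    return eff / (n * (n - 1))

def normalized_efficiency(ts, n_surrogates=20, seed=0):
    """ts: time points x regions BOLD matrix. Returns efficiency relative to random surrogates."""
    rng = np.random.default_rng(seed)
    W = np.corrcoef(ts.T)                # inter-regional correlation matrix
    np.fill_diagonal(W, 0)
    W = np.clip(W, 0, None)              # keep positive correlations only (assumption)
    e_real = global_efficiency_weighted(W)
    e_rand = []
    idx = np.triu_indices_from(W, k=1)
    for _ in range(n_surrogates):
        R = np.zeros_like(W)
        R[idx] = rng.permutation(W[idx])  # shuffle edge weights to build a random surrogate
        R = R + R.T
        e_rand.append(global_efficiency_weighted(R))
    return e_real / np.mean(e_rand)

# Example with synthetic data standing in for 66 cortical regions:
ts = np.random.default_rng(1).standard_normal((200, 66))
print(normalized_efficiency(ts))
```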

    Sensory and cognitive mechanisms of change detection in the context of speech

    The aim of this study was to dissociate the contributions of memory-based (cognitive) and adaptation-based (sensory) mechanisms underlying deviance detection in the context of natural speech. Twenty healthy right-handed native speakers of English participated in an event-related design in which the natural speech stimuli /de:/ (“deh”), /deI/ (“day”), /te:/ (“teh”), and /teI/ (“tay”) served as standards and deviants within a functional magnetic resonance imaging “oddball” paradigm designed to elicit the mismatch negativity component. Thus, “oddball” blocks could involve either a word deviant (“day”), resulting in a “word advantage” effect, or a non-word deviant (“deh” or “tay”). We utilized an experimental protocol controlling for refractoriness similar to that used previously when deviance detection was studied in the context of tones. Results showed that the cognitive and sensory mechanisms of deviance detection were located in the anterior and posterior auditory cortices, respectively, as was previously found in the context of tones. The cognitive effect, which was most robust for the word deviant, diminished in the “oddball” condition. In addition, the results indicated that the lexical status of the speech stimulus interacts with acoustic factors, exerting a top-down modulation of the extent to which novel sounds gain access to the subject’s awareness through memory-based processes. Thus, the more salient the deviant stimulus is, the more likely it is to be released from the effects of adaptation exerted by the posterior auditory cortex.
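
    As a concrete illustration of the sequencing logic, the sketch below generates an “oddball” block with rare deviants among repeating standards, plus an equiprobable control block in which the same stimulus remains infrequent but violates no regularity, which is one common way to control for refractoriness and separate adaptation-based from memory-based change detection. The probabilities, trial counts, and control design are assumptions, not the study's actual protocol.

```python
import random

STANDARD, DEVIANT = "deh", "day"                   # e.g. non-word standard, word deviant
CONTROL_SET = ["deh", "day", "teh", "tay"]         # equiprobable control stimuli (assumed)

def oddball_block(n_trials=100, p_deviant=0.15, min_gap=2, seed=0):
    """Deviants at roughly p_deviant, with at least `min_gap` standards between them."""
    rng = random.Random(seed)
    seq, since_last = [], min_gap
    for _ in range(n_trials):
        if since_last >= min_gap and rng.random() < p_deviant:
            seq.append(DEVIANT)
            since_last = 0
        else:
            seq.append(STANDARD)
            since_last += 1
    return seq

def control_block(n_trials=100, seed=0):
    """All four stimuli equally likely: "day" is still infrequent, but no repeating
    standard is established, so detecting it cannot rely on memory of a regularity."""
    rng = random.Random(seed)
    return [rng.choice(CONTROL_SET) for _ in range(n_trials)]

odd = oddball_block()
print(sum(s == DEVIANT for s in odd) / len(odd))   # empirical deviant rate
print(control_block()[:10])
```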

    Phonemes: Lexical access and beyond


    Categorical speech representation in human superior temporal gyrus.

    Speech perception requires the rapid and effortless extraction of meaningful phonetic information from a highly variable acoustic signal. A powerful example of this phenomenon is categorical speech perception, in which a continuum of acoustically varying sounds is transformed into perceptually distinct phoneme categories. We found that the neural representation of speech sounds is categorically organized in the human posterior superior temporal gyrus. Using intracranial high-density cortical surface arrays, we found that listening to synthesized speech stimuli varying in small and acoustically equal steps evoked distinct and invariant cortical population response patterns that were organized by their sensitivities to critical acoustic features. Phonetic category boundaries were similar between neurometric and psychometric functions. Although speech-sound responses were distributed, spatially discrete cortical loci were found to underlie specific phonetic discrimination. Our results provide direct evidence for acoustic-to-higher order phonetic level encoding of speech sounds in human language receptive cortex.
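
    The comparison of neurometric and psychometric functions can be made concrete with a short sketch: fit a logistic function to behavioural identification rates along an acoustically equal-step continuum and to the output of a decoder applied to the neural responses, then compare the 50% crossover (the category boundary) of each. The simulated data, continuum, and fitting choices below are assumptions, not the study's analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Sigmoid identification function; x0 is the category boundary."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

steps = np.arange(1, 11)                              # acoustically equal steps (assumed)
rng = np.random.default_rng(0)

# Simulated proportion of "category B" responses per step, from behaviour and from a
# decoder applied to neural population responses (both stand-ins for real data).
psycho = np.clip(logistic(steps, 5.5, 1.8) + rng.normal(0, 0.03, steps.size), 0, 1)
neuro = np.clip(logistic(steps, 5.2, 1.5) + rng.normal(0, 0.05, steps.size), 0, 1)

(p_x0, _), _ = curve_fit(logistic, steps, psycho, p0=[5, 1])
(n_x0, _), _ = curve_fit(logistic, steps, neuro, p0=[5, 1])

print(f"psychometric boundary ~ step {p_x0:.2f}")
print(f"neurometric boundary  ~ step {n_x0:.2f}")     # similar boundaries -> categorical coding
```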