Relationships between human auditory cortical structure and function
The human auditory cortex comprises multiple areas, largely distributed across the supratemporal plane, but the precise number and configuration of auditory areas and their functional significance have not yet been clearly established. In this paper, we discuss recent research concerning architectonic and functional organisation within the human auditory cortex, as well as architectonic and neurophysiological studies in non-human species, which can provide a broad conceptual framework for interpreting functional specialisation in humans. We review the pattern in human auditory cortex of the functional responses to various acoustic cues, such as frequency, pitch, sound level, temporal variation, motion and spatial location, and we discuss their correspondence to what is known about the organisation of the auditory cortex in other primates. There is some neuroimaging evidence of multiple tonotopically organised fields in humans and of functional specialisations of the fields in the processing of different sound features. It is thought that the primary area, on Heschl's gyrus, may have a larger involvement in processing basic sound features, such as frequency and level, and that posterior non-primary areas on the planum temporale may play a larger role in processing more spectrotemporally complex sounds. Ways in which current knowledge of auditory cortical organisation and different data analysis approaches may benefit future functional neuroimaging studies which seek to link auditory cortical structure and function are discussed.
In vivo functional and myeloarchitectonic mapping of human primary auditory areas
In contrast to vision, where retinotopic mapping alone can define areal borders, primary auditory areas such as A1 are best delineated by combining in vivo tonotopic mapping with postmortem cyto- or myeloarchitectonics from the same individual. We combined high-resolution (800 μm) quantitative T1 mapping with phase-encoded tonotopic methods to map primary auditory areas (A1 and R) within the "auditory core" of human volunteers. We first quantitatively characterize the highly myelinated auditory core in terms of shape, area, cortical depth profile, and position, with our data showing considerable correspondence to postmortem myeloarchitectonic studies, both in cross-participant averages and in individuals. The core region contains two "mirror-image" tonotopic maps oriented along the same axis as observed in macaque and owl monkey. We suggest that these two maps within the core are the human analogs of primate auditory areas A1 and R. The core occupies a much smaller portion of tonotopically organized cortex on the superior temporal plane and gyrus than is generally supposed. The multimodal approach to defining the auditory core will facilitate investigations of structure-function relationships and comparative neuroanatomical studies, and promises new biomarkers for diagnosis and clinical studies.
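The phase-encoded tonotopic procedure mentioned above assigns each voxel a preferred frequency from the timing (phase) of its response to a slowly sweeping stimulus. A minimal sketch of the core computation on a single synthetic voxel, with assumed parameter values (256 volumes, 8 sweep cycles, a 200 Hz to 8 kHz log-spaced sweep) that are illustrative, not the study's actual acquisition settings:

```python
import numpy as np

# Assumed acquisition: 256 volumes, 8 cycles of a low-to-high frequency sweep.
n_vols, n_cycles = 256, 8

# Synthetic voxel that responds partway through each sweep cycle:
# its response phase encodes its preferred frequency.
true_phase = 0.6 * 2 * np.pi  # this voxel "prefers" a frequency ~60% into the sweep
t = np.arange(n_vols)
signal = np.cos(2 * np.pi * n_cycles * t / n_vols - true_phase)
signal += 0.3 * np.random.default_rng(0).standard_normal(n_vols)  # measurement noise

# The phase of the Fourier component at the sweep frequency gives the
# voxel's position within the sweep, i.e. its best frequency.
spectrum = np.fft.rfft(signal)
est_phase = -np.angle(spectrum[n_cycles]) % (2 * np.pi)

# Map phase (0..2*pi) onto the assumed log-spaced sweep range.
f_lo, f_hi = 200.0, 8000.0
best_freq = f_lo * (f_hi / f_lo) ** (est_phase / (2 * np.pi))
print(round(best_freq))  # ~1.8 kHz for this synthetic voxel
```

Repeating this per voxel, and averaging sweeps run in both directions to cancel hemodynamic delay, yields the tonotopic map whose mirror-reversed gradients mark the A1/R border.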
Periodicity and frequency coding in human auditory cortex
Understanding the neural coding of pitch and frequency is fundamental to the understanding of speech comprehension, music perception and the segregation of concurrent sound sources. Neuroimaging has made important contributions to defining the pattern of frequency sensitivity in humans. However, the precise way in which pitch sensitivity relates to these frequency-dependent regions remains unclear, and single-frequency tones cannot resolve the question because their pitch always equals their frequency. Here, temporal pitch (periodicity) and frequency coding were dissociated using stimuli that were bandpassed in different frequency spectra (centre frequencies 800 and 4500 Hz), yet were matched in their pitch characteristics. Cortical responses to both pitch-evoking stimuli typically occurred within a region that was also responsive to low frequencies. Its location extended across both primary and nonprimary auditory cortex. An additional control experiment demonstrated that this pitch-related effect was not simply caused by the generation of combination tones. Our findings support recent neurophysiological evidence for a cortical representation of pitch at the lateral border of the primary auditory cortex, while revealing new evidence that additional auditory fields are also likely to play a role in pitch coding.
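Stimuli of the kind described, matched in periodicity but occupying different spectral regions, can be approximated by harmonic complexes confined to separate passbands. A toy sketch (assumed values: 100 Hz fundamental, passbands near the abstract's 800 Hz and 4500 Hz centre frequencies) showing that the same 10 ms periodicity, and hence the same temporal pitch, survives in both bands:

```python
import numpy as np

fs = 16000            # sample rate (Hz), assumed
f0 = 100              # fundamental: 10 ms period
dur = 0.25
n = int(fs * dur)
t = np.arange(n) / fs

# Harmonic complexes built from only the harmonics inside each passband,
# so both stimuli share 100 Hz periodicity but differ in spectral region.
def bandpassed_complex(lo, hi):
    harmonics = [h for h in range(1, fs // (2 * f0)) if lo <= h * f0 <= hi]
    return sum(np.sin(2 * np.pi * h * f0 * t) for h in harmonics)

low_band = bandpassed_complex(600, 1000)    # harmonics 6-10
high_band = bandpassed_complex(4300, 4700)  # harmonics 43-47

def periodicity_lag(x):
    """Lag (s) of the first autocorrelation peak past 2 ms: the temporal pitch period."""
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    search_from = int(fs / 500)          # skip lags shorter than 2 ms
    return (search_from + np.argmax(ac[search_from:])) / fs

print(periodicity_lag(low_band), periodicity_lag(high_band))  # both 0.010 s
```

Because both signals yield the same autocorrelation peak despite non-overlapping spectra, any cortical response common to the two cannot be attributed to frequency content alone, which is the logic behind the dissociation in this study.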
Speaker Normalization Using Cortical Strip Maps: A Neural Model for Steady-State Vowel Categorization
Auditory signals of speech are speaker-dependent, but representations of language meaning are speaker-independent. The transformation from speaker-dependent to speaker-independent language representations enables speech to be learned and understood from different speakers. A neural model is presented that performs speaker normalization to generate a pitch-independent representation of speech sounds, while also preserving information about speaker identity. This speaker-invariant representation is categorized into unitized speech items, which input to sequential working memories whose distributed patterns can be categorized, or chunked, into syllable and word representations. The proposed model fits into an emerging model of auditory streaming and speech categorization. The auditory streaming and speaker normalization parts of the model both use multiple strip representations and asymmetric competitive circuits, thereby suggesting that these two circuits arose from similar neural designs. The normalized speech items are rapidly categorized and stably remembered by Adaptive Resonance Theory circuits. Simulations use synthesized steady-state vowels from the Peterson and Barney [J. Acoust. Soc. Am. 24, 175-184 (1952)] vowel database and achieve accuracy rates similar to those achieved by human listeners. These results are compared to behavioral data and other speaker normalization models. Funding: National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624).
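The core idea, rescaling away speaker-dependent vocal-tract differences so that vowel categories become comparable across speakers, can be illustrated with a much simpler stand-in than the strip-map/ART model itself. The sketch below uses geometric-mean formant normalization (a generic technique, not the paper's mechanism) on approximate Peterson and Barney average formant values:

```python
import numpy as np

# Approximate average formant frequencies (F1, F2, in Hz) from
# Peterson & Barney (1952); illustrative values, not the paper's model.
male = {"i": (270, 2290), "a": (730, 1090), "u": (300, 870)}
child = {"i": (370, 3200), "a": (1030, 1370), "u": (430, 1170)}

def normalize(formants):
    """Toy speaker normalization: rescale formants by their geometric mean,
    removing overall vocal-tract length differences. (A generic stand-in for
    the model's strip-map normalization, which works quite differently.)"""
    f = np.array(formants, dtype=float)
    return f / np.exp(np.mean(np.log(f)))

# Build speaker-invariant templates from the male tokens...
templates = {v: normalize(f) for v, f in male.items()}

# ...then categorize the child's tokens by nearest normalized template.
matches = {}
for vowel, formants in child.items():
    x = normalize(formants)
    matches[vowel] = min(templates, key=lambda v: np.linalg.norm(templates[v] - x))
print(matches)  # each child vowel matches the same male vowel
```

Despite the child's formants lying hundreds of hertz above the adult male's, each normalized token lands nearest its own category template, which is the speaker-invariance property the model is built to achieve (by neural rather than arithmetic means).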
The mechanisms of tinnitus: perspectives from human functional neuroimaging
In this review, we highlight the contribution of advances in human neuroimaging to the current understanding of central mechanisms underpinning tinnitus and explain how interpretations of neuroimaging data have been guided by animal models. The primary motivation for studying the neural substrates of tinnitus in humans has been to demonstrate objectively its representation in the central auditory system and to develop a better understanding of its diverse pathophysiology and of the functional interplay between sensory, cognitive and affective systems. The ultimate goal of neuroimaging is to identify subtypes of tinnitus in order to better inform treatment strategies. The three neural mechanisms considered in this review may provide a basis for classifying tinnitus. While human neuroimaging evidence strongly implicates the central auditory system and emotional centres in tinnitus, evidence for the precise contribution from the three mechanisms is unclear because the data are somewhat inconsistent. We consider a number of methodological issues limiting the field of human neuroimaging and recommend approaches to overcome potential inconsistency in results arising from poorly matched participants, lack of appropriate controls and low statistical power.
Functional Properties of Human Auditory Cortical Fields
While auditory cortex in non-human primates has been subdivided into multiple functionally specialized auditory cortical fields (ACFs), the boundaries and functional specialization of human ACFs have not been defined. In the current study, we evaluated whether a widely accepted primate model of auditory cortex could explain regional tuning properties of fMRI activations on the cortical surface to attended and non-attended tones of different frequency, location, and intensity. The limits of auditory cortex were defined by voxels that showed significant activations to non-attended sounds. Three centrally located fields with mirror-symmetric tonotopic organization were identified and assigned to the three core fields of the primate model, while surrounding activations were assigned to belt fields following procedures similar to those used in macaque fMRI studies. The functional properties of core, medial belt, and lateral belt field groups were then analyzed. Field groups were distinguished by tonotopic organization, frequency selectivity, intensity sensitivity, contralaterality, binaural enhancement, attentional modulation, and hemispheric asymmetry. In general, core fields showed greater sensitivity to sound properties than did belt fields, while belt fields showed greater attentional modulation than core fields. Significant distinctions in intensity sensitivity and contralaterality were seen between adjacent core fields A1 and R, while multiple differences in tuning properties were evident at boundaries between adjacent core and belt fields. The reliable differences in functional properties between fields and field groups suggest that the basic primate pattern of auditory cortex organization is preserved in humans. A comparison of the sizes of functionally defined ACFs in humans and macaques reveals a significant relative expansion in human lateral belt fields implicated in the processing of speech.
Tuning of Human Modulation Filters Is Carrier-Frequency Dependent
Spectral and temporal processing in human auditory cortex
Hierarchical processing suggests that spectrally and temporally complex stimuli will evoke more activation than do simple stimuli, particularly in non-primary auditory fields. This hypothesis was tested using a single-frequency tone and a harmonic tone, each presented either static or frequency-modulated, yielding four stimuli. We interpret the location of differences in activation by drawing comparisons between fMRI and human cytoarchitectonic data, reported in the same brain space. Harmonic tones produced more activation than single tones in right Heschl's gyrus (HG) and bilaterally in the lateral supratemporal plane (STP). Activation was also greater to frequency-modulated tones than to static tones in these areas, as well as in left HG and bilaterally in an anterolateral part of the STP and the superior temporal sulcus. An elevated response magnitude to both frequency-modulated tones was found in the lateral portion of the primary area, and putatively in three surrounding non-primary regions on the lateral STP (one anterior and two posterior to HG). A focal site on the posterolateral STP showed an especially high response to the frequency-modulated harmonic tone. Our data highlight the involvement of both primary and lateral non-primary auditory regions.