    Physiology of Higher Central Auditory Processing and Plasticity

    Binaural cue processing requires central auditory function, as damage to the auditory cortex and other cortical regions impairs sound localization. Sound localization cues are initially extracted by brainstem nuclei, but how the cerebral cortex supports spatial sound perception remains unclear. This chapter reviews the evidence that spatial encoding within and beyond the auditory cortex supports sound localization, including the integration of information across sound frequencies and localization cues. In particular, the chapter discusses the role of brain regions that may be specialized for extracting and transforming the spatial aspects of sounds, a network extending from sensory to parietal and prefrontal cortices. It considers how the encoding of spatial information changes with attention and how spatial processing fits within the broader context of auditory scene analysis by cortical networks. The importance of neural plasticity in binaural processing is outlined, including a discussion of how changes in the mapping of localization cues to spatial position allow listeners to adapt to changes in auditory input throughout life and after hearing loss. The chapter ends by summarizing some of the open questions about the central processing of binaural cues and how they may be answered.

    What's what in auditory cortices?

    Distinct anatomical and functional pathways are postulated for analysing a sound's object-related ('what') and space-related ('where') information. It remains unresolved to what extent distinct or overlapping neural resources subserve specific object-related dimensions (e.g. who is speaking and what is being said can both be derived from the same acoustic input). To address this issue, we recorded high-density auditory evoked potentials (AEPs) while participants selectively attended to and discriminated sounds according to their pitch, speaker identity, or uttered syllable ('what' dimensions) or their location ('where'). Sound acoustics were held constant; the only manipulation was the sound dimension that participants had to attend to, which was varied across blocks. AEPs from healthy participants were analysed within an electrical neuroimaging framework to differentiate modulations in response strength from modulations in response topography, the latter of which necessarily follow from changes in the configuration of underlying sources. There were no behavioural differences in discrimination of sounds across the four feature dimensions. As early as 90 ms post-stimulus onset, AEP topographies differed across 'what' conditions, supporting a functional sub-segregation within the auditory 'what' pathway. This study characterises the spatio-temporal dynamics of segregated, yet parallel, processing of multiple sound object-related feature dimensions when selective attention is directed to them.