Responses of Auditory Nerve and Anteroventral Cochlear Nucleus Fibers to Broadband and Narrowband Noise: Implications for the Sensitivity to Interaural Delays
The quality of temporal coding of sound waveforms in the monaural afferents that converge on binaural neurons in the brainstem limits sensitivity to temporal differences between the two ears. The anteroventral cochlear nucleus (AVCN) houses the cells that project to the binaural nuclei; these cells are known to have enhanced temporal coding of low-frequency sounds relative to auditory nerve (AN) fibers. We applied a coincidence analysis within the framework of detection theory to investigate the extent to which AVCN processing affects interaural time delay (ITD) sensitivity. Using monaural spike trains recorded in response to a 1-s broadband or narrowband noise token, we emulated the binaural task of ITD discrimination and calculated just noticeable differences (jnds). The ITD jnds derived from AVCN neurons were lower than those derived from AN fibers, showing that the enhanced temporal coding in the AVCN improves binaural sensitivity to ITDs. AVCN processing also increased the dynamic range of ITD sensitivity and changed the shape of the frequency dependence of ITD sensitivity. The bandwidth dependence of ITD jnds from both AN and AVCN fibers agreed with psychophysical data. These findings demonstrate that monaural preprocessing in the AVCN improves the temporal code in a way that benefits binaural processing and may be crucial in achieving the exquisite sensitivity to ITDs observed in binaural pathways.
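The coincidence-plus-detection-theory approach described above can be illustrated with a minimal sketch. This is not the authors' actual analysis pipeline: the spike trains here are synthetic (von Mises phase-locked, a hypothetical stand-in for recorded AN/AVCN responses), and the 50-µs coincidence window, d' ≥ 1 criterion, and delay grid are illustrative assumptions. The idea is the same: count coincidences between two monaural spike trains as a function of an imposed delay, and take the jnd as the smallest delay whose coincidence-count distribution is separable from the zero-delay distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def phase_locked_spikes(freq=500.0, dur=1.0, rate=150.0, kappa=5.0):
    """Synthetic phase-locked spike train (hypothetical stand-in for a
    recorded AN or AVCN response); returns sorted spike times in seconds."""
    n = rng.poisson(rate * dur)
    cycle = rng.integers(0, int(freq * dur), n)   # stimulus cycle of each spike
    phase = rng.vonmises(0.0, kappa, n)           # phase locking (von Mises)
    return np.sort((cycle + (phase + np.pi) / (2 * np.pi)) / freq)

def coincidence_count(a, b, delay, window=50e-6):
    """Count spike pairs from trains a and b (b shifted by `delay`) that
    fall within the coincidence window, as a binaural coincidence
    detector would."""
    return int(sum(np.sum(np.abs((b + delay) - t) < window) for t in a))

def itd_jnd(n_reps=20, delays=np.arange(0.0, 501e-6, 50e-6)):
    """Smallest imposed delay whose coincidence-count distribution differs
    from the zero-delay distribution by d' >= 1 (detection-theory
    criterion); np.inf if none within the tested range."""
    counts = {d: [coincidence_count(phase_locked_spikes(),
                                    phase_locked_spikes(), d)
                  for _ in range(n_reps)] for d in delays}
    mu0, sd0 = np.mean(counts[0.0]), np.std(counts[0.0])
    for d in delays[1:]:
        mu, sd = np.mean(counts[d]), np.std(counts[d])
        if abs(mu0 - mu) / np.sqrt((sd0**2 + sd**2) / 2) >= 1.0:
            return float(d)
    return np.inf
```

Shifting one train by half a stimulus period destroys most coincidences, which is what makes the count a usable decision variable for delay discrimination.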
Where mathematics and hearing science meet: low peak factor signals and their role in hearing research
In his scientific work, Manfred Schroeder touched many different areas within acoustics. Two disciplines repeatedly show up when his contributions are characterized: his strong interest in mathematics and his interest in the perceptual side of acoustics. In this chapter, we focus on the latter. We will first give a compressed account of Schroeder’s direct contributions to psychoacoustics, and emphasize the relation with other acoustics disciplines such as speech processing and room acoustics. In the main part of the chapter we will then describe psychoacoustic work based on or inspired by ideas from Manfred Schroeder. Due to Schroeder’s success in securing a modern online computer for the Drittes Physikalisches Institut after returning to Göttingen in 1969, his research students had a head start in using digital signal processing in room acoustics for digital sound field synthesis and in introducing digital computers into experimental and theoretical hearing research. Since then, the freedom to construct and use specific acoustic stimuli in behavioral and also physiological research has grown steadily, making it possible to test many of Schroeder’s early ideas in behavioral experiments and applications. In parallel, computer models of auditory perception have allowed researchers to analyze and predict how specific properties of acoustic stimuli influence the perception of a listener. As in other fields of physics, the close interplay between experimental tests and quantitative models has proven essential in advancing our understanding of human hearing.
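The "low peak factor signals" of the chapter title refer to Schroeder's well-known phase rule for harmonic complexes: choosing component phases φ_n = ±π n(n−1)/N for an N-component, equal-amplitude complex flattens the temporal envelope and greatly lowers the peak (crest) factor compared with, say, zero phases. The sketch below, with illustrative parameter choices (31 harmonics of 100 Hz), shows the effect:

```python
import numpy as np

def schroeder_complex(n_harmonics=31, f0=100.0, fs=16000, dur=0.5, sign=-1):
    """Equal-amplitude harmonic complex with Schroeder phases
    phi_n = sign * pi * n * (n - 1) / N, which spreads the energy of each
    period over time and so lowers the waveform's peak factor."""
    t = np.arange(int(fs * dur)) / fs
    n = np.arange(1, n_harmonics + 1)
    phases = sign * np.pi * n * (n - 1) / n_harmonics
    x = np.sum(np.sin(2 * np.pi * f0 * n[:, None] * t + phases[:, None]),
               axis=0)
    return x / np.max(np.abs(x))

def crest_factor(x):
    """Peak factor: peak magnitude divided by RMS."""
    return np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))
```

A zero-phase complex with the same amplitude spectrum is pulse-like and has a much higher crest factor, which is exactly why Schroeder-phase stimuli became popular in hearing research: identical long-term spectrum, very different waveform envelope.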
Cortical mechanisms of spatial hearing
Humans and other animals use spatial hearing to rapidly localize events in the environment. However, neural encoding of sound location is a complex process involving the computation and integration of multiple spatial cues that are not represented directly in the sensory organ (the cochlea). Our understanding of these mechanisms has increased enormously in the past few years. Current research is focused on the contribution of animal models to understanding human spatial audition, the effects of behavioural demands on neural sound location encoding, the emergence of a cue-independent location representation in the auditory cortex, and the relationship between single-source and concurrent location encoding in complex auditory scenes. Furthermore, computational modelling seeks to unravel how neural representations of sound source locations are derived from the complex binaural waveforms of real-life sounds. In this article, we review and integrate the latest insights from neurophysiological, neuroimaging and computational modelling studies of mammalian spatial hearing. We propose that the cortical representation of sound location emerges from recurrent processing taking place in a dynamic, adaptive network of early (primary) and higher-order (posterior-dorsal and dorsolateral prefrontal) auditory regions. This cortical network accommodates changing behavioural requirements and is especially relevant for processing the location of real-life, complex sounds and complex auditory scenes.
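One of the spatial cues the abstract alludes to, the interaural time difference, can be extracted from binaural waveforms by cross-correlating the two ear signals, a standard first stage in many computational localization models. The sketch below is a generic illustration of that cue computation, not any specific model from the review; the ±1 ms search range is an assumption roughly matching the physiological ITD range for human head size:

```python
import numpy as np

def estimate_itd(left, right, fs, max_itd=1e-3):
    """Estimate the interaural time difference (seconds) as the lag that
    maximizes the cross-correlation of the two ear signals, searched
    within +/- max_itd. Positive values mean the right ear lags."""
    max_lag = int(round(max_itd * fs))
    lags = np.arange(-max_lag, max_lag + 1)
    xc = [np.sum(left[max(0, -l):len(left) - max(0, l)] *
                 right[max(0, l):len(right) - max(0, -l)])
          for l in lags]
    return lags[int(np.argmax(xc))] / fs
```

Real-life sounds complicate this picture (reverberation, concurrent sources, frequency-dependent cues), which is precisely the regime the reviewed modelling work targets; this sketch only shows the idealized single-source case.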