Neural Models of Subcortical Auditory Processing
An important feature of the auditory system is its ability to distinguish many simultaneous
sound sources. The primary goal of this work was to understand how a robust, preattentive
analysis of the auditory scene is accomplished by the subcortical auditory system.
Reasonably accurate modelling of the morphology and organisation of the relevant auditory
nuclei was considered to be of great importance. The formulation of plausible models and their
subsequent simulation proved invaluable in elucidating biological processes and in
highlighting areas of uncertainty.
In the thesis, a review of important aspects of mammalian auditory processing is presented
and used as a basis for the subsequent modelling work. For each aspect of auditory
processing modelled, psychophysical results are described and existing models reviewed,
before the models used here are described and simulated. The auditory processes modelled
include the peripheral system and the production of tonotopic maps of the spectral content
of complex acoustic stimuli and of modulation frequency or periodicity. A
model of the formation of sequential associations between successive sounds is described,
and the model is shown to be capable of emulating a wide range of psychophysical
behaviour. The grouping of related spectral components and the development of pitch
perception are also investigated. Finally, a critical assessment of the work and ideas for
future developments are presented.
The principal contributions of this work are the further development of a model for pitch
perception and the development of a novel architecture for the sequential association of
the resulting perceptual groups. In the process of developing these ideas, further insights
into subcortical auditory processing were gained, and explanations for a number of puzzling
psychophysical characteristics were suggested.

Royal Naval Engineering College, Manadon, Plymouth
Temporal coding of the periodicity of monaural and binaural complex tones in the guinea pig auditory brainstem
Humans report a strong pitch percept in response to a complex tone – the sum of a series of harmonics – presented to either a single ear (‘monaurally’) or both ears (‘diotically’). Interspike interval histograms of responses of neurons in the auditory system to monaural complex tones show a peak at the period of the pitch reported by humans – a ‘neural correlate of pitch’. However, the same pitch percept can be generated by presenting complexes with harmonics distributed across both ears (‘dichotically’). This requires combination of the neural signals underlying pitch from both sides of the auditory system, termed ‘binaural fusion’. Temporal coding generally deteriorates along the auditory pathway, so binaural fusion should occur at a relatively early stage. One prime candidate site is the superior olivary complex (SOC).
Although the guinea pig auditory system has been extensively studied, this work is the first in vivo investigation of the guinea pig SOC. Cells of the lateral superior olive (LSO) show sensitivity to interaural level differences; medial superior olive (MSO) cells show sensitivity to interaural time differences. Additionally, cells with responses similar to the medial nucleus of the trapezoid body (MNTB) and superior paraolivary nucleus (SPN) of other species were found in the guinea pig SOC. Presumed MNTB cells showed a three-component spike waveform shape; presumed SPN cells responded at the offset of contralaterally-presented stimuli.
MSO and LSO cells respond to the overall pitch of complex tones, even if the monaural waveforms presented to each ear differ; this is consistent with human perception. In contrast, cells of the ventral cochlear nucleus, which provide the main input to MSO and LSO cells, do not show evidence of a binaural pitch response. In conclusion, SOC cells are able to encode the pitch of binaural complex tones in their spike timing patterns.

MRC Studentship
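The ‘neural correlate of pitch’ described above, a peak in the interspike-interval histogram at the period of the reported pitch, can be illustrated with a toy simulation. Everything below is invented for illustration (the spike train, jitter, and interval statistics are synthetic, not guinea-pig data):

```python
import numpy as np

def isi_histogram(spike_times, bin_width=0.1e-3, max_interval=20e-3):
    """First-order interspike-interval (ISI) histogram."""
    isis = np.diff(np.sort(spike_times))
    bins = np.arange(0.0, max_interval + bin_width, bin_width)
    counts, edges = np.histogram(isis, bins=bins)
    return counts, edges

# Synthetic spike train phase-locked to a 200-Hz complex tone: most intervals
# span one stimulus period (5 ms), some span two ('missed' cycles), plus jitter.
rng = np.random.default_rng(0)
period = 1.0 / 200.0
n_cycles = rng.choice([1, 2], size=1000, p=[0.7, 0.3])
spikes = np.cumsum(n_cycles * period + rng.normal(0.0, 0.1e-3, 1000))
counts, edges = isi_histogram(spikes)
peak = edges[np.argmax(counts)]  # histogram peak lies near the 5-ms pitch period
```

Reading the dominant interval directly off the histogram peak is the simplest version of such an interval-based pitch readout.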
Encoding of Temporal Sound Features in the Rodent Superior Paraolivary Nucleus
The superior paraolivary nucleus (SPON) is a prominent cell group in the mammalian brainstem. SPON neurons are part of a monaural circuit that encodes temporal sound features in the ascending auditory pathway. Such attributes of acoustic signals are critical for speech perception in humans and likely equally important in animal communication. While basic properties of SPON neurons have been characterized in some detail, a comprehensive examination of the mechanisms that underlie their ability to precisely represent temporal information is lacking. Furthermore, little is known about how the SPON impacts its primary target, the inferior colliculus. Combinations of electrophysiological, pharmacological and histological techniques were used to investigate SPON neuronal responses to stimuli whose temporal parameters were systematically varied. In addition, properties of neurons in the inferior colliculus were examined before and after reversible inactivation of the SPON in order to explore its functional role in hearing. An after-hyperpolarization rebound mechanism was shown to generate the hallmark offset response of SPON neurons in vitro. Single-cell labeling techniques provided a detailed morphological description of cell bodies and dendrites and revealed a homogeneous population of neurons. Moreover, subthreshold ionic currents and synaptic neurotransmitter receptor systems were shown to mediate the precision of responses to temporal features of sound in vivo. It was also demonstrated that input from the SPON shapes the response properties of inferior colliculus neurons to both periodic and singular temporal stimulus features. Taken together, these results suggest that the SPON has a substantial role in temporal processing that has not been taken into account in the current understanding of the central auditory system.
Demonstrating a functional role for the SPON in hearing will expand our knowledge of the neuronal circuits responsible for representing biologically important sounds in both normal-hearing and hearing-impaired states.
Noise-induced cochlear neuronal degeneration and its role in hyperacusis- and tinnitus-like behavior
Thesis (Ph.D. in Speech and Hearing Bioscience and Technology)--Harvard-MIT Program in Health Sciences and Technology, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 46-57).

Perceptual abnormalities such as hyperacusis and tinnitus often occur following acoustic overexposure. Although such exposure can also result in permanent threshold elevation, some individuals with noise-induced hyperacusis or tinnitus show clinically normal thresholds. Recent work in animals has shown that noise exposure can cause permanent degeneration of the cochlear nerve despite complete threshold recovery and lack of hair cell damage (Kujawa and Liberman, J Neurosci 29:14077-14085, 2009). Here we ask whether this noise-induced primary neuronal degeneration results in abnormal auditory behavior, indexed by the acoustic startle response and prepulse inhibition (PPI) of startle. Responses to tones and to broadband noise were measured in mice exposed either to a neuropathic exposure causing primary neuronal degeneration, or to a lower-intensity, nonneuropathic noise, and in unexposed controls. Mice with cochlear neuronal loss displayed hyper-responsivity to sound, as evidenced by lower startle thresholds and enhanced PPI, while exposed mice without neuronal loss showed control-like responses. Gap PPI tests, often used to assess tinnitus, revealed spectrally restricted, as well as broadband, gap-detection deficits in mice with primary neuronal degeneration, but not in exposed mice without neuropathy. Crossmodal PPI tests and behavioral assays of anxiety revealed no significant differences among groups, suggesting that the changes in startle-based auditory behavior reflect a neuropathy-related alteration specifically of auditory neural pathways.
Despite a significantly reduced cochlear nerve response, seen as a reduced wave 1 of the auditory brainstem response, later peaks were unchanged or enhanced, suggesting neural hyperactivity in the auditory brainstem that could underlie the abnormal behavior on the startle tests. Taken together, the results suggest a role for cochlear primary neuronal degeneration in central neural excitability and, by extension, in the generation of tinnitus and hyperacusis.

by Ann E. Hickox. Ph.D. in Speech and Hearing Bioscience and Technology
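The startle-based measures above rest on a simple quantity: prepulse inhibition expressed as the percent reduction in startle amplitude when a prepulse precedes the startle stimulus. A minimal sketch of that standard calculation (the amplitude values are hypothetical, not from the study):

```python
def ppi_percent(startle_alone, startle_with_prepulse):
    """Prepulse inhibition (PPI) as percent reduction of the startle response.
    Higher values mean stronger inhibition; 'enhanced PPI' corresponds to a
    larger percentage at a given prepulse level."""
    return 100.0 * (1.0 - startle_with_prepulse / startle_alone)

# Hypothetical mean startle amplitudes in arbitrary units:
ppi = ppi_percent(2.0, 0.8)  # 60.0, i.e. the prepulse cut the startle by 60%
```

In the gap-PPI variant used to probe tinnitus, a silent gap in background noise plays the role of the prepulse, so a gap-detection deficit shows up as reduced inhibition.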
Reverberation impairs brainstem temporal representations of voiced vowel sounds: challenging "periodicity-tagged" segregation of competing speech in rooms.
The auditory system typically processes information from concurrently active sound sources (e.g., two voices speaking at once), in the presence of multiple delayed, attenuated and distorted sound-wave reflections (reverberation). Brainstem circuits help segregate these complex acoustic mixtures into "auditory objects." Psychophysical studies demonstrate a strong interaction between reverberation and fundamental-frequency (F0) modulation, leading to impaired segregation of competing vowels when segregation is on the basis of F0 differences. Neurophysiological studies of complex-sound segregation have concentrated on sounds with steady F0s, in anechoic environments. However, F0 modulation and reverberation are quasi-ubiquitous. We examine the ability of 129 single units in the ventral cochlear nucleus (VCN) of the anesthetized guinea pig to segregate the concurrent synthetic vowel sounds /a/ and /i/, based on temporal discharge patterns under closed-field conditions. We address the effects of added real-room reverberation, F0 modulation, and the interaction of these two factors, on brainstem neural segregation of voiced speech sounds. A firing-rate representation of single vowels' spectral envelopes is robust to the combination of F0 modulation and reverberation: local firing-rate maxima and minima across the tonotopic array code vowel-formant structure. However, single-vowel F0-related periodicity information in shuffled inter-spike interval distributions is significantly degraded in the combined presence of reverberation and F0 modulation. Hence, segregation of double vowels' spectral energy into two streams (corresponding to the two vowels) on the basis of temporal discharge patterns is impaired by reverberation, specifically when F0 is modulated. All unit types (primary-like, chopper, onset) are similarly affected.
These results offer neurophysiological insights into the perceptual organization of complex acoustic scenes under realistically challenging listening conditions.

This work was supported by a grant from the BBSRC to Ian M. Winter. Mark Sayles received a University of Cambridge MB/PhD studentship. Tony Watkins (University of Reading, UK) provided the real-room impulse responses. Portions of the data analysis and manuscript preparation were performed by Mark Sayles during the course of an Action on Hearing Loss funded UK–US Fulbright Commission professional scholarship held in the Auditory Neurophysiology and Modeling Laboratory at Purdue University, USA. Mark Sayles is currently supported by a post-doctoral fellowship from Fonds Wetenschappelijk Onderzoek—Vlaanderen, held in the Laboratory of Auditory Neurophysiology at KU Leuven, Belgium. This paper was originally published in Frontiers in Systems Neuroscience (Sayles M, Stasiak A, Winter IM, Frontiers in Systems Neuroscience 2015, 8, 248, doi:10.3389/fnsys.2014.00248).
Role of Inhibition in Binaural Processing
The medial and lateral superior olives (MSO, LSO) are the lowest-order cell groups in the mammalian auditory circuit to receive massive binaural input. The MSO functions in part to encode interaural time differences (ITDs), the predominant cue for localization of low-frequency sounds. Binaural inputs to the MSO consist of excitatory projections from the cochlear nuclei (CN) and inhibitory projections from both the medial nucleus of the trapezoid body (MNTB) and the lateral nucleus of the trapezoid body (LNTB). The interaction of excitatory and inhibitory currents within an MSO cell's soma and dendrites, over the backdrop of its intrinsic ionic conductances, imbues these neurons with ITD sensitivity. Lloyd Jeffress proposed a coincidence detection circuit in which arrays of neurons receive sub-threshold excitatory inputs via delay lines, so that sound location is represented as a place code of activity patterns within the cell group (Jeffress, 1948). The Jeffress place-code model later found a neural instantiation in the MSO. However, recent in vivo studies (McAlpine et al., 2001; Brand et al., 2002) have shown that peak discharge rates do not fall within the ecological range of ITDs as the Jeffress model predicts; instead, ITD is coded by changes in discharge rate. The timing of inhibition relative to excitation modulates the discharge rates of MSO cells (Brand et al., 2002; Chirila et al., 2007); however, details of this circuit, such as the onset time of inhibition, are not well known. Although the MNTB and LNTB have been investigated in vivo and in vitro, they have not been well characterized with respect to their function in ITD processing in larger mammals. Additionally, inhibition is modulated by anesthesia, which confounds in vivo experiments that examine the careful interplay of excitatory and inhibitory effects in the MSO. For this reason, these physiological experiments were performed on decerebrate, unanaesthetized animals.
Further investigation of the anatomical organization of inhibitory inputs was carried out as the basis for a comprehensive model of the MSO that incorporates the effects of binaural inhibitory projections to MSO neurons.

Unbiased stereological counts of the MNTB, MSO and subdivisions of the LNTB showed that the MSO and MNTB contain approximately the same number of cells. The main (m)LNTB, posteroventral (pv)LNTB and hilus (h)LNTB are estimated to contain 3800, 1400, and 200 neurons, respectively. The tonotopic organization of the MNTB and MSO shows that in the low-frequency area MSO cells outnumber MNTB cells 2 to 1, suggesting a divergent innervation of the MSO from the MNTB. Injection of the retrograde tracer biotinylated dextran amine into the MSO labeled cells in the MNTB, pvLNTB and mLNTB, defining the important role that these sub-nuclei, and in particular the pvLNTB, play in ITD coding. Computational modeling of a single MSO cell suggests that when two sources of inhibition temporally frame excitation, the coincidence detection window is refined and less sensitive to temporal fluctuations that otherwise might degrade ITD sensitivity. Finally, physiological properties of MNTB cells reveal a heterogeneous population of responses and less precise temporal coding than are found in their inputs, the globular bushy cells.
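The modeling result above, that inhibition "temporally framing" excitation sharpens coincidence detection, can be sketched with simple waveforms. Everything below (Gaussian PSP shapes, amplitudes, timings) is an illustrative assumption, not the dissertation's actual MSO model:

```python
import numpy as np

t = np.linspace(-2e-3, 2e-3, 4001)  # time relative to the excitation peak, seconds

def psp(t, mu, sigma):
    """Gaussian stand-in for a postsynaptic potential waveform."""
    return np.exp(-((t - mu) / sigma) ** 2)

# Excitation flanked by leading and lagging inhibition (illustrative values).
epsp = psp(t, 0.0, 0.4e-3)
ipsp = 0.6 * (psp(t, -0.4e-3, 0.3e-3) + psp(t, 0.4e-3, 0.3e-3))
net = np.clip(epsp - ipsp, 0.0, None)

def half_width(t, y):
    """Width of a waveform at half its peak height."""
    above = t[y >= 0.5 * y.max()]
    return above[-1] - above[0]

# Framing inhibition narrows the window over which binaural inputs can
# summate to threshold, i.e. it refines the coincidence-detection window.
narrowed = half_width(t, net) < half_width(t, epsp)
```

The same comparison run with only one inhibitory source (leading or lagging alone) narrows the window asymmetrically, which is one way the relative timing of MNTB- and LNTB-derived inhibition could shape ITD tuning.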
Mode-locked spike trains in responses of ventral cochlear nucleus chopper and onset neurons to periodic stimuli
We report evidence of mode-locking to the envelope of a periodic stimulus in chopper units of the ventral cochlear nucleus (VCN). Mode-locking is a generalized description of how responses in periodically forced nonlinear systems can be closely linked to the input envelope, while showing temporal patterns of higher order than seen during pure phase-locking. Re-analyzing a previously unpublished dataset of responses to amplitude-modulated tones, we find that 55% of cells (6/11) demonstrated stochastic mode-locking in response to sinusoidally amplitude modulated (SAM) pure tones at 50% modulation depth. At 100% modulation depth, most units (3/4) showed mode-locking. We use interspike interval (ISI) scattergrams to unravel the temporal structure present in chopper mode-locked responses. These responses compared well to a leaky integrate-and-fire (LIF) model of chopper units. Thus the timing of spikes in chopper unit responses to periodic stimuli can be understood in terms of the complex dynamics of periodically forced nonlinear systems. A larger set of onset (33) and chopper (24) units of the VCN also shows mode-locked responses to steady-state vowels and cosine-phase harmonic complexes. However, while 80% of chopper responses to complex stimuli meet our criterion for the presence of mode-locking, only 40% of onset cells show similar complex modes of spike patterns. We found a correlation between a unit's regularity and its tendency to display mode-locked spike trains, as well as a correlation between the number of spikes per cycle and the presence of complex modes of spike patterns. These spiking patterns are sensitive to the envelope as well as the fundamental frequency of complex sounds, suggesting that complex cell dynamics may play a role in encoding periodic stimuli and envelopes in the VCN.
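Mode-locking, spike patterns that repeat with a fixed relationship to the stimulus envelope, can be reproduced with a bare-bones LIF neuron driven by a SAM envelope. This is a deliberately simplified sketch, not the paper's chopper model (which includes noise and more realistic input); all parameter values are invented:

```python
import numpy as np

def lif_sam_response(fm=100.0, depth=1.0, i_max=3.0, tau=5e-3,
                     dt=1e-5, dur=0.5, v_th=1.0):
    """Deterministic leaky integrate-and-fire neuron driven by the envelope
    of a SAM tone (all parameters illustrative)."""
    n = int(round(dur / dt))
    t = np.arange(n) * dt
    drive = 0.5 * i_max * (1.0 + depth * np.sin(2.0 * np.pi * fm * t))
    v, spikes = 0.0, []
    for k in range(n):
        v += (dt / tau) * (drive[k] - v)  # leaky integration toward the drive
        if v >= v_th:                     # threshold crossing: emit a spike
            spikes.append(t[k])
            v = 0.0                       # reset
    return np.array(spikes)

spikes = lif_sam_response()
# Count spikes in each 10-ms modulation cycle: after a brief transient, the
# sequence of per-cycle counts repeats, i.e. the spike train is mode-locked.
edges = np.arange(0.0, 0.5 + 1e-9, 1.0 / 100.0)
per_cycle = np.histogram(spikes, bins=edges)[0]
```

Plotting consecutive ISI pairs from `spikes` (an ISI scattergram, as in the paper) would show a small set of discrete clusters rather than a continuum, the signature of a locked mode.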
Pitch representations in the auditory nerve: two concurrent complex tones
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 39-43).

Pitch differences between concurrent sounds are important cues used in auditory scene analysis and also play a major role in music perception. To investigate the neural codes underlying these perceptual abilities, we recorded from single fibers in the cat auditory nerve in response to two concurrent harmonic complex tones with missing fundamentals and equal-amplitude harmonics. We investigated the efficacy of rate-place and interspike-interval codes to represent both pitches of the two tones, which had fundamental frequency (F0) ratios of 15/14 or 11/9. We relied on the principle of scaling invariance in cochlear mechanics to infer the spatiotemporal response patterns to a given stimulus from a series of measurements made in a single fiber as a function of F0. Templates created by a peripheral auditory model were used to estimate the F0s of double complex tones from the inferred distribution of firing rate along the tonotopic axis. This rate-place representation was accurate for F0s above about 900 Hz. Surprisingly, rate-based F0 estimates were accurate even when the two-tone mixture contained no resolved harmonics, so long as some harmonics were resolved prior to mixing. We also extended methods used previously for single complex tones to estimate the F0s of concurrent complex tones from interspike-interval distributions pooled over the tonotopic axis. The interval-based representation was accurate for F0s below about 900 Hz, where the two-tone mixture contained no resolved harmonics. Together, the rate-place and interval-based representations allow accurate pitch perception for concurrent sounds over the entire range of human voice and cat vocalizations.

by Erik Larsen. S.M.
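The interval-based representation described above pools interspike intervals across the tonotopic array and reads off F0 candidates from the dominant interval peaks. A toy version with synthetic spike trains (the fibers, jitter, and F0 values are invented for illustration; this is not the study's stimuli, estimator, or data):

```python
import numpy as np

rng = np.random.default_rng(1)
f0_a, f0_b = 200.0, 250.0   # hypothetical concurrent fundamentals, Hz

# Synthetic 'fibers': each locks to one F0, with first-order intervals of one
# stimulus period plus timing jitter.
trains = [np.cumsum(1.0 / f0 + rng.normal(0.0, 0.05e-3, 300))
          for f0 in (f0_a, f0_b) for _ in range(20)]

# Pool first-order ISIs across fibers and histogram them.
isis = np.concatenate([np.diff(st) for st in trains])
bin_w = 0.05e-3
bins = np.arange(0.0, 15e-3, bin_w)
counts, edge_vals = np.histogram(isis, bins=bins)
centers = edge_vals[:-1] + 0.5 * bin_w

def window_mass(lo, hi):
    """Total pooled-interval count falling in [lo, hi) seconds."""
    return counts[(centers >= lo) & (centers < hi)].sum()

# Interval mass piles up near both pitch periods (1/250 s = 4 ms and
# 1/200 s = 5 ms), with almost nothing in between: a pooled-interval
# representation of two concurrent F0s.
mass_4ms = window_mass(3.8e-3, 4.2e-3)
mass_5ms = window_mass(4.8e-3, 5.2e-3)
mass_mid = window_mass(4.3e-3, 4.7e-3)
```

A real estimator would also weight higher-order intervals and harmonically related peaks, but locating the dominant interval modes is the core of the idea.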
Dual Coding of Frequency Modulation in the Ventral Cochlear Nucleus.
Frequency modulation (FM) is a common acoustic feature of natural sounds and is known to play a role in robust sound source recognition. Auditory neurons show precise stimulus-synchronized discharge patterns that may be used for the representation of low-rate FM. However, it remains unclear whether this representation is based on synchronization to slow temporal envelope (ENV) cues resulting from cochlear filtering or on phase locking to faster temporal fine structure (TFS) cues. To investigate the plausibility of these encoding schemes, single units of the ventral cochlear nucleus of guinea pigs of either sex were recorded in response to sine FM tones centered at the unit's best frequency (BF). The results show that, in contrast to high-BF units, for modulation depths within the receptive field, low-BF units (<4 kHz) demonstrate good phase locking to TFS. For modulation depths extending beyond the receptive field, the discharge patterns follow the ENV and fluctuate at the modulation rate. The receptive field proved to be a good predictor of the ENV responses for most primary-like and chopper units. The current in vivo data also reveal a high level of diversity in responses across unit types: TFS cues are mainly conveyed by low-frequency and primary-like units, and ENV cues by chopper and onset units. The diversity of responses exhibited by cochlear nucleus neurons provides a neural basis for a dual-coding scheme of FM in the brainstem based on both ENV and TFS cues.

SIGNIFICANCE STATEMENT: Natural sounds, including speech, convey informative temporal modulations in frequency. Understanding how the auditory system represents those frequency modulations (FM) has important implications, as robust sound source recognition depends crucially on the reception of low-rate FM cues.
Here, we recorded 115 single-unit responses from the ventral cochlear nucleus in response to FM and provide the first physiological evidence of a dual-coding mechanism of FM via synchronization to temporal envelope cues and phase locking to temporal fine structure cues. We also demonstrate a diversity of neural responses with different coding specializations. These results support the dual-coding scheme proposed by psychophysicists to account for FM sensitivity in humans and provide new insights into how this might be implemented in the early stages of the auditory pathway.
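Synchronization to ENV versus TFS cues is conventionally quantified with vector strength (Goldberg and Brown's measure): the length of the mean resultant vector of spike times expressed as phases of a test frequency. A small sketch with an invented envelope-locked spike train (frequencies and jitter are illustrative; this is not the paper's dataset or analysis code):

```python
import numpy as np

def vector_strength(spike_times, freq):
    """Vector strength: 1 = perfect phase locking to `freq`, near 0 = none."""
    phases = 2.0 * np.pi * freq * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(2)
fm = 40.0      # modulation (envelope) rate, Hz
fc = 3017.0    # carrier (fine-structure) frequency, Hz
# Toy ENV-coding unit: one spike per envelope cycle with 1-ms timing jitter.
spikes = np.arange(200) / fm + rng.normal(0.0, 1e-3, 200)
vs_env = vector_strength(spikes, fm)  # high: locked to the envelope
vs_tfs = vector_strength(spikes, fc)  # near chance: jitter abolishes TFS locking
```

The same millisecond jitter that barely dents envelope locking at 40 Hz completely scrambles phase at a 3-kHz carrier, which is why TFS coding is confined to low-frequency, temporally precise units.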