
    Short Term Depression Unmasks the Ghost Frequency

    Short Term Plasticity (STP) has been shown to exist extensively in synapses throughout the brain. At the level of a single synapse its function is relatively clear: it alters the probability of synaptic transmission on short time scales. However, it is still unclear what effect STP has on the dynamics of neural networks. We show, using a novel dynamic STP model, that Short Term Depression (STD) can affect the phase of frequency-coded input such that small networks can perform temporal signal summation and determination with high accuracy. We show that this property of STD can readily solve the problem of the ghost frequency, the perceived pitch of a harmonic complex in the absence of the base frequency. Additionally, we demonstrate that this property can explain dynamics in larger networks. By means of two models, one of chopper neurons in the Ventral Cochlear Nucleus and one of a cortical microcircuit with inhibitory Martinotti neurons, it is shown that the dynamics in these microcircuits can reliably be reproduced using STP. Our model of STP gives important insights into the potential roles of STP in self-regulation of cortical activity and long-range afferent input in neuronal microcircuits.
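The ghost-frequency phenomenon the abstract addresses can be demonstrated with a few lines of signal processing: a harmonic complex of 400, 600 and 800 Hz contains no energy at 200 Hz, yet its waveform repeats every 5 ms, so its first major autocorrelation peak sits at the missing 200 Hz fundamental. A minimal sketch with illustrative parameters (not the paper's STP model):

```python
import numpy as np

fs = 16000
t = np.arange(0, 0.1, 1 / fs)
# Harmonic complex at 400, 600 and 800 Hz: harmonics of a 200 Hz
# fundamental that is itself absent from the signal.
x = sum(np.sin(2 * np.pi * f * t) for f in (400, 600, 800))

# The waveform repeats every 5 ms, so the first major autocorrelation
# peak away from zero lag falls at the missing 200 Hz fundamental.
ac = np.correlate(x, x, mode="full")[len(x) - 1:]
lag = np.argmax(ac[40:]) + 40     # skip the zero-lag peak (< 2.5 ms)
pitch = fs / lag                  # perceived "ghost" pitch, in Hz
```

The same periodicity cue is what a phase-sensitive mechanism such as STD could exploit in a spiking network.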

    Neural Models of Subcortical Auditory Processing

    An important feature of the auditory system is its ability to distinguish many simultaneous sound sources. The primary goal of this work was to understand how a robust, preattentive analysis of the auditory scene is accomplished by the subcortical auditory system. Reasonably accurate modelling of the morphology and organisation of the relevant auditory nuclei was seen as being of great importance. The formulation of plausible models and their subsequent simulation was found to be invaluable in elucidating biological processes and in highlighting areas of uncertainty. In the thesis, a review of important aspects of mammalian auditory processing is presented and used as a basis for the subsequent modelling work. For each aspect of auditory processing modelled, psychophysical results are described and existing models reviewed, before the models used here are described and simulated. Auditory processes which are modelled include the peripheral system, the production of tonotopic maps of the spectral content of complex acoustic stimuli, and maps of modulation frequency or periodicity. A model of the formation of sequential associations between successive sounds is described, and the model is shown to be capable of emulating a wide range of psychophysical behaviour. The grouping of related spectral components and the development of pitch perception are also investigated. Finally, a critical assessment of the work and ideas for future developments are presented. The principal contributions of this work are the further development of a model for pitch perception and the development of a novel architecture for the sequential association of those spectral groups. In the process of developing these ideas, further insights into subcortical auditory processing were gained, and explanations for a number of puzzling psychophysical characteristics suggested. Royal Naval Engineering College, Manadon, Plymouth.

    Coincidence detection in the cochlear nucleus : implications for the coding of pitch

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from the PDF version of thesis. Includes bibliographical references (p. 165-177). The spatio-temporal pattern in the auditory nerve (AN), i.e. the temporal pattern of AN fiber activity across the tonotopic axis, provides cues to important features in sounds such as pitch, loudness, and spatial location. These spatio-temporal cues may be extracted by central neurons in the cochlear nucleus (CN) that receive inputs from AN fibers innervating different cochlear regions and are sensitive to their relative timing. One possible mechanism for this extraction is cross-frequency coincidence detection (CD), in which a central neuron converts the degree of cross-frequency coincidence in the AN into a rate response by preferentially firing when its AN inputs across the tonotopic axis discharge in synchrony. We implemented a CD model receiving AN inputs from varying extents of the tonotopic axis, and compared responses of model CD cells with those of single units recorded in the CN of the anesthetized cat. We used Huffman stimuli, which have flat magnitude spectra and a single phase transition, to systematically manipulate the relative timing across AN fibers and to evaluate the sensitivity of model CD cells and CN units to the spatio-temporal pattern of AN discharges. Using a maximum likelihood approach, we found that certain unit types (primary-like-with-notch and some phase lockers) had responses consistent with cross-frequency CD cells. Some of these CN units provide input to neurons in a binaural circuit that process cues for sound localization and are sensitive to interaural level differences. A possible functional role of a cross-frequency CD mechanism in the CN is to increase the dynamic range of these binaural neurons. However, many other CN units had responses more consistent with AN fibers than with CD cells.
    We hypothesized that CN units resembling cross-frequency CD cells (as determined by their responses to Huffman stimuli) would convert spatio-temporal cues to pitch in the AN into rate cues that are robust with level. We found that, in response to harmonic complex tones, cross-frequency CD cells and some CN units (primary-like-with-notch and choppers) maintained robust rate cues at high levels compared to AN fibers, suggesting that at least some CN neurons extend the dynamic range of rate representations of pitch beyond that found in AN fibers. However, there was no obvious correlation between robust rate cues in individual CN units and similarity to cross-frequency CD cells as determined by responses to Huffman stimuli. It is likely that a model including more realistic inputs, membrane channels, and spiking mechanisms, or additional mechanisms such as lateral inhibition or spatial and temporal summation over spatially distributed inputs, would provide insight into the neural mechanisms that give rise to the robust rate cues observed in some CN units. By Grace I. Wang, Ph.D.
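The cross-frequency CD mechanism described above can be caricatured with Bernoulli spike trains: a detector that fires only when several tonotopically distinct inputs discharge in the same time bin converts shared temporal structure into firing rate. The rates, bin width and threshold below are illustrative assumptions, not values fitted to the thesis data:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_inputs, n_bins = 1e-3, 6, 5000   # 1 ms bins, 6 AN inputs, 5 s

def an_spikes(sync):
    # Toy auditory-nerve spike trains (hypothetical parameters): `sync`
    # in [0, 1] sets how strongly all fibres lock to a shared 100 Hz
    # modulation, at a fixed mean rate of 100 spikes/s per fibre.
    t = np.arange(n_bins) * dt
    drive = 0.5 * (1 + np.sin(2 * np.pi * 100 * t))
    rate = 200.0 * sync * drive + 100.0 * (1 - sync)
    return rng.random((n_inputs, n_bins)) < rate * dt

def cd_rate(spikes, threshold=4):
    # Coincidence detector: fire in any bin where at least `threshold`
    # of the tonotopically distinct inputs discharge together.
    return np.sum(spikes.sum(axis=0) >= threshold) / (n_bins * dt)

synced_rate = cd_rate(an_spikes(sync=1.0))       # coincident inputs
independent_rate = cd_rate(an_spikes(sync=0.0))  # same mean rate, unlocked
```

Although the mean input rate is identical in both conditions, the detector's output rate is several-fold higher for coincident inputs, which is the rate-conversion principle the CD model relies on.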

    Target-Specific IPSC Kinetics Promote Temporal Processing in Auditory Parallel Pathways

    The acoustic environment contains biologically relevant information on time scales from microseconds to tens of seconds. The auditory brainstem nuclei process this temporal information through parallel pathways that originate in the cochlear nucleus from different classes of cells. While the roles of ion channels and excitatory synapses in temporal processing have been well studied, the contribution of inhibition is less well understood. Here, we show in CBA/CaJ mice that the two major projection neurons of the ventral cochlear nucleus, the bushy and T-stellate cells, receive glycinergic inhibition with different synaptic conductance time courses. Bushy cells, which provide precisely timed spike trains used in sound localization and pitch identification, receive slow inhibitory inputs. In contrast, T-stellate cells, which encode slower envelope information, receive inhibition that is eight-fold faster. Both types of inhibition improve the precision of spike timing, but they engage different cellular mechanisms and operate on different time scales. Computer models reveal that slow IPSCs in bushy cells can improve spike timing on the scale of tens of microseconds. While fast and slow IPSCs in T-stellate cells improve spike timing on the scale of milliseconds, only fast IPSCs can enhance the detection of narrowband acoustic signals in a complex background. Our results suggest that target-specific IPSC kinetics are critical for the segregated parallel processing of temporal information from the sensory environment.
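The paper's central contrast, slow IPSCs in bushy cells versus roughly eight-fold faster IPSCs in T-stellate cells, can be visualized with the standard dual-exponential conductance waveform. The time constants here are illustrative only, chosen to reproduce the eight-fold decay ratio rather than the measured values:

```python
import numpy as np

dt = 0.01                            # time step, ms
t = np.arange(0.0, 50.0, dt)         # 50 ms window

def ipsc(tau_rise, tau_decay):
    # Normalised dual-exponential IPSC conductance waveform.
    g = np.exp(-t / tau_decay) - np.exp(-t / tau_rise)
    return g / g.max()

# Illustrative time constants (ms), not the measured values: the point
# is the roughly eight-fold difference in decay between cell types.
g_bushy = ipsc(tau_rise=0.4, tau_decay=8.0)      # slow, bushy-cell-like
g_stellate = ipsc(tau_rise=0.4, tau_decay=1.0)   # ~8x faster, T-stellate-like

def fwhm(g):
    # Full width at half maximum (ms) as a crude summary of kinetics.
    return np.sum(g >= 0.5) * dt
```

Plotting `g_bushy` against `g_stellate`, or comparing their `fwhm` values, makes the target-specific difference in inhibitory time course immediately visible.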

    A Comparative Study of Computational Models of Auditory Peripheral System

    An in-depth study of computational models of the auditory peripheral system from three different research groups, Carney, Meddis and Hemmert, is presented here. The aim is to find out which model fits the data best and which properties of the models are relevant for speech recognition. As a first approximation, different tests with tones were performed on seven models. We then evaluated the responses of these models to speech: two models were studied in depth through an automatic speech recognition (ASR) system, in clean and noisy backgrounds and over a range of sound levels. Post-stimulus time histograms show how the models with improved offset adaptation exhibit the 'dead time'. The synchronization evaluation for tones and modulated signals likewise favoured the models with offset adaptation. Tuning curves and Q10dB values (together with the ASR results), on the contrary, indicate that sharp frequency selectivity is not a property needed for speech recognition. The ASR evaluation further demonstrated that models with offset adaptation outperform the others, and that the choice of cat or human tuning is immaterial for speech recognition. From these results we conclude that the model that best fits the data is the one described by Zilany et al. (2009), and that the indispensable property for speech recognition is good offset adaptation, which yields better synchronization and better ASR results. For the ASR system it makes little difference whether offset adaptation comes from a shift of the auditory nerve response or from power-law adaptation in the synapse. Vendrell Llopis, N. (2010). A Comparative Study of Computational Models of Auditory Peripheral System. http://hdl.handle.net/10251/20433.
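The synchronization evaluation mentioned above is conventionally quantified by vector strength. A sketch of the standard computation on synthetic spike trains (these are not the study's model outputs):

```python
import numpy as np

def vector_strength(spike_times, freq):
    # Standard synchronisation index: place each spike on the unit
    # circle at its stimulus phase and take the mean resultant length
    # (1 = perfect phase locking, 0 = no locking).
    phases = 2 * np.pi * freq * np.asarray(spike_times)
    return np.hypot(np.cos(phases).mean(), np.sin(phases).mean())

f0 = 250.0                                   # stimulus frequency, Hz
locked = np.arange(200) / f0                 # one spike per cycle, fixed phase
rng = np.random.default_rng(1)
unlocked = rng.uniform(0.0, 200 / f0, 200)   # same spike count, random times

vs_locked = vector_strength(locked, f0)      # close to 1
vs_unlocked = vector_strength(unlocked, f0)  # close to 0
```

A model with good offset adaptation would show higher vector strength to tones and modulated signals, which is the metric behaviour the comparison above rests on.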

    Biophysical modeling of a cochlear implant system: progress on closed-loop design using a novel patient-specific evaluation platform

    The modern cochlear implant is one of the most successful neural stimulation devices, which partially mimics the workings of the auditory periphery. In the last few decades it has created a paradigm shift in hearing restoration for the deaf population, and there are now more than 324,000 cochlear implant users. Despite this great success, there is great disparity in patient outcomes without a clear understanding of the aetiology of this variance in implant performance. Furthermore, speech recognition in adverse conditions and music appreciation are still not attainable with today's commercial technology. This motivates research into the next generation of cochlear implants, which takes advantage of recent developments in electronics, neuroscience, nanotechnology, micro-mechanics, polymer chemistry and molecular biology to deliver high-fidelity sound. The main difficulties in determining the root of the problem in the cases where the cochlear implant does not perform well are twofold: first, there is no clear paradigm for how electrical stimulation is perceived as sound by the brain, and second, there is limited understanding of the plasticity effects, or learning, of the brain in response to electrical stimulation. These significant knowledge limitations impede the design of novel cochlear implant technologies, as the technical specifications that can lead to better-performing implants remain undefined. The motivation of the work presented in this thesis is to compare and contrast cochlear implant neural stimulation with the operation of the physiologically healthy auditory periphery up to the level of the auditory nerve. As such, the design of novel cochlear implant systems can become feasible by gaining insight into the question `how well does a specific cochlear implant system approximate the healthy auditory periphery?'
    circumventing the necessity of a complete understanding of the brain's comprehension of patterned electrical stimulation delivered from a generic cochlear implant device. A computational model, termed the Digital Cochlea Stimulation and Evaluation Tool (‘DiCoStET’), has been developed to provide an objective estimate of cochlear implant performance based on neuronal activation measures, such as vector strength and average activation. A patient-specific 3D cochlea geometry is generated using a model derived from a single anatomical measurement of the patient, obtained by non-invasive high-resolution computed tomography (HRCT), together with anatomically invariant human metrics and relations. Human measurements of the neural route within the inner ear enable an innervation pattern to be modelled which spans the space from the organ of Corti to the spiral ganglion, subsequently descending into the auditory nerve bundle. An electrode is inserted into the cochlea at a depth determined by the user of the tool. The geometric relation between the stimulation sites on the electrode and the spiral ganglion is used to estimate an activating function that is unique to the specific patient's cochlear shape and electrode placement. This `transfer function', so to speak, between electrode and spiral ganglion serves as a `digital patient' for validating novel cochlear implant systems. The novel computational tool is intended for use by bioengineers, surgeons, audiologists and neuroscientists alike. In addition to ‘DiCoStET’, a second computational model is presented in this thesis, aimed at enhancing the understanding of the physiological mechanisms of hearing, specifically the workings of the auditory synapse. The purpose of this model is to provide insight into the sound encoding mechanisms of the synapse. A hypothetical mechanism of neurotransmitter vesicle release is suggested that permits the auditory synapse to encode the temporal patterns of sound separately from sound intensity.
    DiCoStET was used to examine the performance of two different types of filters used for spectral analysis in the cochlear implant system, the Gammatone type filter and the Butterworth type filter. The model outputs suggest that the Gammatone type filter performs better than the Butterworth type filter. Furthermore, two stimulation strategies, Continuous Interleaved Stimulation (CIS) and Asynchronous Interleaved Stimulation (AIS), have been compared. The estimated spatio-temporal patterns of neuronal stimulation for each strategy suggest that the overall stimulation pattern is not greatly affected by the change in temporal sequence. However, the finer detail of neuronal activation differs between the two strategies, and when compared to healthy neuronal activation patterns the conjecture is made that the sequential stimulation of CIS hinders the transmission of sound fine-structure information to the brain. A broader outcome of the two models developed is the feasibility of collaborative work across disciplines, especially electrical engineering, auditory physiology and neuroscience, for the development of novel cochlear implant systems. This is achieved by using the concept of a `digital patient' whose artificial neuronal activation is compared to a healthy scenario in a computationally efficient manner that allows practical simulation times.
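The gammatone filter that the thesis favours over the Butterworth type has a closed-form impulse response that makes the comparison concrete. A sketch of the standard 4th-order gammatone with the Glasberg & Moore ERB bandwidth approximation (sampling rate and centre frequency are illustrative, not the thesis's settings):

```python
import numpy as np

fs = 16000  # sampling rate, Hz

def gammatone_ir(fc, order=4, dur=0.05):
    # Standard 4th-order gammatone impulse response,
    #   g(t) = t^(n-1) * exp(-2*pi*b*t) * cos(2*pi*fc*t),
    # with bandwidth b from the Glasberg & Moore ERB approximation.
    t = np.arange(0, dur, 1 / fs)
    b = 1.019 * (24.7 + 0.108 * fc)          # ERB-derived bandwidth, Hz
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.abs(g).max()

h = gammatone_ir(1000.0)
# The magnitude response should peak at the 1 kHz centre frequency.
H = np.abs(np.fft.rfft(h, 4 * fs))
peak_hz = np.argmax(H) * fs / (4 * fs)
```

Unlike a Butterworth bandpass, the gammatone's asymmetric, rounded passband mimics auditory-filter tuning, which is one plausible reason it scored better in the DiCoStET evaluation.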

    Modeling auditory evoked potentials to complex stimuli


    Treatise on Hearing: The Temporal Auditory Imaging Theory Inspired by Optics and Communication

    A new theory of mammalian hearing is presented, which accounts for the auditory image in the midbrain (inferior colliculus) of objects in the acoustical environment of the listener. It is shown that the ear is a temporal imaging system that comprises three transformations of the envelope functions: cochlear group-delay dispersion, cochlear time lensing, and neural group-delay dispersion. These elements are analogous to the optical transformations in vision: diffraction between the object and the eye, spatial lensing by the lens, and a second diffraction between the lens and the retina. Unlike the eye, it is established that the human auditory system is naturally defocused, so that coherent stimuli are unaffected by the defocus, whereas completely incoherent stimuli are impacted by it and may be blurred by design. It is argued that the auditory system can use this differential focusing to enhance or degrade the images of real-world acoustical objects that are partially coherent. The theory is founded on coherence and temporal imaging theories that were adopted from optics. In addition to the imaging transformations, the corresponding inverse-domain modulation transfer functions are derived and interpreted with consideration of the nonuniform neural sampling operation of the auditory nerve. These ideas are used to rigorously introduce the concepts of sharpness and blur in auditory imaging, auditory aberrations, and auditory depth of field. In parallel, ideas from communication theory are used to show that the organ of Corti functions as a multichannel phase-locked loop (PLL) that constitutes the point of entry for auditory phase locking and hence conserves the signal coherence. It provides an anchor for dual coherent and noncoherent auditory detection in the auditory brain that culminates in auditory accommodation. Implications for hearing impairments are discussed as well. Comment: 603 pages, 131 figures, 13 tables, 1570 references.
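The multichannel phase-locked loop claim can be illustrated with a minimal single-channel digital PLL that acquires the phase and frequency of an input tone. This is a generic textbook PLL with illustrative gains, not the mechanism proposed in the treatise:

```python
import numpy as np

fs = 16000.0
f_in, f0 = 1005.0, 1000.0        # input tone vs. oscillator rest frequency
t = np.arange(int(0.5 * fs)) / fs
x = np.sin(2 * np.pi * f_in * t)

# Generic second-order digital PLL: mixer phase detector,
# proportional-integral loop filter, numerically controlled oscillator.
kp, ki = 50.0, 2000.0            # illustrative loop gains
phi, integ = 0.0, 0.0
est = np.empty_like(t)
for n, xn in enumerate(x):
    err = xn * np.cos(phi)       # phase detector; double-frequency
                                 # ripple is averaged out by the loop
    integ += ki * err / fs       # integral branch tracks the offset
    freq = f0 + integ + kp * err  # instantaneous oscillator frequency, Hz
    phi += 2 * np.pi * freq / fs
    est[n] = freq

locked_freq = est[-4000:].mean()  # settles near the 1005 Hz input
```

Once locked, the oscillator's phase tracks the stimulus, which is the sense in which a PLL "conserves the signal coherence" at its point of entry.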

    Investigations of the effects of sequential tones on the responses of neurons in the guinea pig primary auditory cortex

    The auditory system needs to be able to analyse complex acoustic waveforms. Many ecologically relevant sounds, for example speech and animal calls, vary over time. This thesis investigates how the auditory system processes sounds that occur sequentially. The focus is on how the responses of neurons in the primary auditory cortex ‘adapt’ when there are two or more tones. When two sounds are presented in quick succession, the neural response to the second sound can decrease relative to when it is presented alone. Previous two-tone experiments have not determined whether the frequency tuning of cortical suppression is set by the receptive field of the neuron or by the exact relationship between the frequencies of the two tones. In the first experiment, it is shown that forward suppression does depend on the relationship between the two tones. This confirms that cortical forward suppression is ‘frequency specific’ at the shortest possible timescale. Sequences of interleaved tones with two different frequencies have been used to investigate the perceptual grouping of sequential sounds. A neural correlate of this auditory streaming has been demonstrated in awake monkeys, birds and bats. The second experiment investigates the responses of neurons in the primary auditory cortex of anaesthetised guinea pigs to alternating tone sequences. The responses are generally consistent with awake recordings, though adaptation was more rapid, and at fast presentation rates responses were often poorly synchronised to the tones. In the third experiment, the way in which responses to tone sequences build up is investigated by varying the number of tones that are presented before a probe tone. The suppression that is observed is again strongest when the frequencies of the two tones are similar. However, the frequencies to which a neuron preferentially responds remain unchanged irrespective of the frequency and number of preceding tones. This implies that, through frequency-specific adaptation, neurons become more selective to their preferred stimuli in the presence of a preceding stimulus.