8 research outputs found
The synaptic representation of sound source location in primary auditory cortex
A key function of the auditory system is to provide reliable information about the location of sound sources. Here, we describe how sound location is represented by synaptic input arriving onto pyramidal cells within auditory cortex by combining free-field acoustic stimulation in the frontal azimuthal plane with in vivo whole-cell recordings. We found that subthreshold activity was panoramic in that EPSPs could be evoked from all locations in all cells. Regardless of the sound location that evoked the largest EPSP, we observed a slowing in the EPSP slope along the contralateral–ipsilateral plane that was reflected in a temporal sequence of peak EPSP times. Contralateral sounds evoked EPSPs with earlier peak times and consequently generated action potential firing with shorter latencies than ipsilateral sounds. Thus, whereas spiking probability reflected the region of space evoking the largest EPSP, across the population, synaptic inputs enforced a gradient of spike latency and precision along the horizontal axis. Therefore, within auditory cortex and regardless of preferred location, the time window of synaptic integration reflects sound source location and ensures that spatial acoustic information is represented by relative timings of pyramidal cell output.
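The latency code described in this abstract can be made concrete with a small toy simulation. The Python sketch below is only an illustration under invented parameters (spike latency assumed to grow roughly linearly from contralateral to ipsilateral azimuth, with larger jitter for ipsilateral sounds); it is not the authors' model or analysis, and all numbers are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_latency_ms(azimuth_deg, base_ms=15.0, slope_ms_per_deg=0.05, jitter_ms=1.0):
    """Hypothetical latency model: earlier, more precise spikes for contralateral sounds."""
    jitter = jitter_ms * (1.0 + (azimuth_deg + 90.0) / 180.0)   # jitter grows toward ipsilateral space
    return base_ms + slope_ms_per_deg * (azimuth_deg + 90.0) + rng.normal(0.0, jitter)

def decode_azimuth(latencies_ms, base_ms=15.0, slope_ms_per_deg=0.05):
    """Read out azimuth from the mean relative spike latency across a population."""
    return np.mean(np.asarray(latencies_ms) - base_ms) / slope_ms_per_deg - 90.0

true_azimuth = -45.0                                            # a contralateral source
population = [spike_latency_ms(true_azimuth) for _ in range(50)]
print(f"decoded azimuth ~ {decode_azimuth(population):.1f} deg (true {true_azimuth:.0f} deg)")
```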
Hierarchical Bayesian level set inversion
The level set approach has proven widely successful in the study of inverse problems for interfaces, since its systematic development in the 1990s. Recently it has been employed in the context of Bayesian inversion, allowing for the quantification of uncertainty within the reconstruction of interfaces. However, the Bayesian approach is very sensitive to the length and amplitude scales in the prior probabilistic model. This paper demonstrates how the scale-sensitivity can be circumvented by means of a hierarchical approach, using a single scalar parameter. Together with careful consideration of the development of algorithms which encode probability measure equivalences as the hierarchical parameter is varied, this leads to well-defined Gibbs-based MCMC methods found by alternating Metropolis-Hastings updates of the level set function and the hierarchical parameter. These methods demonstrably outperform non-hierarchical Bayesian level set methods.
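The abstract itself contains no algorithmic detail, so the following Python sketch is only a minimal, hedged illustration of the kind of alternating Metropolis-within-Gibbs scheme it describes, on a made-up one-dimensional toy problem: a pCN Metropolis-Hastings update of the level set function alternates with a random-walk update of a scalar length-scale parameter, carried out in whitened (non-centred) variables so that the prior on the whitened variable does not change as the hierarchical parameter varies. The kernel, likelihood, flat prior on the scalar parameter, and all numerical values are assumptions for illustration, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64                                    # grid points on [0, 1]
x = np.linspace(0.0, 1.0, n)

def prior_sqrt_cov(tau):
    """Symmetric square root of a stationary Gaussian covariance whose inverse
    length scale is the hierarchical parameter tau (a Matern-3/2 style choice,
    assumed here for illustration)."""
    d = np.abs(x[:, None] - x[None, :])
    C = (1.0 + tau * d) * np.exp(-tau * d)
    w, V = np.linalg.eigh(C + 1e-10 * np.eye(n))
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

def log_likelihood(u, y, obs_idx, noise_std=0.1):
    """Gaussian misfit of the piecewise-constant field defined by the zero level set of u."""
    kappa = np.where(u > 0.0, 2.0, 1.0)
    return -0.5 * np.sum((y - kappa[obs_idx]) ** 2) / noise_std**2

# Synthetic data: a "true" interface at x = 0.5, observed at a few points (illustration only).
obs_idx = rng.choice(n, size=15, replace=False)
y = np.where(x[obs_idx] > 0.5, 2.0, 1.0) + 0.1 * rng.normal(size=obs_idx.size)

# Metropolis-within-Gibbs in the whitened (non-centred) variables (xi, tau); a flat
# prior on tau > 0 is assumed purely for brevity.
xi, tau = rng.normal(size=n), 20.0
beta, step_tau = 0.2, 2.0                 # pCN step for xi, random-walk step for tau
for _ in range(2000):
    L = prior_sqrt_cov(tau)
    # (1) pCN Metropolis-Hastings update of the whitened level set variable xi given tau
    xi_prop = np.sqrt(1.0 - beta**2) * xi + beta * rng.normal(size=n)
    if np.log(rng.uniform()) < log_likelihood(L @ xi_prop, y, obs_idx) - log_likelihood(L @ xi, y, obs_idx):
        xi = xi_prop
    # (2) random-walk Metropolis-Hastings update of the hierarchical parameter tau given xi
    tau_prop = tau + step_tau * rng.normal()
    if tau_prop > 0.0:
        log_ratio = log_likelihood(prior_sqrt_cov(tau_prop) @ xi, y, obs_idx) - log_likelihood(L @ xi, y, obs_idx)
        if np.log(rng.uniform()) < log_ratio:
            tau = tau_prop

u = prior_sqrt_cov(tau) @ xi              # one posterior sample of the level set function
print(f"sampled tau = {tau:.1f}, interface approximately at x = {x[np.argmin(np.abs(u))]:.2f}")
```

Working in the whitened variable xi keeps its prior a fixed standard Gaussian for every value of tau, which is one simple way to respect the measure-equivalence issue the abstract highlights when the length-scale parameter is moved.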
Frequency-Invariant Representation of Interaural Time Differences in Mammals
Interaural time differences (ITDs) are the major cue for localizing low-frequency sounds. The activity of neuronal populations in the brainstem encodes ITDs with an exquisite temporal acuity of about . The response of single neurons, however, also changes with other stimulus properties such as the spectral composition of the sound. The influence of stimulus frequency differs greatly across neurons, and it is thus unclear how populations of neurons encode ITDs independently of stimulus frequency. Here we fitted a statistical model to single-cell rate responses of the dorsal nucleus of the lateral lemniscus. The model was used to evaluate the impact of single-cell response characteristics on the frequency-invariant mutual information between rate response and ITD. We found a rough correspondence between the measured cell characteristics and those predicted by computing mutual information. Furthermore, we studied two readout mechanisms, a linear classifier and a two-channel rate-difference decoder. The latter turned out to be better suited to decode the population patterns obtained from the fitted model.
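As a rough illustration of the second readout mentioned above, the Python sketch below implements a toy two-channel rate-difference decoder: each "hemispheric" channel's summed rate is modelled as an oppositely sloped sigmoid of ITD, and ITD is read out by inverting the calibrated difference of the two channels. The sigmoid tuning, the noise model, and all parameters are assumptions for illustration only, not the fitted model from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def population_rates(itd_us, gain=50.0, slope=0.01, noise=2.0):
    """Toy summed rates of two hemispheric channels as opposite sigmoids of ITD (microseconds)."""
    left = gain / (1.0 + np.exp(-slope * itd_us)) + noise * rng.normal()
    right = gain / (1.0 + np.exp(+slope * itd_us)) + noise * rng.normal()
    return left, right

# Calibrate the mapping from rate difference to ITD on a noise-free training grid,
# then decode a noisy observation by nearest calibrated difference.
itd_grid = np.linspace(-500.0, 500.0, 101)
calib = np.array([np.subtract(*population_rates(itd, noise=0.0)) for itd in itd_grid])

def decode_itd(left_rate, right_rate):
    return itd_grid[np.argmin(np.abs(calib - (left_rate - right_rate)))]

true_itd = 150.0
l, r = population_rates(true_itd)
print(f"decoded ITD ~ {decode_itd(l, r):.0f} us (true {true_itd:.0f} us)")
```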
