Stimulus-invariant processing and spectrotemporal reverse correlation in primary auditory cortex
The spectrotemporal receptive field (STRF) provides a versatile and
integrated, spectral and temporal, functional characterization of single cells
in primary auditory cortex (AI). In this paper, we explore the origin of, and
relationship between, different ways of measuring and analyzing an STRF. We
demonstrate that STRFs measured using a spectrotemporally diverse array of
broadband stimuli -- such as dynamic ripples, spectrotemporally white noise,
and temporally orthogonal ripple combinations (TORCs) -- are very similar,
confirming earlier findings that the STRF is a robust linear descriptor of the
cell. We also present a new deterministic analysis framework that employs the
Fourier series to describe the spectrotemporal modulations contained in the
stimuli and responses. Additional insights into the STRF measurements,
including the nature and interpretation of measurement errors, are presented
using the Fourier transform, coupled to singular-value decomposition (SVD), and
variability analyses including bootstrap. The results promote the utility of
the STRF as a core functional descriptor of neurons in AI.
Comment: 42 pages, 8 Figures; to appear in Journal of Computational Neuroscience
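The reverse-correlation idea behind the STRF measurements can be illustrated numerically. The sketch below is a toy, not the paper's analysis pipeline: it assumes a small synthetic, separable (rank-1) STRF, a spectrotemporally white stimulus, and a purely linear response, then recovers the STRF by cross-correlating response with stimulus and checks separability with the SVD.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all dimensions hypothetical): a separable "ground truth" STRF,
# a white-noise spectrogram stimulus, and a purely linear response.
n_freq, n_lag, n_time = 8, 10, 5000
spectral = np.exp(-0.5 * ((np.arange(n_freq) - 4) / 1.5) ** 2)
temporal = np.sin(np.linspace(0, np.pi, n_lag))
strf_true = np.outer(spectral, temporal)          # rank-1, i.e. separable

stim = rng.standard_normal((n_freq, n_time))      # spectrotemporally white
rate = np.zeros(n_time)
for lag in range(n_lag):
    # rate[t] += sum_f strf[f, lag] * stim[f, t - lag]
    rate[lag:] += strf_true[:, lag] @ stim[:, :n_time - lag]

# Reverse correlation: for a white stimulus, cross-correlating the response
# with the stimulus recovers the STRF (up to estimation noise).
strf_est = np.zeros((n_freq, n_lag))
for lag in range(n_lag):
    strf_est[:, lag] = stim[:, :n_time - lag] @ rate[lag:] / (n_time - lag)

# SVD separability check: a fully separable STRF has one dominant
# singular value, so this index should be close to 1.
s = np.linalg.svd(strf_est, compute_uv=False)
alpha = s[0] ** 2 / np.sum(s ** 2)
```

Longer recordings shrink the estimation noise, which is one way to see why the paper's variability analyses (e.g. bootstrap) matter for judging STRF quality.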
The MirrorNet: Learning Audio Synthesizer Controls Inspired by Sensorimotor Interaction
Experiments to understand the sensorimotor neural interactions in the human
cortical speech system support the existence of a bidirectional flow of
interactions between the auditory and motor regions. Their key function is to
enable the brain to `learn' how to control the vocal tract for speech
production. This idea is the impetus for the recently proposed "MirrorNet", a
constrained autoencoder architecture. In this paper, the MirrorNet is applied
to learn, in an unsupervised manner, the controls of a specific audio
synthesizer (DIVA) to produce melodies only from their auditory spectrograms.
The results demonstrate how the MirrorNet discovers the synthesizer parameters
needed to generate melodies that closely resemble the originals, generalizes to
unseen melodies, and can even determine the best set of parameters to
approximate renditions of complex piano melodies generated by a different
synthesizer. This generalizability illustrates the MirrorNet's potential to
discover, from sensory data alone, the controls of arbitrary motor plants.
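The core idea of inverting a synthesizer from its audio output can be sketched in miniature. This is not the MirrorNet (which learns a neural forward model of a non-differentiable synthesizer such as DIVA, unsupervised); it assumes a made-up differentiable stand-in synthesizer so that the analysis-by-synthesis loop can use analytic gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in "synthesizer": a fixed nonlinear map from 3 control
# parameters to a 64-bin spectrum (nothing here models the real DIVA).
F = rng.standard_normal((64, 3))

def synth(c):
    return np.tanh(F @ c)

c_true = np.array([0.5, -0.3, 0.8])    # unknown controls to recover
target = synth(c_true)                 # the only observation available

# Analysis-by-synthesis: adjust guessed controls by gradient descent until
# the synthesizer's output matches the target spectrum. (The MirrorNet
# achieves the same loop through a *learned* forward model instead.)
c = np.zeros(3)
lr = 0.002
for _ in range(5000):
    out = np.tanh(F @ c)
    grad = F.T @ ((1.0 - out**2) * (out - target))  # chain rule through tanh
    c -= lr * grad
```

The learned-decoder trick matters precisely because a real synthesizer exposes no such gradient; the decoder supplies a differentiable surrogate for it.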
Perception and neural coding of harmonic fusion in ferrets
The cortical neural correlates for the perception of harmonic sounds have remained a puzzle despite intense study over several decades. This study approached the problem from the point of view of the spectral fusion evoked by such sounds. Experiment 1 tested whether ferrets automatically fuse harmonic complex tones. In baseline sessions, three ferrets were trained to detect a pure tone terminating a sequence of inharmonic complex tones. After the ferrets reached proficiency in the baseline task, a small fraction of the inharmonic complex tones were replaced with harmonic tones. Two out of three ferrets confused the harmonic complex tones with the pure tones and responded as if detecting the pure tone at twice the false-alarm rate, indicating that ferrets can automatically fuse the partials of a harmonic complex. Experiment 2 sought correlates of harmonic fusion in single units of ferret primary auditory cortex (AI), by contrasting responses to harmonic complex tones with those to inharmonic complex tones. The effects of spectrotemporal filtering were accounted for by using the measured spectrotemporal receptive field to predict responses and by seeking correlates of harmonic fusion in the predictability of the responses. Ten percent of units exhibited some correlates of harmonic fusion, which is consistent with previous findings that no special processing for harmonic stimuli occurs in AI
Decoupling Action Potential Bias from Cortical Local Field Potentials
Neurophysiologists have recently become interested in studying neuronal population activity through local field potential (LFP) recordings during experiments that also record the activity of single neurons. This experimental approach differs from early LFP studies because it uses high-impedance electrodes that can also isolate single-neuron activity. A possible complication for such studies is that the synaptic potentials and action potentials of the small subset of isolated neurons may contribute disproportionately to the LFP signal, biasing activity in the larger nearby neuronal population to appear synchronous and cotuned with these neurons. To address this problem, we used linear filtering techniques to remove features correlated with spike events from LFP recordings. This filtering procedure can be applied to well-isolated single units or multiunit activity. We illustrate the effects of this correction in simulation and on spike data recorded from primary auditory cortex. We find that local spiking activity can explain a significant portion of LFP power at most recording sites and demonstrate that removing the spike-correlated component can affect measurements of auditory tuning of the LFP.
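The spike-removal idea can be sketched as a simple regression. This toy assumes synthetic data (a unit-variance "true" LFP, a Bernoulli spike train, and a made-up exponential spike-locked transient) and removes the spike-correlated component by least squares on lagged copies of the spike train; the paper's actual filtering procedure may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: a "true" LFP plus a stereotyped spike-locked
# transient added at every spike time of one isolated unit.
n, n_lag = 20000, 20
lfp_true = rng.standard_normal(n)
spikes = (rng.random(n) < 0.02).astype(float)   # spike train, ~2% of bins
kernel_true = np.exp(-np.arange(n_lag) / 4.0)   # spike-locked contamination
lfp = lfp_true + np.convolve(spikes, kernel_true)[:n]

# Linear "despiking": regress the LFP on lagged copies of the spike train,
# then subtract the fitted spike-correlated component.
X = np.column_stack([np.roll(spikes, lag) for lag in range(n_lag)])
X[:n_lag] = 0.0                                  # discard wrap-around bins
kernel_est, *_ = np.linalg.lstsq(X, lfp, rcond=None)
lfp_clean = lfp - X @ kernel_est
```

Because the spike train is uncorrelated with the uncontaminated LFP in this toy, the fitted kernel converges to the injected transient and the residual LFP approaches the "true" signal.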
The Case of the Missing Pitch Templates: How Harmonic Templates Emerge in the Early Auditory System
Periodicity pitch is the most salient and important of all pitch percepts. Psychoacoustical models of this percept have long postulated the existence of internalized harmonic templates against which incoming resolved spectra can be compared, and pitch determined according to the best-matching templates (Goldstein). However, it has been a mystery where and how such harmonic templates can come about. Here we present a biologically plausible model for how such templates can form in the early stages of the auditory system. The model demonstrates that any broadband stimulus, such as noise or random click trains, suffices for generating the templates, and that there is no need for any delay lines, oscillators, or other neural temporal structures. The model consists of two key stages: cochlear filtering followed by coincidence detection. The cochlear stage provides responses analogous to those seen on the auditory nerve and cochlear nucleus. Specifically, it performs moderately sharp frequency analysis via a filter bank with tonotopically ordered center frequencies (CFs); the rectified and phase-locked filter responses are further enhanced temporally to resemble the synchronized responses of cells in the cochlear nucleus. The second stage is a matrix of coincidence detectors that compute the average pair-wise instantaneous correlation (or product) between responses from all CFs across the channels. Model simulations show that for any broadband stimulus, high coincidences occur between cochlear channels that are exactly harmonic distances apart. Accumulating coincidences over time results in the formation of harmonic templates for all fundamental frequencies in the phase-locking frequency range. The model explains the critical role played by three subtle but important factors in cochlear function: the nonlinear transformations following the filtering stage; the rapid phase shifts of the traveling wave near its resonance; and the spectral resolution of the cochlear filters.
Finally, we discuss the physiological correlates and location of such a process and its resulting templates.
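The two-stage architecture can be sketched as a pipeline. Everything below is a crude stand-in, not the paper's model: the "cochlear" filters are damped sinusoids with arbitrary parameters, rectification is plain half-wave, and the temporal-enhancement stage the abstract describes is omitted, so how strongly harmonic channel pairs stand out here will depend on those simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 16000                                      # sample rate (Hz), arbitrary
t = np.arange(int(0.5 * fs)) / fs               # 0.5 s of signal

# Broadband stimulus: a random click train.
stimulus = np.zeros(t.size)
stimulus[rng.integers(0, t.size, 200)] = 1.0

# Stage 1 -- "cochlear" filtering: each channel rings as a damped sinusoid,
# followed by half-wave rectification of the phase-locked output.
cfs = np.array([200, 300, 400, 500, 600, 800])  # center frequencies (Hz)
responses = []
for cf in cfs:
    n_ir = int(8 * fs / cf)                     # ~8 cycles of ringing
    tau = 3.0 / cf                              # decay constant
    t_ir = np.arange(n_ir) / fs
    ir = np.exp(-t_ir / tau) * np.sin(2 * np.pi * cf * t_ir)
    r = np.convolve(stimulus, ir)[:t.size]
    responses.append(np.maximum(r, 0.0))        # half-wave rectification
responses = np.array(responses)

# Stage 2 -- coincidence detection: average pairwise instantaneous product
# between channels, normalized to a correlation-like coincidence matrix.
norm = np.sqrt(np.mean(responses ** 2, axis=1))
coincidence = (responses @ responses.T) / t.size
coincidence /= np.outer(norm, norm)
```

Accumulating such matrices over many broadband stimuli is the template-formation step the abstract describes.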
Measurement of head-related transfer functions based on the empirical transfer function estimate
Proceedings of the 9th International Conference on Auditory Display (ICAD), Boston, MA, July 7-9, 2003.
An experimental procedure and signal-processing method to measure Head-Related Transfer Functions (HRTFs) are reviewed. The technique, based on Fourier-analysis system identification, has an advantage over the commonly used maximum-length-sequence and Golay methods when nonlinear distortions are present in the loudspeakers and their power-amplification circuits. The method has been used to produce a new public HRTF database. Compared to existing public-domain databases, these transfer functions have been measured at points spaced more densely and uniformly around the subject, which is an advantage for fitting and interpolation methods. The measured HRTFs (seven subjects to date) are available by request from the authors.
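The core of an empirical transfer function estimate is the ratio of output to input spectra. The sketch below is a noiseless toy, not the measurement procedure of the paper: it assumes a made-up short FIR filter as the "acoustic path" and a broadband noise excitation, and recovers the impulse response by spectral division (the reviewed method's advantage over MLS/Golay under loudspeaker nonlinearity is not modeled here).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an acoustic path: a short FIR impulse response.
h_true = np.array([0.0, 1.0, 0.5, -0.25, 0.1])

x = rng.standard_normal(4096)          # broadband excitation signal
y = np.convolve(x, h_true)             # "measured" response (full length)

# Empirical transfer function estimate: ratio of the output spectrum to the
# input spectrum, with both zero-padded to the full convolution length so
# the circular and linear convolutions coincide.
n = x.size + h_true.size - 1
H_est = np.fft.rfft(y, n) / np.fft.rfft(x, n)
h_est = np.fft.irfft(H_est, n)[:h_true.size]
```

With measurement noise, practical estimates divide averaged cross- and auto-spectra rather than raw single-record spectra, but the division above is the underlying identity.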
Auditory Short-Term Memory Behaves Like Visual Short-Term Memory
Are the information processing steps that support short-term sensory memory common to all the senses? Systematic, psychophysical comparison requires identical experimental paradigms and comparable stimuli, which can be challenging to obtain across modalities. Participants performed a recognition memory task with auditory and visual stimuli that were comparable in complexity and in their neural representations at early stages of cortical processing. The visual stimuli were static and moving Gaussian-windowed, oriented, sinusoidal gratings (Gabor patches); the auditory stimuli were broadband sounds whose frequency content varied sinusoidally over time (moving ripples). Parallel effects on recognition memory were seen for number of items to be remembered, retention interval, and serial position. Further, regardless of modality, predicting an item's recognizability requires taking account of (1) the probe's similarity to the remembered list items (summed similarity), and (2) the similarity between the items in memory (inter-item homogeneity). A model incorporating both these factors gives a good fit to recognition memory data for auditory as well as visual stimuli. In addition, we present the first demonstration of the orthogonality of summed similarity and inter-item homogeneity effects. These data imply that auditory and visual representations undergo very similar transformations while they are encoded and retrieved from memory
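The two factors the abstract identifies can be combined in a toy scoring function. The exponential similarity measure, the subtractive homogeneity term, and the weight w below are illustrative modeling choices, not the authors' fitted model.

```python
import numpy as np

def similarity(a, b, tau=1.0):
    """Exponential similarity in a feature space (a common modeling choice)."""
    return np.exp(-np.linalg.norm(np.asarray(a) - np.asarray(b)) / tau)

def recognition_strength(probe, memory, w=0.5):
    # (1) Summed similarity of the probe to every remembered item...
    summed = sum(similarity(probe, m) for m in memory)
    # (2) ...discounted by inter-item homogeneity: the mean pairwise
    # similarity among the remembered items themselves.
    pairs = [(i, j) for i in range(len(memory))
             for j in range(i + 1, len(memory))]
    homogeneity = np.mean([similarity(memory[i], memory[j])
                           for i, j in pairs])
    return summed - w * homogeneity

# A probe matching a stored item should outscore a distant novel probe.
memory = [np.array([0.0, 0.0]), np.array([3.0, 0.0]), np.array([0.0, 3.0])]
s_old = recognition_strength(memory[0], memory)
s_new = recognition_strength(np.array([10.0, 10.0]), memory)
```

Because summed similarity and homogeneity enter as separate terms, lists of tightly clustered items can be penalized even when the probe matches one of them, which is the kind of orthogonal effect the abstract reports.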