
    Stimulus-invariant processing and spectrotemporal reverse correlation in primary auditory cortex

    The spectrotemporal receptive field (STRF) provides a versatile and integrated spectral and temporal functional characterization of single cells in primary auditory cortex (AI). In this paper, we explore the origin of, and relationship between, different ways of measuring and analyzing an STRF. We demonstrate that STRFs measured using a spectrotemporally diverse array of broadband stimuli, such as dynamic ripples, spectrotemporally white noise, and temporally orthogonal ripple combinations (TORCs), are very similar, confirming earlier findings that the STRF is a robust linear descriptor of the cell. We also present a new deterministic analysis framework that employs the Fourier series to describe the spectrotemporal modulations contained in the stimuli and responses. Additional insights into the STRF measurements, including the nature and interpretation of measurement errors, are presented using the Fourier transform coupled to singular-value decomposition (SVD), together with variability analyses including the bootstrap. The results promote the utility of the STRF as a core functional descriptor of neurons in AI. (42 pages, 8 figures; to appear in Journal of Computational Neuroscience.)
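    To make the analysis pipeline above concrete, here is a minimal, self-contained sketch (not the authors' code) of STRF estimation by reverse correlation, an SVD-based separability check, and a bootstrap over trials. The spectrogram, spike rasters, bin counts, and function names are all illustrative assumptions.

```python
# Illustrative sketch: reverse-correlation STRF, SVD separability, bootstrap.
# All data and dimensions are made-up stand-ins, not the paper's recordings.
import numpy as np

rng = np.random.default_rng(0)
n_freq, n_time, n_lags, n_trials = 32, 2000, 40, 20

spectrogram = rng.standard_normal((n_freq, n_time))   # broadband stimulus (e.g. TORC-like)
spikes = rng.random((n_trials, n_time)) < 0.05        # fake spike rasters, one trial per row

def strf_reverse_correlation(stim, spike_train, n_lags):
    """Spike-triggered average of the stimulus: STRF[frequency, lag]."""
    strf = np.zeros((stim.shape[0], n_lags))
    spike_times = np.flatnonzero(spike_train)
    spike_times = spike_times[spike_times >= n_lags]
    for t in spike_times:
        strf += stim[:, t - n_lags:t][:, ::-1]         # lag 0 closest to the spike
    return strf / max(len(spike_times), 1)

strf = strf_reverse_correlation(spectrogram, spikes.any(axis=0), n_lags)

# Separability check: fraction of STRF power captured by the first singular value
u, s, vt = np.linalg.svd(strf, full_matrices=False)
separability = s[0] ** 2 / np.sum(s ** 2)

# Bootstrap over trials to gauge measurement variability of the STRF
boot = []
for _ in range(100):
    pick = rng.integers(0, n_trials, n_trials)
    boot.append(strf_reverse_correlation(spectrogram, spikes[pick].any(axis=0), n_lags))
strf_sd = np.std(np.stack(boot), axis=0)

print(f"separability: {separability:.2f}, mean bootstrap SD: {strf_sd.mean():.3f}")
```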

    Perception and neural coding of harmonic fusion in ferrets

    The cortical neural correlates for the perception of harmonic sounds have remained a puzzle despite intense study over several decades. This study approached the problem from the point of view of the spectral fusion evoked by such sounds. Experiment 1 tested whether ferrets automatically fuse harmonic complex tones. In baseline sessions, three ferrets were trained to detect a pure tone terminating a sequence of inharmonic complex tones. After the ferrets reached proficiency in the baseline task, a small fraction of the inharmonic complex tones were replaced with harmonic tones. Two out of three ferrets confused the harmonic complex tones with the pure tones and responded as if detecting the pure tone at twice the false-alarm rate, indicating that ferrets can automatically fuse the partials of a harmonic complex. Experiment 2 sought correlates of harmonic fusion in single units of ferret primary auditory cortex (AI), by contrasting responses to harmonic complex tones with those to inharmonic complex tones. The effects of spectrotemporal filtering were accounted for by using the measured spectrotemporal receptive field to predict responses and by seeking correlates of harmonic fusion in the predictability of the responses. Ten percent of units exhibited some correlates of harmonic fusion, which is consistent with previous findings that no special processing for harmonic stimuli occurs in AI.
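    The predictability analysis in Experiment 2 rests on linearly predicting a unit's response by convolving its measured STRF with the stimulus spectrogram and comparing prediction quality for harmonic versus inharmonic tones. A hedged sketch of that step follows, with simulated stand-ins for the recorded data and illustrative array shapes.

```python
# Sketch: linear STRF prediction and a predictability contrast between
# harmonic and inharmonic complex tones. All arrays are simulated stand-ins.
import numpy as np

rng = np.random.default_rng(1)
n_freq, n_lags, n_time = 32, 40, 500

strf = 0.1 * rng.standard_normal((n_freq, n_lags))    # measured STRF (assumed given)

def predict_response(strf, spectrogram):
    """Linear prediction: convolve the STRF with the stimulus spectrogram."""
    n_freq, n_lags = strf.shape
    n_time = spectrogram.shape[1]
    pred = np.zeros(n_time)
    for lag in range(n_lags):
        pred[lag:] += strf[:, lag] @ spectrogram[:, :n_time - lag]
    return pred

def predictability(strf, spectrogram, response):
    """Correlation between the predicted and observed response."""
    return np.corrcoef(predict_response(strf, spectrogram), response)[0, 1]

# Stand-ins for harmonic / inharmonic stimulus spectrograms and measured PSTHs
harm_spec, inharm_spec = rng.random((n_freq, n_time)), rng.random((n_freq, n_time))
harm_resp = predict_response(strf, harm_spec) + rng.standard_normal(n_time)
inharm_resp = predict_response(strf, inharm_spec) + 0.3 * rng.standard_normal(n_time)

print(f"harmonic r = {predictability(strf, harm_spec, harm_resp):.2f}, "
      f"inharmonic r = {predictability(strf, inharm_spec, inharm_resp):.2f}")
```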

    Decoupling Action Potential Bias from Cortical Local Field Potentials

    Neurophysiologists have recently become interested in studying neuronal population activity through local field potential (LFP) recordings during experiments that also record the activity of single neurons. This experimental approach differs from early LFP studies because it uses high-impedance electrodes that can also isolate single neuron activity. A possible complication for such studies is that the synaptic potentials and action potentials of the small subset of isolated neurons may contribute disproportionately to the LFP signal, biasing activity in the larger nearby neuronal population to appear synchronous and cotuned with these neurons. To address this problem, we used linear filtering techniques to remove features correlated with spike events from LFP recordings. This filtering procedure can be applied for well-isolated single units or multiunit activity. We illustrate the effects of this correction in simulation and on spike data recorded from primary auditory cortex. We find that local spiking activity can explain a significant portion of LFP power at most recording sites and demonstrate that removing the spike-correlated component can affect measurements of auditory tuning of the LFP.
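    A rough sketch of the despiking idea described above, assuming a simple least-squares formulation: fit a linear kernel from the spike train to the LFP and subtract the spike-predicted component. The simulated signals, kernel length, and sampling parameters are illustrative, not the paper's.

```python
# Sketch: remove the spike-correlated component from an LFP by fitting a
# linear spike->LFP kernel with least squares and subtracting its prediction.
# Signals are simulated; kernel length and rates are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_time, n_lags = 20000, 50

spikes = (rng.random(n_time) < 0.02).astype(float)
true_kernel = np.exp(-np.arange(n_lags) / 10.0)          # fake spike bleed-through
lfp = np.convolve(spikes, true_kernel, mode="full")[:n_time] + rng.standard_normal(n_time)

def build_design(spikes, n_lags):
    """Columns are lagged copies of the spike train (Toeplitz design matrix)."""
    X = np.zeros((len(spikes), n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = spikes[:len(spikes) - lag]
    return X

X = build_design(spikes, n_lags)
kernel, *_ = np.linalg.lstsq(X, lfp, rcond=None)          # spike->LFP filter
lfp_clean = lfp - X @ kernel                              # spike-locked component removed

removed = 1 - np.var(lfp_clean) / np.var(lfp)
print(f"fraction of LFP variance explained by local spiking: {removed:.2%}")
```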

    The Case of the Missing Pitch Templates: How Harmonic Templates Emerge in the Early Auditory System

    Periodicity pitch is the most salient and important of all pitch percepts. Psychoacoustical models of this percept have long postulated the existence of internalized harmonic templates against which incoming resolved spectra can be compared, and pitch determined according to the best matching templates (Goldstein). However, it has been a mystery where and how such harmonic templates can come about. Here we present a biologically plausible model for how such templates can form in the early stages of the auditory system. The model demonstrates that any broadband stimulus, such as noise or random click trains, suffices for generating the templates, and that there is no need for any delay lines, oscillators, or other neural temporal structures. The model consists of two key stages: cochlear filtering followed by coincidence detection. The cochlear stage provides responses analogous to those seen on the auditory nerve and cochlear nucleus. Specifically, it performs moderately sharp frequency analysis via a filter bank with tonotopically ordered center frequencies (CFs); the rectified and phase-locked filter responses are further enhanced temporally to resemble the synchronized responses of cells in the cochlear nucleus. The second stage is a matrix of coincidence detectors that compute the average pairwise instantaneous correlation (or product) between responses from all CFs across the channels. Model simulations show that for any broadband stimulus, high coincidences occur between cochlear channels that are exactly harmonic distances apart. Accumulating coincidences over time results in the formation of harmonic templates for all fundamental frequencies in the phase-locking frequency range. The model explains the critical role played by three subtle but important factors in cochlear function: the nonlinear transformations following the filtering stage; the rapid phase shifts of the traveling wave near its resonance; and the spectral resolution of the cochlear filters. Finally, we discuss the physiological correlates and location of such a process and its resulting templates.
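    A toy rendering of the two-stage model, assuming a crude Butterworth filterbank in place of a realistic cochlear front end: bandpass filtering with half-wave rectification, followed by coincidence detectors that time-average pairwise products across channels. CF spacing, filter bandwidths, and the click-train stimulus are rough assumptions, not the paper's cochlear model.

```python
# Toy two-stage model: "cochlear" bandpass filterbank + half-wave rectification,
# then pairwise coincidence (time-averaged products) across CF channels.
import numpy as np
from scipy.signal import butter, lfilter

fs, dur = 16000, 1.0
rng = np.random.default_rng(3)
stimulus = (rng.random(int(fs * dur)) < 0.01).astype(float)   # random click train

cfs = 220.0 * 2.0 ** (np.arange(40) / 8.0)     # tonotopic CFs, 1/8-octave spacing
channels = []
for cf in cfs:
    b, a = butter(2, [0.9 * cf, 1.1 * cf], btype="bandpass", fs=fs)
    channels.append(np.maximum(lfilter(b, a, stimulus), 0.0))  # rectified response
channels = np.stack(channels)                                  # (n_channels, n_samples)

# Coincidence detectors: average instantaneous product between every CF pair
coincidence = channels @ channels.T / channels.shape[1]

# The model predicts that channels at harmonic CF ratios (e.g. 2:1) accumulate
# higher coincidence than inharmonically related channels (e.g. sqrt(2):1).
i, j_harm, j_inharm = 10, 18, 14               # 2:1 and sqrt(2):1 CF ratios here
print(f"{cfs[i]:.0f} vs {cfs[j_harm]:.0f} Hz (harmonic): {coincidence[i, j_harm]:.3e}")
print(f"{cfs[i]:.0f} vs {cfs[j_inharm]:.0f} Hz (inharmonic): {coincidence[i, j_inharm]:.3e}")
```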

    Auditory Short-Term Memory Behaves Like Visual Short-Term Memory

    Are the information processing steps that support short-term sensory memory common to all the senses? Systematic, psychophysical comparison requires identical experimental paradigms and comparable stimuli, which can be challenging to obtain across modalities. Participants performed a recognition memory task with auditory and visual stimuli that were comparable in complexity and in their neural representations at early stages of cortical processing. The visual stimuli were static and moving Gaussian-windowed, oriented, sinusoidal gratings (Gabor patches); the auditory stimuli were broadband sounds whose frequency content varied sinusoidally over time (moving ripples). Parallel effects on recognition memory were seen for number of items to be remembered, retention interval, and serial position. Further, regardless of modality, predicting an item's recognizability requires taking account of (1) the probe's similarity to the remembered list items (summed similarity), and (2) the similarity between the items in memory (inter-item homogeneity). A model incorporating both these factors gives a good fit to recognition memory data for auditory as well as visual stimuli. In addition, we present the first demonstration of the orthogonality of summed similarity and inter-item homogeneity effects. These data imply that auditory and visual representations undergo very similar transformations while they are encoded and retrieved from memory.
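    A small illustrative sketch of a recognition model that combines summed similarity with an inter-item homogeneity term, in the spirit described above; the exponential similarity kernel, feature values, and weighting are assumptions, not the fitted model from the study.

```python
# Sketch of a summed-similarity recognition model with an inter-item
# homogeneity term. Feature vectors, kernel, and weights are illustrative.
import numpy as np

def similarity(x, y, tau=1.0):
    """Exponential similarity that decays with distance between feature vectors."""
    return np.exp(-np.linalg.norm(np.asarray(x) - np.asarray(y)) / tau)

def recognition_strength(probe, study_list, w_homog=0.5, tau=1.0):
    """Summed similarity of probe to the studied list, adjusted by how similar
    the studied items are to one another (inter-item homogeneity)."""
    summed = sum(similarity(probe, item, tau) for item in study_list)
    pairs = [(i, j) for i in range(len(study_list)) for j in range(i + 1, len(study_list))]
    homogeneity = np.mean([similarity(study_list[i], study_list[j], tau) for i, j in pairs])
    return summed - w_homog * homogeneity

# Example: items described by two features (e.g. ripple velocity and density,
# or Gabor orientation and spatial frequency) -- purely illustrative numbers.
study_list = [np.array([4.0, 8.0]), np.array([8.0, 4.0]), np.array([12.0, 2.0])]
old_probe, new_probe = np.array([4.0, 8.0]), np.array([20.0, 1.0])

print("old probe strength:", round(recognition_strength(old_probe, study_list), 3))
print("new probe strength:", round(recognition_strength(new_probe, study_list), 3))
```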

    Computational Neural Modeling of Auditory Cortical Receptive Fields

    Previous studies have shown that the auditory cortex can enhance the perception of behaviorally important sounds in the presence of background noise, but the mechanisms by which it does this are not yet elucidated. Rapid plasticity of spectrotemporal receptive fields (STRFs) in primary auditory cortical (A1) neurons is observed during behavioral tasks that require discrimination of particular sounds. This rapid task-related change is believed to be one of the processing strategies utilized by the auditory cortex to selectively attend to one stream of sound in the presence of mixed sounds. However, the mechanism by which the brain evokes this rapid plasticity in the auditory cortex remains unclear. This paper uses a neural network model to investigate how synaptic transmission within the cortical neuron network can change the receptive fields of individual neurons. A sound signal was used as input to a model of the cochlea and auditory periphery, which activated or inhibited integrate-and-fire neuron models to represent networks in the primary auditory cortex. Each neuron in the network was tuned to a different frequency. All neurons were interconnected with excitatory or inhibitory synapses of varying strengths. Action potentials in one of the model neurons were used to calculate the receptive field using reverse correlation. The results were directly compared to previously recorded electrophysiological data from ferrets performing behavioral tasks that require discrimination of particular sounds. The neural network model could reproduce complex STRFs observed experimentally through optimizing the synaptic weights in the model. The model predicts that altering synaptic drive between cortical neurons and/or bottom-up synaptic drive from the cochlear model to the cortical neurons can account for rapid task-related changes observed experimentally in A1 neurons. By identifying changes in the synaptic drive during behavioral tasks, the model provides insights into the neural mechanisms utilized by the auditory cortex to enhance the perception of behaviorally salient sounds.
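    A minimal sketch of the kind of network described above, assuming a leaky integrate-and-fire model with hand-picked parameters; the random bottom-up drive stands in for the cochlear and peripheral model, and the recurrent weights are illustrative, not the optimized values from the paper.

```python
# Sketch: tonotopic drive into a small network of leaky integrate-and-fire
# neurons coupled by excitatory/inhibitory weights. Parameters are assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_neurons, n_steps, dt = 16, 5000, 1e-3           # 16 frequency channels, 1 ms steps
tau_m, v_thresh, v_reset = 20e-3, 1.0, 0.0        # membrane time constant, threshold

drive = 2.5 * rng.random((n_neurons, n_steps))    # stand-in for cochlear model output

idx = np.arange(n_neurons)                        # local excitation, broader inhibition
dist = np.abs(idx[:, None] - idx[None, :])
W = 0.15 * np.exp(-dist / 1.0) - 0.05 * np.exp(-dist / 4.0)
np.fill_diagonal(W, 0.0)

v = np.zeros(n_neurons)
spikes = np.zeros((n_neurons, n_steps), dtype=bool)
for t in range(n_steps):
    recurrent = W @ spikes[:, t - 1].astype(float) if t > 0 else np.zeros(n_neurons)
    v += dt / tau_m * (-v + drive[:, t]) + recurrent
    fired = v >= v_thresh
    spikes[:, t] = fired
    v[fired] = v_reset                            # reset after an action potential

print("mean firing rate (Hz):", round(float(spikes.mean() / dt), 1))
# Spikes from one model neuron would then be reverse-correlated with the
# stimulus spectrogram (as in the STRF sketch above) to obtain its STRF.
```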