
    Homology and Specificity of Natural Sound-Encoding in Human and Monkey Auditory Cortex

    Understanding homologies and differences in auditory cortical processing in human and nonhuman primates is an essential step in elucidating the neurobiology of speech and language. Using fMRI responses to natural sounds, we investigated the representation of multiple acoustic features in the auditory cortex of awake macaques and humans. Comparative analyses revealed homologous large-scale topographies not only for frequency but also for temporal and spectral modulations. In both species, posterior regions preferentially encoded relatively fast temporal and coarse spectral information, whereas anterior regions encoded slow temporal and fine spectral modulations. In contrast, we observed a striking interspecies difference in cortical sensitivity to temporal modulations: whereas decoding from macaque auditory cortex was most accurate at fast rates (> 30 Hz), humans were most sensitive to ~3 Hz, a rate relevant to speech analysis. These findings suggest that the characteristic tuning of human auditory cortex to slow temporal modulations is unique and may have emerged as a critical step in the evolution of speech and language.
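
    As an illustration of the kind of rate-specific decoding described above (a minimal sketch only, not the authors' pipeline), the snippet below trains a cross-validated classifier to separate fMRI voxel patterns evoked at neighbouring temporal modulation rates. The voxel patterns and rate labels are simulated placeholders.

```python
# Minimal sketch: decoding temporal modulation rate from fMRI response patterns.
# `voxel_patterns` and `mod_rates` are simulated stand-ins, not real data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 200
mod_rates = np.repeat([1, 3, 10, 30], n_trials // 4)    # Hz, candidate modulation rates
voxel_patterns = rng.normal(size=(n_trials, n_voxels))  # placeholder multivoxel patterns

# For each pair of neighbouring rates, ask how well voxel patterns separate them;
# a peak at slow rates (~3 Hz) would mirror the human tuning described above.
for lo, hi in [(1, 3), (3, 10), (10, 30)]:
    mask = np.isin(mod_rates, [lo, hi])
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          voxel_patterns[mask], mod_rates[mask], cv=5).mean()
    print(f"{lo} Hz vs {hi} Hz decoding accuracy: {acc:.2f}")
```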

    Doctor of Philosophy

    The primate auditory system is responsible for analyzing complex patterns of pressure differences and synthesizing this information into a behaviorally relevant representation of the external world. How the auditory cortex accomplishes this complex task is unknown. This thesis examines the neural mechanisms underlying auditory perception in the primate auditory cortex, focusing on the neural representation of communication sounds. It comprises three studies of auditory cortical processing in the macaque and human. The first examines coding in primary and tertiary auditory cortex as it relates to the possibility of developing a stimulating auditory neural prosthesis. The second applies an information-theoretic approach to understanding information transfer between primary and tertiary auditory cortex. The final study examines visual influences on human tertiary auditory cortical processing during illusory audiovisual speech perception. Together, these studies provide insight into the cortical physiology underlying sound perception and into the design of a stimulating cortical neural prosthesis for the deaf.
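
    The information-theoretic comparison mentioned in the second study could start from a quantity such as the mutual information between spike counts recorded in the two areas. The following is a hypothetical sketch with simulated spike counts, not the thesis code.

```python
# Minimal sketch: plug-in mutual-information estimate between spike counts in a
# "primary" and a "tertiary" auditory site. All data here are simulated placeholders.
import numpy as np

rng = np.random.default_rng(1)
primary = rng.poisson(5, size=2000)            # spike counts per trial, area 1
tertiary = rng.poisson(primary * 0.5 + 1)      # weakly dependent counts, area 2

def mutual_information(x, y, bins=10):
    """Plug-in MI (bits) from a 2-D histogram of the two count variables."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

print(f"MI(primary; tertiary) ~ {mutual_information(primary, tertiary):.3f} bits")
```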

    Neural Representation of Vocalizations in Noise in the Primary Auditory Cortex of Marmoset Monkeys

    Robust auditory perception plays a pivotal role in processing behaviorally relevant sounds, particularly when there are auditory distractions from the environment. The neuronal coding enabling this ability, however, is still not well understood. In this study we recorded single-unit activity from the primary auditory cortex of alert common marmoset monkeys (Callithrix jacchus) while delivering conspecific vocalizations degraded by two different background noises: broadband white noise (WGN) and vocalization babble (Babble). Noise effects on the single-unit neural representation of target vocalizations were quantified by measuring the similarity of responses to degraded vocalizations to those elicited by the natural vocalizations, as a function of signal-to-noise ratio (SNR). Four consistent response classes (robust, balanced, insensitive, and brittle) were found under both noise conditions, with an average of about two-thirds of the neurons changing their response class when encountering different noises. These results indicate that the distortion induced in single-unit responses by one particular masking background is not necessarily predictable from that induced by another, which in turn suggests that a unique group of noise-invariant neurons spanning different background conditions is unlikely to exist in the primary auditory cortex. In addition, for a relatively large fraction of neurons, strong synchronized responses could be elicited by white noise alone, countering the conventional wisdom that white noise elicits relatively few temporally aligned spikes in higher auditory regions. The variable single-unit responses yet consistent population responses imply that the primate primary auditory cortex performs scene analysis predominantly at the population level. Next, by pooling all single units together, a pseudo-population analysis was implemented to gain more insight into how individual neurons work together to encode and discriminate vocalizations at various intensities and SNR levels. Over time, population response variability was found to covary negatively with the stimulus-driven firing rate for vocalizations at multiple intensities; a much weaker trend was observed for vocalizations in noise. By applying dimensionality reduction techniques to the pooled single-neuron responses, we were able to visualize the dynamics of neural ensemble responses to vocalizations in noise as trajectories in a low-dimensional space. The resulting trajectories showed a clear separation between neural responses to vocalizations and to WGN, whereas trajectories of neural responses to vocalizations and to Babble lay much closer to each other. Discrimination analyses using neural response classifiers revealed that a finer optimal temporal resolution and a longer time scale of temporal dynamics were needed for vocalizations in noise than for vocalizations at multiple intensities. Last, within the whole population, a subpopulation of neurons yielded optimal discrimination performance. Together, across different background noises, the results in this dissertation provide evidence for heterogeneous responses at the individual-neuron level and for consistent response properties at the population level.
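
    One way to picture the response-similarity measure used above (a rough sketch, not the dissertation's analysis) is to correlate a unit's PSTH for a vocalization in noise with its PSTH for the clean vocalization at each SNR. The PSTHs below are simulated placeholders.

```python
# Minimal sketch: single-unit response similarity as a function of SNR,
# computed as the correlation between noisy- and clean-condition PSTHs.
import numpy as np

rng = np.random.default_rng(2)
n_bins = 200
psth_clean = rng.gamma(2.0, 1.0, size=n_bins)      # clean-vocalization PSTH (placeholder)
snrs_db = [20, 10, 0, -10]
# Simulate increasing corruption of the response as SNR decreases.
psth_noisy = {snr: psth_clean + rng.normal(0, 0.1 * (30 - snr), n_bins)
              for snr in snrs_db}

for snr in snrs_db:
    r = np.corrcoef(psth_clean, psth_noisy[snr])[0, 1]
    print(f"SNR {snr:>3} dB: response similarity r = {r:.2f}")
```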

    Multivoxel codes for representing and integrating acoustic features in human cortex

    Using fMRI and multivariate pattern analysis, we determined whether acoustic features are represented by independent or integrated neural codes in human cortex. Male and female listeners heard band-pass noise varying simultaneously in spectral (frequency) and temporal (amplitude-modulation [AM] rate) features. In the superior temporal plane, changes in multivoxel activity due to frequency were largely invariant with respect to AM rate (and vice versa), consistent with an independent representation. In contrast, in posterior parietal cortex, the neural representation was exclusively integrated and tuned to specific conjunctions of frequency and AM features. Direct between-region comparisons showed that whereas independent coding of frequency and AM weakened with increasing levels of the hierarchy, integrated coding strengthened at the transition between non-core and parietal cortex. Our findings support the notion that primary auditory cortex can represent component acoustic features in an independent fashion and suggest a role for parietal cortex in feature integration and the structuring of acoustic input. Significance Statement: A major goal for neuroscience is discovering the sensory features to which the brain is tuned and how those features are integrated into cohesive perception. We used whole-brain human fMRI and a statistical modeling approach to quantify the extent to which sound features are represented separately or in an integrated fashion in cortical activity patterns. We show that frequency and AM rate, two acoustic features fundamental to characterizing biologically important sounds such as speech, are represented separately in primary auditory cortex but in an integrated fashion in parietal cortex. These findings suggest that representations in primary auditory cortex can be simpler than previously thought and also implicate a role for parietal cortex in integrating features for coherent perception.
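
    The independence question can be illustrated with a cross-decoding sketch: train a frequency classifier on patterns recorded at one AM rate and test it at another, so that high transfer accuracy indicates an AM-invariant (independent) frequency code. This is only a schematic stand-in for the authors' multivariate analysis, and all data below are simulated.

```python
# Minimal sketch: cross-generalization test for an AM-rate-invariant frequency code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_per_cell, n_voxels = 40, 150
freqs, rates = [0, 1], [0, 1]            # two frequency bands, two AM rates

def simulate(freq, rate):
    """Toy multivoxel patterns; the AM-rate argument intentionally has no effect,
    modelling a frequency code that is independent of AM rate."""
    return rng.normal(loc=freq * 0.5, size=(n_per_cell, n_voxels))

train_X = np.vstack([simulate(f, rates[0]) for f in freqs])   # patterns at AM rate 0
test_X  = np.vstack([simulate(f, rates[1]) for f in freqs])   # patterns at AM rate 1
labels  = np.repeat(freqs, n_per_cell)

clf = LogisticRegression(max_iter=1000).fit(train_X, labels)
print(f"cross-AM-rate frequency decoding accuracy: {clf.score(test_X, labels):.2f}")
```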

    Decoding stimulus identity from multi-unit activity and local field potentials along the ventral auditory stream in the awake primate: implications for cortical neural prostheses

    Objective. Hierarchical processing of auditory sensory information is believed to occur in two streams: a ventral stream responsible for processing stimulus identity and a dorsal stream responsible for processing the spatial attributes of a stimulus. The objective of the current study is to examine neural coding in the ventral processing stream in the context of the feasibility of an auditory cortical neural prosthesis. Approach. We examined selectivity for species-specific primate vocalizations in the ventral auditory processing stream by applying a statistical classifier to neural data recorded from microelectrode arrays. Multi-unit activity (MUA) and local field potential (LFP) data recorded simultaneously from primary auditory cortex (AI) and rostral parabelt (PBr) were decoded on a trial-by-trial basis. Main results. While decode performance in AI was well above chance, mean performance in PBr did not deviate more than 15% from chance level. Mean performance levels were similar for MUA and LFP decodes. Increasing the spectral and temporal resolution of the decoder improved performance, and inter-electrode spacing could be as large as 1.14 mm without degrading decode performance. Significance. These results serve as preliminary guidance for a human auditory cortical neural prosthesis, informing interface implementation, microstimulation patterns, and anatomical placement.
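
    As a schematic illustration (not the study's decoder), the sketch below classifies stimulus identity from binned multi-unit spike counts on a trial-by-trial basis and varies the temporal bin width, the kind of resolution manipulation referred to above. The spike-count tensor is simulated.

```python
# Minimal sketch: trial-by-trial decoding of stimulus identity from binned MUA,
# comparing decode accuracy across temporal bin widths. All data are simulated.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_stimuli, n_trials, n_channels, n_ms = 8, 20, 16, 500
labels = np.repeat(np.arange(n_stimuli), n_trials)
# Stimulus-specific firing-rate profile per channel, then Poisson spike counts per 1-ms bin.
rates = rng.uniform(0.005, 0.05, size=(n_stimuli, n_channels, n_ms))
counts = rng.poisson(rates[labels])

for bin_ms in (10, 50, 250):
    # Re-bin 1-ms counts into coarser windows, then flatten channels x bins into features.
    binned = counts.reshape(len(labels), n_channels, n_ms // bin_ms, bin_ms).sum(-1)
    X = binned.reshape(len(labels), -1)
    acc = cross_val_score(SVC(kernel="linear"), X, labels, cv=5).mean()
    print(f"{bin_ms:>3} ms bins: decode accuracy = {acc:.2f} (chance = {1 / n_stimuli:.2f})")
```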