
    Encoding of Intention and Spatial Location in the Posterior Parietal Cortex

    The posterior parietal cortex is functionally situated between sensory cortex and motor cortex. The responses of cells in this area are difficult to classify as strictly sensory or motor, since many have both sensory- and movement-related activities, as well as activities related to higher cognitive functions such as attention and intention. In this review we will provide evidence that the posterior parietal cortex is an interface between sensory and motor structures and performs various functions important for sensory-motor integration. The review will focus on two specific sensory-motor tasks: the formation of motor plans and the abstract representation of space. Cells in the lateral intraparietal area, a subdivision of the parietal cortex, have activity related to eye movements the animal intends to make. This finding represents the lowest stage in the sensory-motor cortical pathway in which activity related to intention has been found and may represent the cortical stage in which sensory signals go "over the hump" to become intentions and plans to make movements. The second part of the review will discuss the representation of space in the posterior parietal cortex. Encoding spatial locations is an essential step in sensory-motor transformations. Since movements are made to locations in space, these locations should be coded invariant of eye and head position or the sensory modality signaling the target for a movement. Data will be reviewed demonstrating that there exists in the posterior parietal cortex an abstract representation of space that is constructed from the integration of visual, auditory, vestibular, eye position, and proprioceptive head position signals. This representation is in the form of a population code and the above signals are not combined in a haphazard fashion. Rather, they are brought together using a specific operation to form "planar gain fields" that are the common foundation of the population code for the neural construct of space.
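As a rough illustration of the planar gain-field idea described above (a sketch under assumed parameters, not code from the review itself), the model below gives a parietal neuron a retinotopic Gaussian tuning curve whose amplitude is scaled by a planar function of eye position; a population of such units forms a distributed code from which head-centered target location can in principle be read out. All names and parameter values are hypothetical.

```python
import numpy as np

def gain_field_response(target_retinal, eye_position,
                        pref_retinal=0.0, sigma=10.0,
                        slope=0.02, offset=1.0):
    """Planar gain-field model of a single parietal neuron.

    A retinotopic Gaussian tuning curve for target position is scaled
    (not shifted) by a planar, rectified function of eye position, so
    eye position modulates response gain without moving the receptive field.
    """
    tuning = np.exp(-(target_retinal - pref_retinal) ** 2 / (2 * sigma ** 2))
    gain = np.maximum(offset + slope * eye_position, 0.0)
    return tuning * gain

# Population with scattered preferred retinal locations and gain-field slopes.
rng = np.random.default_rng(0)
prefs = rng.uniform(-40, 40, size=200)        # preferred retinal positions (deg)
slopes = rng.uniform(-0.03, 0.03, size=200)   # planar eye-position sensitivities

def population_response(target_head_centered, eye_position):
    # Retinal target position = head-centered position minus eye position.
    target_retinal = target_head_centered - eye_position
    return np.array([gain_field_response(target_retinal, eye_position,
                                         pref_retinal=p, slope=s)
                     for p, s in zip(prefs, slopes)])

# The same head-centered target viewed with two different eye positions yields
# different retinal inputs but a population pattern from which the head-centered
# location can, in principle, be decoded.
r1 = population_response(10.0, eye_position=-15.0)
r2 = population_response(10.0, eye_position=+15.0)
print(r1.shape, r2.shape)
```

The point of the toy model is the defining property of a gain field: the eye-position signal changes response amplitude multiplicatively while leaving the retinotopic tuning in place, which is what allows a fixed population read-out to recover space across changes in gaze.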

    The auditory cortex of the bat Phyllostomus discolor: Localization and organization of basic response properties

    Background: The mammalian auditory cortex can be subdivided into various fields characterized by neurophysiological and neuroarchitectural properties and by connections with different nuclei of the thalamus. Besides the primary auditory cortex, echolocating bats have cortical fields for the processing of temporal and spectral features of the echolocation pulses. This paper reports on the location, neuroarchitecture and basic functional organization of the auditory cortex of the microchiropteran bat Phyllostomus discolor (family: Phyllostomidae).

    Results: The auditory cortical area of P. discolor is located at parieto-temporal portions of the neocortex. It covers a rostro-caudal range of about 4800 μm and a medio-lateral distance of about 7000 μm on the flattened cortical surface. The auditory cortices of ten adult P. discolor were electrophysiologically mapped in detail. Responses of 849 units (single neurons and neuronal clusters of up to three neurons) to pure tone stimulation were recorded extracellularly. Cortical units were characterized and classified depending on their response properties such as best frequency, auditory threshold, first spike latency, response duration, width and shape of the frequency response area, and binaural interactions. Based on neurophysiological and neuroanatomical criteria, the auditory cortex of P. discolor could be subdivided into anterior and posterior ventral fields and anterior and posterior dorsal fields. The representation of response properties within the different auditory cortical fields was analyzed in detail. The two ventral fields were distinguished by their tonotopic organization with opposing frequency gradients. The dorsal cortical fields were not tonotopically organized but contained neurons that were responsive to high frequencies only.

    Conclusion: The auditory cortex of P. discolor resembles the auditory cortex of other phyllostomid bats in size and basic functional organization. The tonotopically organized posterior ventral field might represent the primary auditory cortex, and the tonotopically organized anterior ventral field seems to be similar to the anterior auditory field of other mammals. As most energy of the echolocation pulse of P. discolor is contained in the high-frequency range, the non-tonotopically organized high-frequency dorsal region seems to be particularly important for echolocation.
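The sketch below (placeholder data, not the authors' analysis pipeline) illustrates how unit response properties of the kind listed above, such as best frequency and threshold, can be extracted from a frequency response area, and how a tonotopic gradient might be quantified as the slope of log best frequency along a cortical axis. All array contents and helper names are hypothetical.

```python
import numpy as np

# Hypothetical frequency response area for one unit:
# spike counts for each (frequency, level) combination of pure tones.
freqs_khz = np.logspace(np.log10(5), np.log10(100), 30)   # tone frequencies (kHz)
levels_db = np.arange(0, 90, 10)                           # tone levels (dB SPL)
rng = np.random.default_rng(1)
fra = rng.poisson(2.0, size=(len(freqs_khz), len(levels_db)))  # placeholder counts

def best_frequency(fra, freqs_khz):
    """Frequency driving the largest response, summed over levels."""
    return freqs_khz[np.argmax(fra.sum(axis=1))]

def threshold_db(fra, levels_db, criterion=5):
    """Lowest level at which any frequency exceeds a response criterion."""
    above = (fra >= criterion).any(axis=0)
    return levels_db[np.argmax(above)] if above.any() else None

def tonotopic_gradient(positions_um, best_freqs_khz):
    """Slope of log2 best frequency along a cortical axis (octaves per mm)."""
    slope, _ = np.polyfit(positions_um / 1000.0, np.log2(best_freqs_khz), 1)
    return slope

# Hypothetical rostro-caudal map spanning ~4800 μm with a smooth 2-octave gradient.
positions = np.linspace(0, 4800, 20)
bfs = 10 * 2 ** (positions / 2400)
print(best_frequency(fra, freqs_khz),
      threshold_db(fra, levels_db),
      tonotopic_gradient(positions, bfs))
```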

    Contextual modulation of primary visual cortex by auditory signals

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195–201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256–1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue ‘Auditory and visual scene analysis’.
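A minimal sketch of the kind of multivariate "read-out" analysis mentioned above, assuming trial-wise V1 activation patterns and sound-category labels are available as arrays; the data here are random placeholders rather than the authors' dataset, and the classifier choice is only one reasonable option.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data: trials x V1 voxels, with a sound-category label per trial.
rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 400
X = rng.normal(size=(n_trials, n_voxels))   # V1 activation patterns
y = rng.integers(0, 3, size=n_trials)       # which natural sound was heard (3 classes)

# Linear classifier with cross-validation: accuracy reliably above chance
# would indicate that V1 patterns carry information about the auditory stimulus.
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print("decoding accuracy: %.2f (chance = 0.33)" % scores.mean())
```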

    The sound of concepts: The link between auditory and conceptual brain systems

    Concepts in long-term memory are important building blocks of human cognition and are the basis for object recognition, language and thought. While it is well accepted that concepts are comprised of features related to sensory object attributes, it is still unclear how these features are represented in the brain. Of central interest is whether concepts are essentially grounded in perception. This would imply a common neuroanatomical substrate for perceptual and conceptual processing. Here we show using functional magnetic resonance imaging and recordings of event-related potentials that acoustic conceptual features rapidly recruit auditory areas even when implicitly presented through visual words. Recognizing words denoting objects for which acoustic features are highly relevant (e.g. "telephone") suffices to ignite cell assemblies in the posterior superior and middle temporal gyrus (pSTG/MTG) that were also activated by listening to real sounds. Activity in pSTG/MTG had an onset of 150 ms and increased parametrically as a function of acoustic feature relevance. Both findings suggest a conceptual origin of this effect rather than post-conceptual strategies such as imagery. The presently demonstrated link between auditory and conceptual brain systems parallels observations in other memory systems suggesting that modality-specificity represents a general organizational principle in cortical memory representation. The understanding of concepts as a partial reinstatement of brain activity during perception stresses the necessity of rich sensory experiences for concept acquisition. The modality-specific nature of concepts could also explain the difficulties in achieving a consensus about overall definitions of abstract concepts such as freedom or justice unless embedded in a concrete, experienced situation.
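As a toy illustration of the parametric effect described above (simulated numbers, not the study's data), the sketch below regresses per-word response amplitude on an acoustic-feature-relevance rating; a positive slope corresponds to activity increasing parametrically with relevance.

```python
import numpy as np

# Placeholder per-word data: a rated relevance of acoustic features for each word
# and a measured response amplitude in pSTG/MTG for that word (simulated here).
rng = np.random.default_rng(2)
relevance = rng.uniform(1, 7, size=80)                    # rating per word
amplitude = 0.4 * relevance + rng.normal(0, 1, size=80)   # simulated responses

# Ordinary least squares: a reliably positive slope is the signature of a
# parametric increase of activity with acoustic feature relevance.
X = np.column_stack([np.ones_like(relevance), relevance])
beta, *_ = np.linalg.lstsq(X, amplitude, rcond=None)
print("intercept %.2f, slope %.2f" % (beta[0], beta[1]))
```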

    Speaker Normalization Using Cortical Strip Maps: A Neural Model for Steady-State Vowel Categorization

    Auditory signals of speech are speaker-dependent, but representations of language meaning are speaker-independent. The transformation from speaker-dependent to speaker-independent language representations enables speech to be learned and understood from different speakers. A neural model is presented that performs speaker normalization to generate a pitch-independent representation of speech sounds, while also preserving information about speaker identity. This speaker-invariant representation is categorized into unitized speech items, which input to sequential working memories whose distributed patterns can be categorized, or chunked, into syllable and word representations. The proposed model fits into an emerging model of auditory streaming and speech categorization. The auditory streaming and speaker normalization parts of the model both use multiple strip representations and asymmetric competitive circuits, thereby suggesting that these two circuits arose from similar neural designs. The normalized speech items are rapidly categorized and stably remembered by Adaptive Resonance Theory circuits. Simulations use synthesized steady-state vowels from the Peterson and Barney [J. Acoust. Soc. Am. 24, 175-184 (1952)] vowel database and achieve accuracy rates similar to those achieved by human listeners. These results are compared to behavioral data and other speaker normalization models. National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624).
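The sketch below is not the strip-map/ART model itself; it only illustrates the underlying speaker-normalization idea with a much simpler stand-in: converting hypothetical Peterson-and-Barney-style vowel measurements into pitch-independent log formant ratios and categorizing them by nearest stored centroid. The centroid values and token are invented for illustration, not real database statistics.

```python
import numpy as np

def normalize(f0, f1, f2):
    """Crude pitch-independent vowel representation: log formant ratios.

    This only illustrates the normalization idea (removing pitch and overall
    vocal-tract scale while preserving vowel identity); the paper's model
    instead uses cortical strip maps and Adaptive Resonance Theory circuits.
    """
    return np.array([np.log(f1 / f0), np.log(f2 / f1)])

def nearest_category(sample, centroids):
    """Assign a normalized vowel token to the closest stored category."""
    names = list(centroids)
    dists = [np.linalg.norm(sample - centroids[n]) for n in names]
    return names[int(np.argmin(dists))]

# Hypothetical category centroids in the normalized space, e.g. learned
# from training tokens of a reference speaker.
centroids = {
    "iy": normalize(130, 270, 2290),
    "aa": normalize(130, 730, 1090),
    "uw": normalize(130, 300, 870),
}

# A higher-pitched speaker producing /aa/: F0 and formants differ from the
# reference, but the normalized token still falls nearest the /aa/ centroid.
token = normalize(220, 850, 1220)
print(nearest_category(token, centroids))   # -> "aa"
```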

    Representation of Sound Categories in Auditory Cortical Maps

    We used functional magnetic resonance imaging (fMRI) to investigate the representation of sound categories in human auditory cortex. Experiment 1 investigated the representation of prototypical and non-prototypical examples of a vowel sound. Listening to prototypical examples of a vowel resulted in less auditory cortical activation than listening to non-prototypical examples. Experiments 2 and 3 investigated the effects of categorization training and discrimination training with novel non-speech sounds on auditory cortical representations. The two training tasks were shown to have opposite effects on the auditory cortical representation of sounds experienced during training: discrimination training led to an increase in the amount of activation caused by the training stimuli, whereas categorization training led to decreased activation. These results indicate that the brain efficiently shifts neural resources away from regions of acoustic space where discrimination between sounds is not behaviorally important (e.g., near the center of a sound category) and toward regions where accurate discrimination is needed. The results also provide a straightforward neural account of learned aspects of categorical perception: sounds from the center of a category are more difficult to discriminate from each other than sounds near category boundaries because they are represented by fewer cells in the auditory cortical areas. National Institute on Deafness and Other Communication Disorders (R01 DC02852).

    Investigating the Neural Basis of Audiovisual Speech Perception with Intracranial Recordings in Humans

    Speech is inherently multisensory, containing auditory information from the voice and visual information from the mouth movements of the talker. Hearing the voice is usually sufficient to understand speech; however, in noisy environments or when audition is impaired due to aging or disability, seeing mouth movements greatly improves speech perception. Although behavioral studies have firmly established this perceptual benefit, it is still not clear how the brain processes visual information from mouth movements to improve speech perception. To clarify this issue, I studied the neural activity recorded from the brain surfaces of human subjects using intracranial electrodes, a technique known as electrocorticography (ECoG). First, I studied responses to noisy speech in the auditory cortex, specifically in the superior temporal gyrus (STG). Previous studies identified the anterior parts of the STG as unisensory, responding only to auditory stimuli. On the other hand, posterior parts of the STG are known to be multisensory, responding to both auditory and visual stimuli, which makes them a key region for audiovisual speech perception. I examined how these different parts of the STG respond to clear versus noisy speech. I found that noisy speech decreased the amplitude and increased the across-trial variability of the response in the anterior STG. However, possibly due to its multisensory composition, the posterior STG was not as sensitive to auditory noise as the anterior STG and responded similarly to clear and noisy speech. I also found that these two response patterns in the STG were separated by a sharp boundary demarcated by the posterior-most portion of Heschl's gyrus. Second, I studied responses to silent speech in the visual cortex. Previous studies demonstrated that visual cortex shows response enhancement when the auditory component of speech is noisy or absent; however, it was not clear which regions of the visual cortex specifically show this response enhancement and whether it is a result of top-down modulation from a higher region. To test this, I first mapped the receptive fields of different regions in the visual cortex and then measured their responses to visual (silent) and audiovisual speech stimuli. I found that visual regions that have central receptive fields show greater response enhancement to visual speech, possibly because these regions receive more visual information from mouth movements. I found similar response enhancement to visual speech in frontal cortex, specifically in the inferior frontal gyrus, premotor and dorsolateral prefrontal cortices, which have been implicated in speech reading in previous studies. I showed that these frontal regions display strong functional connectivity with visual regions that have central receptive fields during speech perception.
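A minimal sketch of the first comparison described above (response amplitude and across-trial variability for clear versus noisy speech at a single anterior STG electrode), using simulated high-gamma responses rather than the actual ECoG recordings; the array shapes and function names are assumptions for illustration.

```python
import numpy as np

# Placeholder high-gamma responses from one anterior STG electrode:
# trials x time samples, for clear and noisy speech conditions.
rng = np.random.default_rng(3)
clear = 1.0 + 0.2 * rng.standard_normal((60, 500))   # stronger, more consistent
noisy = 0.6 + 0.5 * rng.standard_normal((60, 500))   # weaker, more variable

def response_amplitude(trials):
    """Mean response over trials and over the response window."""
    return trials.mean()

def across_trial_variability(trials):
    """Standard deviation of per-trial mean responses across trials."""
    return trials.mean(axis=1).std()

for name, data in [("clear", clear), ("noisy", noisy)]:
    print(name,
          "amplitude %.2f" % response_amplitude(data),
          "variability %.3f" % across_trial_variability(data))
```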

    The human 'pitch center' responds differently to iterated noise and Huggins pitch

    A magnetoencephalographic marker for pitch analysis (the pitch onset response) has been reported for different types of pitch-evoking stimuli, irrespective of whether the acoustic cues for pitch are monaurally or binaurally produced. It is claimed that the pitch onset response reflects a common cortical representation for pitch, putatively in lateral Heschl's gyrus. The results of this functional MRI study cast doubt on this assertion. We report a direct comparison between iterated ripple noise and Huggins pitch in which we reveal a different pattern of auditory cortical activation associated with each pitch stimulus, even when individual variability in structure-function relations is accounted for. Our results suggest it may be premature to assume that lateral Heschl's gyrus is a universal pitch center.