
    Anatomical pathways for auditory memory II: information from rostral superior temporal gyrus to dorsolateral temporal pole and medial temporal cortex

    Auditory recognition memory in non-human primates differs from recognition memory in other sensory systems. Monkeys learn the rule for visual and tactile delayed matching-to-sample within a few sessions, and then show one-trial recognition memory lasting 10–20 min. In contrast, monkeys require hundreds of sessions to master the rule for auditory recognition, and then show retention lasting no longer than 30–40 s. Moreover, unlike the severe effects of rhinal lesions on visual memory, such lesions have no effect on the monkeys' auditory memory performance. The anatomical pathways for auditory memory may thus differ from those in vision. Long-term visual recognition memory requires anatomical connections between the visual association area TE and areas 35 and 36 of the perirhinal cortex (PRC). We examined whether there is a similar anatomical route for auditory processing, or whether poor auditory recognition memory may instead reflect the lack of such a pathway. Our hypothesis is that an auditory pathway for recognition memory originates in the higher-order processing areas of the rostral superior temporal gyrus (rSTG) and then connects via the dorsolateral temporal pole to access the rhinal cortex of the medial temporal lobe. To test this, we placed retrograde (3% FB and 2% DY) and anterograde (10% BDA, 10,000 MW) tracer injections in the rSTG and the dorsolateral area 38DL of the temporal pole. Results showed that area 38DL receives dense projections from auditory association areas Ts1, TAa, and TPO of the rSTG, from the rostral parabelt and, to a lesser extent, from areas Ts2-3 and PGa. In turn, area 38DL projects densely to area 35 of the PRC, to the entorhinal cortex (EC), and to areas TH/TF of the posterior parahippocampal cortex. Significantly, this projection avoids most of area 36r/c of the PRC. This anatomical arrangement may contribute to our understanding of the poor auditory memory of rhesus monkeys.

    The Role of Speech Production System in Audiovisual Speech Perception

    Seeing the articulatory gestures of the speaker significantly enhances speech perception. Findings from recent neuroimaging studies suggest that activation of the speech motor system during lipreading enhances speech perception by tuning, in a top-down fashion, speech-sound processing in the superior aspects of the posterior temporal lobe. Anatomically, the superior-posterior temporal lobe areas receive connections from the auditory, visual, and speech motor cortical areas. Thus, it is possible that neuronal receptive fields are shaped during development to respond to speech-sound features that coincide with visual and motor speech cues, in contrast with the anterior/lateral temporal lobe areas that might process speech sounds predominantly based on acoustic cues. The superior-posterior temporal lobe areas have also been consistently associated with auditory spatial processing. Thus, the involvement of these areas in audiovisual speech perception might partly be explained by the spatial processing requirements of associating sounds, seen articulations, and one’s own motor movements. Tentatively, it is possible that the anterior “what” and posterior “where/how” auditory cortical processing pathways are parts of an interacting network whose instantaneous state determines what one ultimately perceives, as potentially reflected in the dynamics of oscillatory activity.

    An fMRI study of default mode network connectivity in comatose patients

    Functional connectivity within a resting state network of the brain, termed the default mode network (DMN), has been suggested to represent the neural correlate of the stream of consciousness. Altered states of consciousness in which awareness is thought to be absent could therefore provide insight into the function of the DMN. Here I examined functional connectivity in the DMN in both reversible and irreversible coma using fMRI. Twelve healthy control subjects and thirteen comatose patients following cardiac arrest were included in the study. DMN connectivity was observed in healthy controls and in the two patients who regained consciousness; it was absent in the eleven patients who failed to regain consciousness. Functional connectivity in the DMN was thus preserved in the comatose patients who regained consciousness but absent in those who did not recover consciousness, indicating that the DMN may be necessary but not sufficient to support consciousness.
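
    The DMN analysis described above is, at its core, a seed-based functional connectivity computation: correlate every voxel's time series with the average time series of a seed region. The sketch below illustrates that general idea with hypothetical variable names (`bold`, `seed_mask`); it is a minimal illustration under assumed preprocessing, not the study's actual pipeline.

```python
import numpy as np

def seed_connectivity(bold, seed_mask):
    """Seed-based functional connectivity.

    bold      : array (n_timepoints, n_voxels) of preprocessed BOLD time series
    seed_mask : boolean array (n_voxels,) marking the seed region
                (e.g. a posterior cingulate seed for the DMN)
    Returns a (n_voxels,) map of Pearson correlations with the seed.
    """
    seed_ts = bold[:, seed_mask].mean(axis=1)      # average seed time course
    seed_ts = seed_ts - seed_ts.mean()
    data = bold - bold.mean(axis=0)                # demean every voxel
    num = data.T @ seed_ts
    den = np.sqrt((data ** 2).sum(axis=0) * (seed_ts ** 2).sum())
    return num / den                               # correlation map

# Toy example: 200 timepoints, 500 voxels, hypothetical seed = first 20 voxels
rng = np.random.default_rng(0)
bold = rng.standard_normal((200, 500))
mask = np.zeros(500, dtype=bool)
mask[:20] = True
fc_map = seed_connectivity(bold, mask)
```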

    Audiovisual integration in macaque face patch neurons

    Primate social communication depends on the perceptual integration of visual and auditory cues, reflected in the multimodal mixing of sensory signals in certain cortical areas. The macaque cortical face patch network, identified through visual, face-selective responses measured with fMRI, is assumed to contribute to visual social interactions. However, whether face patch neurons are also influenced by acoustic information, such as the auditory component of a natural vocalization, remains unknown. Here, we recorded single-unit activity in the anterior fundus (AF) face patch, in the superior temporal sulcus, and the anterior medial (AM) face patch, on the undersurface of the temporal lobe, in macaques presented with audiovisual, visual-only, and auditory-only renditions of natural movies of macaques vocalizing. The results revealed that 76% of neurons in face patch AF were significantly influenced by the auditory component of the movie, most often through enhancement of visual responses but sometimes in response to the auditory stimulus alone. By contrast, few neurons in face patch AM exhibited significant auditory responses or modulation. Control experiments in AF used an animated macaque avatar to demonstrate, first, that the structural elements of the face were often essential for audiovisual modulation and, second, that the temporal modulation of the acoustic stimulus was more important than its frequency spectrum. Together, these results reveal a striking contrast between the two face patches and point to AF as playing a potential role in the integration of audiovisual cues during natural modes of social communication.
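
    Classifying a unit as "significantly influenced by the auditory component" amounts to comparing its trial-wise responses between audiovisual and visual-only presentations of the same movie. The sketch below shows one conventional way to run such a per-neuron test, assuming trial-wise spike counts and a Mann-Whitney U test; the study's actual statistical criteria may differ.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def audiovisual_modulation(spikes_av, spikes_v, alpha=0.05):
    """Test whether a neuron's firing differs between audiovisual (AV)
    and visual-only (V) presentations of the same movie.

    spikes_av, spikes_v : 1-D arrays of trial-wise spike counts
    Returns (is_modulated, p_value, sign_of_effect).
    """
    stat, p = mannwhitneyu(spikes_av, spikes_v, alternative="two-sided")
    effect = np.mean(spikes_av) - np.mean(spikes_v)   # > 0: auditory enhancement
    return p < alpha, p, np.sign(effect)

# Toy example: the auditory component enhances the visual response
rng = np.random.default_rng(1)
av = rng.poisson(12, size=40)   # spike counts on AV trials
v = rng.poisson(8, size=40)     # spike counts on V-only trials
print(audiovisual_modulation(av, v))
```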

    You talkin' to me? Communicative talker gaze activates left-lateralized superior temporal cortex during perception of degraded speech.

    Neuroimaging studies of speech perception have consistently indicated a left-hemisphere dominance in the temporal lobes' responses to intelligible auditory speech signals (McGettigan and Scott, 2012). However, there are important communicative cues that cannot be extracted from auditory signals alone, including the direction of the talker's gaze. Previous work has implicated the superior temporal cortices in processing gaze direction, with evidence for predominantly right-lateralized responses (Carlin and Calder, 2013). The aim of the current study was to investigate whether the lateralization of responses to talker gaze differs in an auditory communicative context. Participants in a functional MRI experiment watched and listened to videos of spoken sentences in which the auditory intelligibility and talker gaze direction were manipulated factorially. We observed a left-dominant temporal lobe sensitivity to the talker's gaze direction, in which the left anterior superior temporal sulcus/gyrus and temporal pole showed an enhanced response to direct gaze. Further investigation revealed that this pattern of lateralization was modulated by auditory intelligibility. Our results suggest flexibility in the distribution of neural responses to social cues in the face within the context of a challenging speech perception task.
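
    Hemispheric dominance of this kind is commonly summarized with a lateralization index contrasting homologous left- and right-hemisphere responses. The sketch below shows the standard formula applied to hypothetical direct-vs-averted gaze effects; it illustrates the general measure, not the exact contrast computed in the study.

```python
import numpy as np

def lateralization_index(left, right):
    """Standard lateralization index: LI = (L - R) / (L + R).

    left, right : summary responses (e.g. mean contrast estimates for
                  direct vs. averted gaze) in homologous left/right ROIs.
    LI > 0 indicates left dominance, LI < 0 right dominance.
    """
    left, right = np.asarray(left, float), np.asarray(right, float)
    return (left - right) / (left + right)

# Toy example: gaze effect per hemisphere, split by speech intelligibility
li_intelligible = lateralization_index(0.8, 0.3)   # ~0.45, strongly left-dominant
li_degraded = lateralization_index(0.5, 0.4)       # ~0.11, weaker lateralization
print(li_intelligible, li_degraded)
```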

    Advances in the neurocognition of music and language

    Doctor of Philosophy

    The primate auditory system is responsible for analyzing complex patterns of pressure differences and then synthesizing this information into a behaviorally relevant representation of the external world. How the auditory cortex accomplishes this complex task is unknown. This thesis examines the neural mechanisms underlying auditory perception in the primate auditory cortex, focusing on the neural representation of communication sounds. The thesis is composed of three studies of auditory cortical processing in the macaque and human. The first examines coding in primary and tertiary auditory cortex as it relates to the possibility of developing a stimulating auditory neural prosthesis. The second study applies an information theoretic approach to understanding information transfer between primary and tertiary auditory cortex. The final study examines visual influences on human tertiary auditory cortical processing during illusory audiovisual speech perception. Together, these studies provide insight into the cortical physiology underlying sound perception and into the creation of a stimulating cortical neural prosthesis for the deaf.
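
    An information theoretic analysis of transfer between two auditory areas typically rests on quantities such as the mutual information between their responses. The sketch below shows a basic plug-in (histogram) estimate of mutual information between two discretized response variables; it illustrates the general technique under assumed inputs, not the specific analysis used in the thesis.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in estimate of mutual information I(X;Y) in bits.

    x, y : 1-D arrays of paired responses (e.g. trial-wise spike counts
           in primary and tertiary auditory cortex).
    Note: the naive histogram estimator is biased upward for small samples.
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)            # marginal of x
    py = pxy.sum(axis=0, keepdims=True)            # marginal of y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Toy example: correlated responses carry information about each other
rng = np.random.default_rng(2)
a1 = rng.poisson(10, size=1000)             # "primary" responses
a3 = a1 + rng.poisson(3, size=1000)         # "tertiary" responses, dependent on a1
print(mutual_information(a1, a3))
```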

    On the encoding of natural music in computational models and human brains

    This article discusses recent developments and advances in the neuroscience of music aimed at understanding the nature of musical emotion. In particular, it highlights how system identification techniques and computational models of music have advanced our understanding of how the human brain processes the textures and structures of music and how the processed information evokes emotions. Musical models relate physical properties of stimuli to internal representations called features, and predictive models relate features to neural or behavioral responses and test their predictions against independent unseen data. The new frameworks do not require orthogonalized stimuli in controlled experiments to establish reproducible knowledge, which has opened up a new wave of naturalistic neuroscience. The current review focuses on how this trend has transformed the domain of the neuroscience of music.
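
    The feature-based predictive models described above typically take the form of a linearized encoding model: stimulus features (e.g. a spectrogram or music-theoretic descriptors) are regressed onto neural responses on training data, and prediction accuracy is evaluated on held-out, unseen stimuli. The sketch below illustrates this pattern with ridge regression and hypothetical arrays; it is a generic example of the approach, not a specific model from the literature reviewed.

```python
import numpy as np

def fit_encoding_model(features, response, alpha=10.0):
    """Ridge-regression encoding model: response ≈ features @ w.

    features : (n_samples, n_features) stimulus features over time
    response : (n_samples,) neural or behavioral response
    """
    X, y = np.asarray(features, float), np.asarray(response, float)
    n_feat = X.shape[1]
    w = np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ y)
    return w

def predict_and_score(w, features_test, response_test):
    """Correlation between predicted and observed responses on unseen data."""
    pred = features_test @ w
    return float(np.corrcoef(pred, response_test)[0, 1])

# Toy example: fit on the first 800 samples, test on the held-out remainder
rng = np.random.default_rng(3)
X = rng.standard_normal((1000, 20))            # e.g. 20 acoustic features
w_true = rng.standard_normal(20)
y = X @ w_true + rng.standard_normal(1000)     # simulated response
w = fit_encoding_model(X[:800], y[:800])
print(predict_and_score(w, X[800:], y[800:]))  # prediction accuracy on unseen data
```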

    Tonotopic maps in human auditory cortex using arterial spin labeling

    A tonotopic organization of the human auditory cortex (AC) has been reliably found by neuroimaging studies. However, a full characterization and parcellation of the AC is still lacking. In this study, we employed pseudo-continuous arterial spin labeling (pCASL) to map tonotopy and voice-selective regions using, for the first time, cerebral blood flow (CBF). We demonstrated the feasibility of CBF-based tonotopy and found good agreement with BOLD signal-based tonotopy, despite the lower contrast-to-noise ratio of CBF. Quantitative perfusion mapping of baseline CBF showed a region of high perfusion centered on Heschl's gyrus and corresponding to the main high-low-high frequency gradients, co-located with the presumed primary auditory core and suggesting baseline CBF as a novel marker for AC parcellation. Furthermore, susceptibility weighted imaging was employed to investigate the tissue specificity of CBF and the BOLD signal and the possible venous bias of BOLD-based tonotopy. We found a higher percentage of vein contamination for voxels active only in BOLD than for voxels active only in CBF. Taken together, we demonstrated that both baseline and stimulus-induced CBF offer an alternative fMRI approach to the standard BOLD signal for studying auditory processing and delineating the functional organization of the auditory cortex.
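
    Whether the signal is BOLD or CBF, tonotopic mapping ultimately assigns each voxel the stimulus frequency that drives it most strongly (its best frequency), and the resulting gradients are then read out across the cortical surface. The sketch below shows that assignment step with hypothetical response arrays; it is a simplified illustration of the mapping principle, not the pCASL analysis pipeline of the study.

```python
import numpy as np

def best_frequency_map(responses, frequencies):
    """Assign each voxel its best (preferred) frequency.

    responses   : (n_frequencies, n_voxels) response amplitude (BOLD or CBF)
                  to each stimulus frequency
    frequencies : (n_frequencies,) center frequencies in Hz
    Returns a (n_voxels,) array of best frequencies.
    """
    responses = np.asarray(responses, float)
    return np.asarray(frequencies)[np.argmax(responses, axis=0)]

# Toy example: 6 stimulus frequencies, 1000 voxels with noisy frequency tuning
rng = np.random.default_rng(4)
freqs = np.array([250, 500, 1000, 2000, 4000, 8000])
preferred = rng.integers(0, 6, size=1000)            # simulated voxel tuning
resp = rng.standard_normal((6, 1000)) * 0.5
resp[preferred, np.arange(1000)] += 2.0              # peak at the preferred frequency
bf_map = best_frequency_map(resp, freqs)             # basis for frequency-gradient maps
```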