458 research outputs found

    Who is that? Brain networks and mechanisms for identifying individuals

    Social animals can identify conspecifics by many forms of sensory input. However, whether the neuronal computations that support this ability to identify individuals rely on modality-independent convergence or involve ongoing synergistic interactions along the multiple sensory streams remains controversial. Direct neuronal measurements at relevant brain sites could address such questions, but this requires better bridging of the work in humans and animal models. Here, we review recent studies in nonhuman primates on voice- and face-identity-sensitive pathways and evaluate how they correspond to relevant findings in humans. This synthesis provides insights into converging sensory streams in the primate anterior temporal lobe (ATL) for identity processing. Furthermore, we advance a model and suggest how alternative neuronal mechanisms could be tested.

    Audiovisual temporal correspondence modulates human multisensory superior temporal sulcus plus primary sensory cortices

    The brain should integrate related but not unrelated information from different senses. Temporal patterning of inputs to different modalities may provide critical information about whether those inputs are related or not. We studied the effects of temporal correspondence between auditory and visual streams on human brain activity with functional magnetic resonance imaging (fMRI). Streams of visual flashes with irregularly jittered, arrhythmic timing could appear on the right or left, either with a stream of auditory tones that coincided perfectly with the flashes (highly unlikely by chance), with a stream that was noncoincident with vision (a different erratic, arrhythmic pattern with the same temporal statistics), or with no auditory stream; an auditory stream could also appear alone. fMRI revealed blood oxygenation level-dependent (BOLD) increases in the multisensory superior temporal sulcus (mSTS) contralateral to a visual stream when it coincided with an auditory stream, and BOLD decreases for noncoincidence relative to unisensory baselines. Contralateral primary visual cortex and auditory cortex were also affected by audiovisual temporal correspondence or noncorrespondence, as confirmed in individual subjects. Connectivity analyses indicated enhanced influence from mSTS on primary sensory areas, rather than vice versa, during audiovisual correspondence. Thus, temporal correspondence between auditory and visual streams affects a network of both multisensory (mSTS) and sensory-specific areas in humans, including even primary visual and auditory cortex, with stronger responses for corresponding, and thus related, audiovisual inputs.
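    As a purely illustrative sketch (not code from the study), the stimulus timing described above can be thought of as follows: an arrhythmic visual stream, an auditory stream that coincides with it exactly, and a noncoincident auditory stream built from the same inter-onset intervals (hence the same temporal statistics) in a different order. All names and parameter values below are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    def jittered_onsets(n_events, mean_ioi=0.8, jitter=0.4):
        """Arrhythmic onsets: inter-onset intervals drawn uniformly around
        a mean and cumulatively summed into onset times (in seconds)."""
        iois = rng.uniform(mean_ioi - jitter, mean_ioi + jitter, size=n_events)
        return np.cumsum(iois), iois

    # Visual stream with irregularly jittered, arrhythmic timing.
    visual_onsets, visual_iois = jittered_onsets(n_events=40)

    # Coincident auditory stream: identical onsets (perfect correspondence).
    auditory_coincident = visual_onsets.copy()

    # Noncoincident auditory stream: same IOI distribution (same temporal
    # statistics) but a different erratic pattern, obtained by shuffling.
    auditory_noncoincident = np.cumsum(rng.permutation(visual_iois))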

    Review: Do the Different Sensory Areas within the Cat Anterior Ectosylvian Sulcal Cortex Collectively Represent a Network Multisensory Hub?

    Current theory holds that the numerous functional areas of the cerebral cortex are organized and function as a network. Using connectional databases and computational approaches, the cerebral network has been shown to exhibit a hierarchical structure composed of areas, clusters and, ultimately, hubs. Hubs are highly connected, higher-order regions that also facilitate communication between different sensory modalities. One region computationally identified as a network hub is the visual area of the Anterior Ectosylvian Sulcal cortex (AESc) of the cat. The Anterior Ectosylvian Visual area (AEV) is but one component of the AESc, which also includes auditory (Field of the Anterior Ectosylvian Sulcus - FAES) and somatosensory (Fourth somatosensory representation - SIV) areas. To better understand the nature of cortical network hubs, the present report reviews the biological features of the AESc. Within the AESc, each area has extensive external cortical connections as well as connections with the other AESc areas. Each of these core representations is separated by a transition zone characterized by bimodal neurons that share sensory properties of both adjoining core areas. Finally, the core and transition zones are underlain by a continuous sheet of layer 5 neurons that project to common output structures. Altogether, these shared properties suggest that the collective AESc region represents a multiple sensory/multisensory cortical network hub. Ultimately, such an interconnected, composite structure adds complexity and biological detail to the understanding of cortical network hubs and their function in cortical processing.
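    To make the notion of a network hub concrete, here is a minimal sketch: a hub should have both a high degree and connections that span multiple sensory modalities. The area labels, modality assignments, and connection matrix below are invented for illustration and are not taken from the review.

    import numpy as np

    # Hypothetical binary connection matrix (1 = reported projection) among
    # a handful of areas; entries are illustrative only.
    areas    = ["AEV", "FAES", "SIV", "A1", "V1", "S1"]
    modality = ["vis", "aud", "som", "aud", "vis", "som"]
    C = np.array([
        [0, 1, 1, 0, 1, 0],   # AEV
        [1, 0, 1, 1, 0, 0],   # FAES
        [1, 1, 0, 0, 0, 1],   # SIV
        [0, 1, 0, 0, 0, 0],   # A1
        [1, 0, 0, 0, 0, 0],   # V1
        [0, 0, 1, 0, 0, 0],   # S1
    ])

    degree = C.sum(axis=1)
    # Hubs should link areas of different modalities, not just many areas,
    # so also count how many distinct modalities each area reaches.
    span = [len({modality[j] for j in np.flatnonzero(row)}) for row in C]

    for name, d, s in zip(areas, degree, span):
        print(f"{name}: degree={d}, modalities reached={s}")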

    Cortical mechanisms of seeing and hearing speech

    In face-to-face communication, speech is perceived through both the eyes and the ears: the talker's articulatory gestures are seen while the speech sounds are heard simultaneously. Whilst acoustic speech can often be understood without visual information, viewing articulatory gestures aids hearing substantially in noisy conditions. On the other hand, speech can be understood, to some extent, by solely viewing articulatory gestures (i.e., by speechreading). In this thesis, electroencephalography (EEG), magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) were utilized to disclose cortical mechanisms of seeing and hearing speech. One of the major challenges of modern cognitive neuroscience is to find out how the brain integrates inputs from different senses. In this thesis, integration of seen and heard speech was investigated using EEG and MEG. Multisensory interactions were found in the sensory-specific cortices at early latencies and in the multisensory regions at late latencies. Viewing another person's actions activates regions belonging to the human mirror neuron system (MNS), which are also activated when subjects themselves perform actions. Possibly, the human MNS enables simulation of another person's actions, which might also be important for speech recognition. In this thesis, it was demonstrated with MEG that seeing speech modulates activity in the mouth region of the primary somatosensory cortex (SI), suggesting that the SI cortex is also involved in simulating another person's articulatory gestures during speechreading. The question of whether there are speech-specific mechanisms in the human brain has been under scientific debate for decades. In this thesis, evidence for a speech-specific neural substrate in the left posterior superior temporal sulcus (STS) was obtained using fMRI: activity in this region was greater when subjects heard acoustic sine-wave speech stimuli as speech than when they heard the same stimuli as non-speech.
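    The multisensory interactions mentioned above are conventionally tested by comparing the audiovisual response with the sum of the unisensory responses (AV versus A + V) at each latency. The sketch below uses simulated single-trial evoked responses with invented dimensions; it is not the thesis's actual analysis.

    import numpy as np

    rng = np.random.default_rng(1)
    n_trials, n_times = 60, 300  # e.g., 300 samples spanning an epoch

    # Simulated single-trial evoked responses (placeholder data).
    auditory    = rng.normal(0.0, 1.0, (n_trials, n_times))
    visual      = rng.normal(0.0, 1.0, (n_trials, n_times))
    audiovisual = rng.normal(0.2, 1.0, (n_trials, n_times))

    # Interaction term AV - (A + V): values that differ reliably from zero
    # at a given latency indicate a nonadditive multisensory interaction.
    interaction = audiovisual.mean(axis=0) - (
        auditory.mean(axis=0) + visual.mean(axis=0)
    )
    print("peak absolute interaction:", np.abs(interaction).max())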

    Species-dependent role of crossmodal connectivity among the primary sensory cortices

    When a major sense is lost, crossmodal plasticity substitutes functional processing from the remaining, intact senses. Recent studies of deafness-induced crossmodal plasticity in different subregions of auditory cortex indicate that the phenomenon is largely based on the “unmasking” of existing inputs. However, there is not yet a consensus on the sources or effects of crossmodal inputs to primary sensory cortical areas. In the present review, a rigorous re-examination of the experimental literature indicates that connections between different primary sensory cortices consistently occur in rodents, while primary-to-primary projections are absent or inconsistent in non-rodents such as cats and monkeys. These observations suggest that crossmodal plasticity involving primary sensory areas is likely to exhibit species-specific distinctions.

    Neural pathways for visual speech perception

    This paper examines two questions: what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread, diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA), has been demonstrated in posterior temporal cortex, ventral and posterior to the multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in the TVSA.

    Investigating the Neural Basis of Audiovisual Speech Perception with Intracranial Recordings in Humans

    Speech is inherently multisensory, containing auditory information from the voice and visual information from the mouth movements of the talker. Hearing the voice is usually sufficient to understand speech; however, in noisy environments or when audition is impaired due to aging or disability, seeing mouth movements greatly improves speech perception. Although behavioral studies have firmly established this perceptual benefit, it is still not clear how the brain processes visual information from mouth movements to improve speech perception. To clarify this issue, I studied the neural activity recorded from the brain surfaces of human subjects using intracranial electrodes, a technique known as electrocorticography (ECoG). First, I studied responses to noisy speech in the auditory cortex, specifically in the superior temporal gyrus (STG). Previous studies identified the anterior parts of the STG as unisensory, responding only to auditory stimuli. The posterior parts of the STG, on the other hand, are known to be multisensory, responding to both auditory and visual stimuli, which makes them a key region for audiovisual speech perception. I examined how these different parts of the STG respond to clear versus noisy speech. I found that noisy speech decreased the amplitude and increased the across-trial variability of the response in the anterior STG. However, possibly due to its multisensory composition, the posterior STG was not as sensitive to auditory noise as the anterior STG and responded similarly to clear and noisy speech. I also found that these two response patterns in the STG were separated by a sharp boundary demarcated by the posterior-most portion of Heschl's gyrus. Second, I studied responses to silent speech in the visual cortex. Previous studies demonstrated that the visual cortex shows response enhancement when the auditory component of speech is noisy or absent; however, it was not clear which regions of the visual cortex show this response enhancement or whether it results from top-down modulation by a higher-order region. To test this, I first mapped the receptive fields of different regions in the visual cortex and then measured their responses to visual (silent) and audiovisual speech stimuli. I found that visual regions with central receptive fields show greater response enhancement to visual speech, possibly because these regions receive more visual information from mouth movements. I found similar response enhancement to visual speech in the frontal cortex, specifically in the inferior frontal gyrus and the premotor and dorsolateral prefrontal cortices, which have been implicated in speech reading in previous studies. I showed that these frontal regions display strong functional connectivity with visual regions that have central receptive fields during speech perception.
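    The two response measures used above, amplitude and across-trial variability, can be sketched roughly as follows; the high-gamma trial matrices here are simulated placeholders, not the ECoG data from the thesis.

    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical single-trial high-gamma responses from one electrode
    # (trials x time samples) for clear and for noisy speech.
    clear = rng.normal(1.0, 0.3, (50, 200))
    noisy = rng.normal(0.6, 0.6, (50, 200))

    def summarize(trials):
        """Mean response amplitude and across-trial variability
        (standard deviation of the per-trial mean response)."""
        per_trial = trials.mean(axis=1)
        return per_trial.mean(), per_trial.std(ddof=1)

    for label, data in [("clear", clear), ("noisy", noisy)]:
        amp, sd = summarize(data)
        print(f"{label}: amplitude={amp:.2f}, across-trial SD={sd:.2f}")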

    Combined diffusion-weighted and functional magnetic resonance imaging reveals a temporal-occipital network involved in auditory-visual object processing

    Functional magnetic resonance imaging (fMRI) has shown that the superior temporal and occipital cortices are involved in multisensory integration. Probabilistic fiber tracking based on diffusion-weighted MRI suggests that multisensory processing is supported by white matter connections between auditory cortex and the temporal and occipital lobes. Here, we present a combined functional MRI and probabilistic fiber tracking study that reveals multisensory processing mechanisms that remained undetected by either technique alone. Ten healthy participants passively observed visually presented lip or body movements, heard speech or body action sounds, or were exposed to a combination of both. Bimodal stimulation engaged a temporal-occipital brain network including the multisensory superior temporal sulcus (msSTS), the lateral superior temporal gyrus (lSTG), and the extrastriate body area (EBA). A region-of-interest (ROI) analysis showed multisensory interactions (e.g., subadditive responses to bimodal compared to unimodal stimuli) in the msSTS, the lSTG, and the EBA. Moreover, sounds elicited responses in the medial occipital cortex. Probabilistic tracking revealed white matter tracts between the auditory cortex and the medial occipital cortex, the inferior occipital cortex (IOC), and the superior temporal sulcus (STS). However, the STS terminations of auditory cortex tracts showed limited overlap with the msSTS region. Instead, the msSTS was connected to primary sensory regions via intermediate nodes in the temporal and occipital cortex. Similarly, the lSTG and EBA regions showed limited direct white matter connections but instead were connected via intermediate nodes. Our results suggest that multisensory processing in the STS is mediated by separate brain areas that form a distinct network in the lateral temporal and inferior occipital cortex.
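    The subadditivity criterion used in the ROI analysis (a bimodal response smaller than the sum of the unimodal responses) amounts to a simple contrast on per-subject ROI estimates. The sketch below uses invented beta values for ten hypothetical subjects, not the study's data.

    import numpy as np
    from scipy import stats

    # Hypothetical per-subject ROI response estimates (e.g., beta weights)
    # for auditory-only, visual-only, and audiovisual conditions.
    beta_a  = np.array([0.8, 1.1, 0.9, 1.0, 0.7, 1.2, 0.9, 1.0, 0.8, 1.1])
    beta_v  = np.array([0.6, 0.9, 0.7, 0.8, 0.5, 1.0, 0.7, 0.9, 0.6, 0.8])
    beta_av = np.array([1.0, 1.4, 1.2, 1.3, 0.9, 1.6, 1.2, 1.4, 1.0, 1.3])

    # Subadditive multisensory interaction: AV < A + V across subjects.
    diff = beta_av - (beta_a + beta_v)
    t, p = stats.ttest_1samp(diff, 0.0)
    print(f"mean AV - (A+V) = {diff.mean():.2f}, t = {t:.2f}, p = {p:.3f}")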

    Exploring the Structural and Functional Organization of the Dorsal Zone of Auditory Cortex in Hearing and Deafness

    Recent neuroscientific research has focused on cortical plasticity, which refers to the ability of the cerebral cortex to adapt as a consequence of experience. Over the past decade, an increasing number of studies have convincingly shown that the brain can adapt to the loss or impairment of a sensory system, resulting in expanded or heightened abilities of the remaining senses. A particular region in cat auditory cortex, the dorsal zone (DZ), has been shown to mediate enhanced visual motion detection in deaf animals. The purpose of this thesis is to further our understanding of the structure and function of DZ in both hearing and deaf animals, in order to better understand how the brain compensates following insult or injury to a sensory system, with the ultimate goal of improving the utility of sensory prostheses. First, I demonstrate that the brain connectivity profile of animals with early- and late-onset deafness is similar to that of hearing animals, but that the projection strength to visual brain regions involved in motion processing increases as a consequence of deafness. Second, I evaluate the functional impact of the strongest auditory connections to area DZ using reversible deactivation and electrophysiological recordings, and show that projections ultimately originating in primary auditory cortex (A1) form much of the basis of the response of DZ neurons to auditory stimulation. Third, I show that almost half of the neurons in DZ are influenced by visual or somatosensory information, and that this modulation by other sensory systems can have effects that are opposite in direction during different portions of the auditory response. I also show that techniques that incorporate the responses of multiple neurons, such as multi-unit and local field potential recordings, may vastly overestimate the degree to which multisensory processing occurs in a given brain region. Finally, I confirm that individual neurons in DZ become responsive mainly to visual stimulation following deafness. Together, these results shed light on the function and structural organization of area DZ in both hearing and deaf animals, and will contribute to the development of a comprehensive model of cross-modal plasticity.
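    The crossmodal modulation described above is often quantified with a simple modulation index on spike counts, comparing audiovisual with auditory-alone trials. The sketch below uses invented Poisson counts for a single hypothetical DZ neuron; it is not the analysis from the thesis.

    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical spike counts from one neuron across repeated trials.
    aud_alone  = rng.poisson(lam=8.0, size=30)   # auditory stimulus only
    aud_visual = rng.poisson(lam=11.0, size=30)  # auditory + visual stimulus

    # Modulation index: positive values indicate visual enhancement of the
    # auditory response, negative values indicate suppression.
    mi = (aud_visual.mean() - aud_alone.mean()) / (
        aud_visual.mean() + aud_alone.mean()
    )
    print(f"multisensory modulation index: {mi:+.2f}")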

    Functional Imaging Reveals Visual Modulation of Specific Fields in Auditory Cortex
