10 research outputs found

    Causal Processes Underlying Unimodal and Multimodal Language

    Full text link
    Language, including speech production and perception, is a major cognitive function necessary for a healthy social and vocational outlook. Approximately 5-10% of the American population experiences communication disorders, which can manifest as hearing impairments, difficulty speaking, speech impairments such as stuttering, and more complex language disorders (Ruben, 2009). Given the high prevalence of communication disorders in the United States and the crucial role that language plays in everyday life, it is important to investigate the underlying neural processes and mechanisms that support this social function, as well as the brain regions and networks involved. A deeper understanding of these mechanisms and their structural correlates can help identify the numerous ways in which language functions may be impaired by disorder, disease, or injury. By identifying which specific components of the process are affected by neural damage, researchers may gain insight into new ways to treat and rehabilitate language impairments and into devices that can assist in living with these deficits. In this dissertation, I focus on two aspects of language that are relevant to clinical deficits: semantic naming and audiovisual speech integration. Across these two critical components of language, I discuss three lines of research that examine the causal roles of the brain regions involved in these unimodal and multimodal language functions. In Study 1, I employ a causal method, voxel-based lesion-symptom mapping, in patients with intrinsic brain tumors to show that the left middle temporal gyrus (MTG) is the primary locus of semantic naming. This finding is consistent with established results in the stroke lesion literature and demonstrates the validity of the brain tumor model in lesion mapping. In Study 2, I extend the scope of language causality to audiovisual speech integration using the same brain tumor model. Audiovisual speech integration is a highly relevant form of multisensory integration: it merges information from separate unisensory modalities into a single coherent percept and is an important part of how the brain processes sensory information. Using lesion mapping, I examine which brain regions are critically responsible for audiovisual speech integration behaviors, asking whether the merging of conflicting audiovisual speech and the processing of congruent audiovisual speech rely on the same integration mechanism. This study challenges the widely held assumption that these two forms of audiovisual processing reflect a single integration mechanism. Lastly, in Study 3, I extend the test of the causal brain regions involved in audiovisual speech to healthy individuals. In this study, single-pulse transcranial magnetic stimulation was applied to disrupt cortical activity in the left posterior superior temporal sulcus (pSTS), a region widely believed to be the hub of multisensory speech processing. I show that inhibitory stimulation to this multisensory zone can disrupt the fusing of conflicting audiovisual speech while having no effect on the processing of congruent audiovisual speech. These findings point to a dissociation in neural mechanisms between the two audiovisual integration processes and demonstrate that the pSTS is only one of multiple critical areas necessary for audiovisual speech interactions.
    PhD, Psychology, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/178100/1/eunahn_1.pd
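    Study 1's method, voxel-based lesion-symptom mapping (VLSM), is at heart a mass-univariate comparison: at each voxel, patients whose lesions include that voxel are compared on the behavioral measure against patients whose lesions spare it. Below is a minimal sketch of that logic; the function name, inputs, and plain two-sample t-test are illustrative assumptions, not the dissertation's actual pipeline (published VLSM analyses typically add permutation-based correction for multiple comparisons).

```python
import numpy as np
from scipy import stats

def vlsm_map(lesion_masks, scores, min_n=5):
    """Mass-univariate VLSM sketch (illustrative, not the study's pipeline).

    lesion_masks : (n_patients, n_voxels) binary array, 1 = voxel lesioned
    scores       : (n_patients,) behavioral scores, e.g. naming accuracy
    min_n        : skip voxels with fewer than min_n patients in either group
    """
    n_voxels = lesion_masks.shape[1]
    t_map = np.full(n_voxels, np.nan)
    p_map = np.full(n_voxels, np.nan)
    for v in range(n_voxels):
        lesioned = lesion_masks[:, v].astype(bool)
        # A stable test needs enough patients both with and without damage here.
        if lesioned.sum() < min_n or (~lesioned).sum() < min_n:
            continue
        t, p = stats.ttest_ind(scores[lesioned], scores[~lesioned])
        t_map[v], p_map[v] = t, p
    # Voxels where lesioned patients score reliably worse implicate that
    # region in the behavior; p_map still needs multiple-comparison correction.
    return t_map, p_map
```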

    Joint Encoding of Auditory Timing and Location in Visual Cortex

    No full text
    Co-occurring sounds can facilitate perception of spatially and temporally correspondent visual events. Separate lines of research have identified two putatively distinct neural mechanisms underlying two types of crossmodal facilitation: whereas crossmodal phase resetting is thought to underlie enhancements based on temporal correspondences, lateralized occipital event-related potentials (ERPs) are thought to reflect enhancements based on spatial correspondences. Here, we sought to clarify the relationship between these two effects to assess whether they reflect two distinct mechanisms or, rather, two facets of the same underlying process. To identify the neural generators of each effect, we examined crossmodal responses to lateralized sounds in visually responsive cortex of 22 patients using electrocorticographic recordings. Auditory-driven phase reset and ERP responses in visual cortex displayed similar topography, revealing significant activity in pericalcarine, inferior occipital-temporal, and posterior parietal cortex, with maximal activity in lateral occipitotemporal cortex (potentially V5/hMT+). Laterality effects showed similar but less widespread topography. To test whether lateralized and nonlateralized components of crossmodal ERPs emerged from common or distinct neural generators, we compared responses throughout visual cortex. Visual electrodes responded to both contralateral and ipsilateral sounds with a contralateral bias, suggesting that previously observed laterality effects do not emerge from a distinct neural generator but rather reflect laterality-biased responses in the same neural populations that produce phase-resetting responses. These results suggest that the crossmodal phase reset and ERP responses previously linked to temporal and spatial facilitation in visual cortex, respectively, may reflect the same underlying mechanism. We propose a new unified model to account for these and previous results.
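    The phase-reset effect described above is commonly quantified with inter-trial phase coherence (ITC): if a sound resets ongoing oscillations, the oscillatory phase becomes consistent across trials after sound onset. The sketch below shows that standard measure, assuming trials of ECoG voltage aligned to sound onset; the alpha-band default and all names are illustrative, not taken from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def inter_trial_phase_coherence(trials, fs, band=(8.0, 12.0)):
    """ITC sketch: band-pass each trial, extract the analytic phase, and
    average unit phase vectors across trials at every time point.

    trials : (n_trials, n_samples) voltage traces aligned to sound onset
    fs     : sampling rate in Hz
    band   : frequency band of interest (alpha here, as an assumption)
    """
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, trials, axis=1)
    phase = np.angle(hilbert(filtered, axis=1))
    # |mean of unit phase vectors| is ~0 for random phase across trials
    # and approaches 1 when the stimulus fully resets the phase.
    return np.abs(np.mean(np.exp(1j * phase), axis=0))
```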

    Visual cortex responds to sound onset and offset during passive listening

    No full text
    Sounds enhance our ability to detect, localize, and respond to co-occurring visual targets. Research suggests that sounds improve visual processing by resetting the phase of ongoing oscillations in visual cortex. However, it remains unclear what information is relayed from the auditory system to visual areas and whether sounds modulate visual activity even in the absence of visual stimuli (e.g., during passive listening). Using intracranial electroencephalography (iEEG) in humans, we examined the sensitivity of visual cortex to three forms of auditory information during a passive listening task: auditory onset responses, auditory offset responses, and rhythmic entrainment to sounds. Because some auditory neurons respond to both sound onsets and offsets, visual timing and duration processing may benefit from each. In addition, if auditory entrainment information is relayed to visual cortex, it could support the processing of complex stimulus dynamics that are aligned between auditory and visual stimuli. Results demonstrate that, in visual cortex, amplitude-modulated sounds elicited transient onset and offset responses in multiple areas but no entrainment to sound modulation frequencies. These findings suggest that activity in visual cortex (as measured with iEEG in response to auditory stimuli) may not be affected by temporally fine-grained auditory stimulus dynamics during passive listening (though it remains possible that this signal may be observable with simultaneous auditory-visual stimuli). Moreover, auditory responses were maximal in low-level visual cortex, potentially implicating a direct pathway for rapid interactions between auditory and visual cortices. This mechanism may facilitate perception by time-locking visual computations to environmental events marked by auditory discontinuities.
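    Entrainment like that tested here is typically assessed as a spectral peak at the sound's modulation frequency, relative to neighboring frequencies, in the trial-averaged (phase-locked) response. A minimal sketch of one common version of that test follows; it is an illustration under those assumptions, not the authors' analysis.

```python
import numpy as np

def entrainment_snr(trials, fs, mod_freq):
    """Spectral SNR at the modulation frequency (illustrative sketch).

    trials   : (n_trials, n_samples) traces during the sustained sound
    fs       : sampling rate in Hz
    mod_freq : amplitude-modulation rate of the sound in Hz
    """
    evoked = trials.mean(axis=0)  # averaging keeps only phase-locked activity
    spectrum = np.abs(np.fft.rfft(evoked))
    freqs = np.fft.rfftfreq(evoked.size, d=1.0 / fs)
    target = int(np.argmin(np.abs(freqs - mod_freq)))
    # Compare the target bin to nearby bins, skipping immediate neighbors.
    neighbors = np.r_[target - 5:target - 1, target + 2:target + 6]
    # SNR well above 1 indicates entrainment; ~1 indicates none, as the
    # visual-cortex results reported here suggest.
    return spectrum[target] / spectrum[neighbors].mean()
```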

    Visual cortex responds to transient sound changes without encoding complex auditory dynamics

    No full text
    Sounds enhance visual cortical sensitivity to co-occurring visual signals. Previous research has demonstrated that this facilitation occurs through crossmodal modulation of cortical oscillatory activity. However, the neural origin of these signals and the auditory information conveyed by this mechanism remain poorly understood. Using intracranial electroencephalography (iEEG) in humans, we examined the sensitivity of visual cortex to three different forms of auditory information: rhythmic entrainment to sounds, auditory onset responses, and auditory offset responses. Subcortical auditory neurons exhibit frequency-following behavior in response to amplitude-modulated sounds, with oscillatory activity entrained at the rhythmic rate of the auditory signal. This auditory response is paralleled in the visual system by the entrainment of visual neurons to the rhythmic rate of flashing strobe lights. In contrast, ~20% of neurons in auditory cortex do not entrain to amplitude modulations but respond only to the onsets and/or offsets of auditory stimuli. In visual cortex, amplitude-modulated sounds elicited transient onset and offset responses in multiple areas but no entrainment to the sounds' modulation frequencies. These results suggest that the auditory information conveyed to visual cortex does not include the temporally fine-grained stimulus dynamics encoded by the auditory midbrain and thalamus but, rather, a temporally segmented representation of auditory events that emerges only in auditory cortex. Crossmodal responses were maximal in low-level visual cortex, potentially implicating a direct pathway for rapid interactions between low-to-mid-level auditory and visual cortices. This mechanism may facilitate perception by time-locking visual computations to environmental events marked by discontinuities in auditory input.
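    Transient onset and offset responses like those reported here are often detected as post-event deviations in signal amplitude from a pre-event baseline, tested across trials. The sketch below shows that generic approach (window means compared with a paired t-test); the window lengths and names are assumptions, not the study's parameters.

```python
import numpy as np
from scipy import stats

def transient_response(trials, fs, event_sample, base_ms=200, post_ms=200):
    """Onset/offset transient sketch: post-event vs. pre-event amplitude.

    trials       : (n_trials, n_samples) amplitude envelopes (e.g., broadband)
    fs           : sampling rate in Hz
    event_sample : sample index of the sound onset or offset
    """
    n_base = int(base_ms * fs / 1000)
    n_post = int(post_ms * fs / 1000)
    baseline = trials[:, event_sample - n_base:event_sample].mean(axis=1)
    response = trials[:, event_sample:event_sample + n_post].mean(axis=1)
    # A reliable post-event amplitude change across trials marks a transient.
    t, p = stats.ttest_rel(response, baseline)
    return t, p
```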

    The adaptor protein LAD/TSAd mediates laminin-dependent T cell migration via association with the 67 kDa laminin binding protein

    No full text
    The adaptor protein LAD/TSAd plays essential roles in T cell activation. To further understand the functions of this protein, we performed yeast two-hybrid screening using TSAd as bait and identified the 67 kDa laminin binding protein (LBP) as its interacting partner. The TSAd-LBP interaction was subsequently confirmed in the D1.1 T cell line. Upon costimulation by T cell receptor (TCR) plus laminin crosslinking, or by TCR plus integrin α6 crosslinking, LBP coimmunoprecipitated with TSAd. Moreover, TCR plus laminin costimulation-dependent T cell migration was enhanced in D1.1 T cells overexpressing TSAd but was disrupted in D1.1 cells overexpressing a dominant-negative form of TSAd or TSAd shRNA. These data show that, upon TCR plus integrin costimulation, TSAd associates with LBP and mediates T lymphocyte migration.