
    Frontal and temporal dysfunction of auditory stimulus processing in schizophrenia

    Attention deficits have been consistently described in schizophrenia. Functional neuroimaging and electrophysiological studies have focused on anterior cingulate cortex (ACC) dysfunction as a possible mediator. However, recent basic research has suggested that the effect of attention is also observed as a relative amplification of activity in modality-associated cortical areas. In the present study, we asked whether an amplification deficit is seen in the auditory cortex of schizophrenic patients during an attention-requiring choice reaction task. Twenty-one drug-free schizophrenic patients and 21 age- and sex-matched healthy controls were studied (32-channel EEG). The underlying generators of the event-related N1 component were separated in neuroanatomic space using a minimum-norm (LORETA) and a multiple dipole (BESA) approach. Both methods revealed activation in the primary auditory cortex (peak latency ≈ 100 ms) and in the area of the ACC (peak latency ≈ 130 ms). In addition, the adapted multiple dipole model also showed a temporal-radial source activation in nonprimary auditory areas (peak latency ≈ 140 ms). In schizophrenic patients, significant activation deficits were found in the ACC as well as in the left nonprimary auditory areas, and these deficits differentially correlated with negative and positive symptoms. The results suggest (1) that the source in the nonprimary auditory cortex is detected only with a multiple dipole approach and (2) that the N1 generators in the ACC and in the nonprimary auditory cortex are dysfunctional in schizophrenia. This would be in line with the notion that attention deficits in schizophrenia involve an extended cortical network.
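
    The minimum-norm step named above corresponds to a standard regularized linear inverse. Below is a minimal sketch of that computation, assuming a hypothetical leadfield matrix L and toy dimensions; it illustrates the generic minimum-norm estimate only, not the authors' LORETA or BESA pipeline.

```python
import numpy as np

def minimum_norm_estimate(L, m, lam=0.1):
    """Regularized minimum-norm source estimate.

    L   : (n_sensors, n_sources) leadfield matrix (hypothetical here)
    m   : (n_sensors,) measured scalp potentials at one latency
    lam : Tikhonov regularization parameter (assumed value)
    Returns s_hat = L.T @ inv(L @ L.T + lam * I) @ m, shape (n_sources,).
    """
    gram = L @ L.T + lam * np.eye(L.shape[0])
    return L.T @ np.linalg.solve(gram, m)

# Toy example: 32 sensors (matching the study's EEG montage), 500 candidate sources.
rng = np.random.default_rng(0)
L = rng.standard_normal((32, 500))
m = rng.standard_normal(32)
print(minimum_norm_estimate(L, m).shape)  # (500,)
```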

    Efficient Visual Search from Synchronized Auditory Signals Requires Transient Audiovisual Events

    BACKGROUND: A prevailing view is that audiovisual integration requires temporally coincident signals. However, a recent study failed to find any evidence for audiovisual integration in visual search even when using synchronized audiovisual events. An important question is what information is critical to observe audiovisual integration. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that temporal coincidence (i.e., synchrony) of auditory and visual components can trigger audiovisual interaction in cluttered displays and consequently produce very fast and efficient target identification. In visual search experiments, subjects found a modulating visual target vastly more efficiently when it was paired with a synchronous auditory signal. By manipulating the kind of temporal modulation (sine wave vs. square wave vs. difference wave; harmonic sine-wave synthesis; gradient of onset/offset ramps), we show that abrupt visual events are required for this search efficiency to occur, and that sinusoidal audiovisual modulations do not support efficient search. CONCLUSIONS/SIGNIFICANCE: Thus, audiovisual temporal alignment will only lead to benefits in visual search if the changes in the component signals are both synchronized and transient. We propose that transient signals are necessary in synchrony-driven binding to avoid spurious interactions with unrelated signals when these occur close together in time.
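
    As a rough illustration of the stimulus manipulation described above, the sketch below builds sinusoidal and square-wave modulation envelopes; the modulation rate, duration, and ramp parameters are invented and do not reproduce the study's stimuli.

```python
import numpy as np

fs = 1000            # sample rate in Hz (assumed)
f_mod = 1.0          # modulation frequency in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)   # two seconds of modulation

# Smooth sinusoidal modulation: gradual changes, no transient events.
sine_env = 0.5 * (1 + np.sin(2 * np.pi * f_mod * t))

# Square-wave modulation: abrupt onsets and offsets (transient events).
square_env = (np.sin(2 * np.pi * f_mod * t) > 0).astype(float)

# The same envelope can drive the visual target (e.g., its luminance) and
# the auditory signal (e.g., its amplitude), keeping the two modalities
# synchronized while varying how transient the changes are.
```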

    Speech Cues Contribute to Audiovisual Spatial Integration

    Speech is the most important form of human communication, but ambient sounds and competing talkers often degrade its acoustics. Fortunately, the brain can use visual information, especially its highly precise spatial information, to improve speech comprehension in noisy environments. Previous studies have demonstrated that audiovisual integration depends strongly on spatiotemporal factors. However, some integrative phenomena such as McGurk interference persist even with gross spatial disparities, suggesting that spatial alignment is not necessary for robust integration of audiovisual place-of-articulation cues. It is therefore unclear how speech cues interact with audiovisual spatial integration mechanisms. Here, we combine two well-established psychophysical phenomena, the McGurk effect and the ventriloquist's illusion, to explore this dependency. Our results demonstrate that conflicting spatial cues may not interfere with audiovisual integration of speech, but conflicting speech cues can impede integration in space. This suggests a direct but asymmetrical influence between the ventral ‘what’ and dorsal ‘where’ pathways.

    The Impact of Spatial Incongruence on an Auditory-Visual Illusion

    The sound-induced flash illusion is an auditory-visual illusion: when a single flash is presented along with two or more beeps, observers report seeing two or more flashes. Previous research has shown that the illusion gradually disappears as the temporal delay between auditory and visual stimuli increases, suggesting that the illusion is consistent with existing temporal rules governing responses of multisensory neurons in the superior colliculus. However, little is known about the effect of spatial incongruence, and whether the illusion follows the corresponding spatial rule. If the illusion occurs less strongly when auditory and visual stimuli are separated, then integrative processes supporting the illusion must be strongly dependent on spatial congruence. In this case, the illusion would be consistent with both the spatial and temporal rules describing response properties of multisensory neurons in the superior colliculus.

    An extended multisensory temporal binding window in autism spectrum disorders

    Autism spectrum disorders (ASD) form a continuum of neurodevelopmental disorders, characterized by deficits in communication and reciprocal social interaction, as well as by repetitive behaviors and restricted interests. Sensory disturbances are also frequently reported in clinical and autobiographical accounts. However, surprisingly few empirical studies have characterized the fundamental features of sensory and multisensory processing in ASD. The current study was designed to test for potential differences in multisensory temporal function in ASD by making use of a temporally dependent, low-level multisensory illusion. In this illusion, the presentation of a single flash of light accompanied by multiple sounds often results in the illusory perception of multiple flashes. By systematically varying the temporal structure of the audiovisual stimuli, a “temporal window” within which these stimuli are likely to be bound into a single perceptual entity can be defined. The results of this study revealed that children with ASD report the flash-beep illusion over an extended range of stimulus onset asynchronies relative to children with typical development, suggesting that children with ASD have altered multisensory temporal function. These findings provide valuable new insights into our understanding of sensory processing in ASD and may hold promise for the development of more sensitive diagnostic measures and improved remediation strategies.
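
    One common way to quantify such a temporal window is to fit a smooth function to the proportion of illusion reports across stimulus onset asynchronies and take the width of the fit. The sketch below uses a Gaussian fit and a full-width-at-half-maximum criterion on made-up group data; both the functional form and the criterion are assumptions for illustration, not the analysis reported in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amp, mu, sigma, base):
    """Illusion-report rate as a function of SOA (ms)."""
    return base + amp * np.exp(-0.5 * ((soa - mu) / sigma) ** 2)

# Hypothetical group data: SOAs (ms) between flash and second beep, and the
# proportion of trials on which a single flash was reported as two flashes.
soas = np.array([-300.0, -200.0, -100.0, 0.0, 100.0, 200.0, 300.0])
p_illusion = np.array([0.25, 0.40, 0.70, 0.85, 0.75, 0.45, 0.30])

(amp, mu, sigma, base), _ = curve_fit(gaussian, soas, p_illusion,
                                      p0=[0.6, 0.0, 100.0, 0.2])

# One possible window definition: full width at half maximum of the fit.
window = 2.0 * abs(sigma) * np.sqrt(2.0 * np.log(2.0))
print(f"estimated temporal binding window ≈ {window:.0f} ms")
```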

    What Happens in Between? Human Oscillatory Brain Activity Related to Crossmodal Spatial Cueing

    Previous studies investigated the effects of crossmodal spatial attention by comparing the responses to validly versus invalidly cued target stimuli. Dynamics of cortical rhythms in the time interval between cue and target might contribute to cue effects on performance. Here, we studied the influence of spatial attention on ongoing oscillatory brain activity in the interval between cue and target onset. In a first experiment, subjects underwent periods of tactile stimulation (cue) followed by visual stimulation (target) in a spatial cueing task, as well as tactile stimulation as a control. In a second experiment, cue validity was set to 50%, 75%, or 25% to separate the effects of exogenous shifts of attention caused by tactile stimuli from those of endogenous shifts. Tactile stimuli produced: 1) a stronger lateralization of the sensorimotor beta-rhythm rebound (15–22 Hz) after tactile stimuli serving as cues versus not serving as cues; 2) a suppression of the occipital alpha-rhythm (7–13 Hz) appearing only in the cueing task (this suppression was stronger contralateral to the endogenously attended side and was predictive of behavioral success); and 3) an increase of prefrontal gamma-activity (25–35 Hz) specifically in the cueing task. We measured cue-related modulations of cortical rhythms which may accompany crossmodal spatial attention, expectation, or decision, and therefore contribute to cue validity effects. The clearly lateralized alpha suppression after tactile cues in our data indicates its dependence on endogenous rather than exogenous shifts of visuo-spatial attention following a cue, independent of its modality.
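
    The frequency bands quoted above (alpha 7–13 Hz, beta 15–22 Hz, gamma 25–35 Hz) lend themselves to a simple spectral-power analysis. The sketch below computes Welch band power and a contralateral-minus-ipsilateral lateralization index on synthetic signals; channel selection, epoching, and baseline handling from the actual study are not modeled.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, f_lo, f_hi):
    """Mean Welch power of 1-D signal x within the band [f_lo, f_hi] Hz."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), 2 * fs))
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

def lateralization_index(contra, ipsi, fs, band=(7, 13)):
    """(contra - ipsi) / (contra + ipsi) band power, e.g. for occipital alpha."""
    p_c = band_power(contra, fs, *band)
    p_i = band_power(ipsi, fs, *band)
    return (p_c - p_i) / (p_c + p_i)

# Toy example: white noise standing in for cue-target interval EEG epochs.
fs = 500
rng = np.random.default_rng(1)
contra = rng.standard_normal(2 * fs)
ipsi = rng.standard_normal(2 * fs)
print(lateralization_index(contra, ipsi, fs))
```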

    Audiovisual Non-Verbal Dynamic Faces Elicit Converging fMRI and ERP Responses

    In an everyday social interaction we automatically integrate another’s facial movements and vocalizations, be they linguistic or otherwise. This requires audiovisual integration of a continual barrage of sensory input, a phenomenon previously well studied with human audiovisual speech but not with non-verbal vocalizations. Using both fMRI and ERPs, we assessed neural activity while subjects viewed and listened to an animated female face producing non-verbal human vocalizations (i.e., coughing, sneezing) under audio-only (AUD), visual-only (VIS) and audiovisual (AV) stimulus conditions, alternating with Rest (R). Underadditive effects occurred in regions dominant for sensory processing, which showed AV activation greater than the dominant modality alone. Right posterior temporal and parietal regions showed an AV maximum, in which AV activation was greater than either modality alone but not greater than the sum of the unisensory conditions. Other frontal and parietal regions showed common activation, in which AV activation was the same as one or both unisensory conditions. ERP data showed an early superadditive effect (AV > AUD + VIS, no rest), mid-range underadditive effects for the auditory N140 and the face-sensitive N170, and late AV-maximum and common-activation effects. Based on the convergence between fMRI and ERP data, we propose a mechanism by which a multisensory stimulus may be signaled as early as 60 ms and facilitated in sensory-specific regions by increased processing speed (at N170) and efficiency (decreased amplitude in auditory and face-sensitive cortical activation and ERPs). Finally, higher-order processes are also altered, but in a more complex fashion.
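
    The additivity terms used above (superadditive, AV maximum, underadditive, common activation) amount to comparisons of the audiovisual response against the unisensory responses and their sum. The sketch below classifies a response triplet with one such scheme; the thresholds are arbitrary and the mapping to the authors' exact statistical criteria is only approximate.

```python
def classify_av_response(av, aud, vis, tol=1e-9):
    """Classify an audiovisual response against its unisensory components.

    One simple scheme (real analyses add baselines and statistical tests):
      superadditive : AV greater than AUD + VIS
      AV maximum    : AV greater than either modality alone, but not their sum
      otherwise     : common activation / underadditive
    """
    if av > aud + vis + tol:
        return "superadditive"
    if av > max(aud, vis) + tol:
        return "AV maximum"
    return "common activation / underadditive"

print(classify_av_response(av=1.4, aud=0.6, vis=0.5))  # superadditive
print(classify_av_response(av=0.9, aud=0.6, vis=0.5))  # AV maximum
print(classify_av_response(av=0.6, aud=0.6, vis=0.5))  # common / underadditive
```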

    Monkeys and Humans Share a Common Computation for Face/Voice Integration

    Speech production involves the movement of the mouth and other regions of the face, resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, it should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, whether they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models such as the principle of inverse effectiveness and a “race” model failed to account for their behavior patterns. Conversely, a “superposition model”, positing the linear summation of activity patterns in response to the visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates.
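
    A hedged sketch of the two candidate mechanisms contrasted above, simplified to single-interval detection probabilities rather than reaction times: the race model predicts bimodal performance by probability summation of the unisensory detection rates, whereas a superposition account sums the unisensory activations before a single threshold is applied. All parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000
threshold = 1.0

# Hypothetical per-trial unisensory "activations" (arbitrary units).
aud = rng.normal(loc=0.8, scale=0.5, size=n_trials)
vis = rng.normal(loc=0.6, scale=0.5, size=n_trials)

p_aud = np.mean(aud > threshold)
p_vis = np.mean(vis > threshold)

# Race model: either channel crossing threshold independently yields detection.
p_race = p_aud + p_vis - p_aud * p_vis

# Superposition model: activations sum linearly before the same threshold.
p_super = np.mean(aud + vis > threshold)

print(f"P(detect | auditory only) = {p_aud:.3f}")
print(f"P(detect | visual only)   = {p_vis:.3f}")
print(f"race prediction           = {p_race:.3f}")
print(f"superposition prediction  = {p_super:.3f}")
```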