
    Intermodal attention affects the processing of the temporal alignment of audiovisual stimuli

    The temporal asynchrony between inputs to different sensory modalities has been shown to be a critical factor influencing the interaction between such inputs. We used scalp-recorded event-related potentials (ERPs) to investigate the effects of attention on the processing of audiovisual multisensory stimuli as the temporal asynchrony between the auditory and visual inputs varied across the audiovisual integration window (i.e., up to 125 ms). Randomized streams of unisensory auditory stimuli, unisensory visual stimuli, and audiovisual stimuli (consisting of the temporally proximal presentation of the visual and auditory stimulus components) were presented centrally while participants attended to either the auditory or the visual modality to detect occasional target stimuli in that modality. ERPs elicited by each of the contributing sensory modalities were extracted by signal processing techniques from the combined ERP waveforms elicited by the multisensory stimuli. This was done for each of the five different 50-ms subranges of stimulus onset asynchrony (SOA: e.g., V precedes A by 125–75 ms, by 75–25 ms, etc.). The extracted ERPs for the visual inputs of the multisensory stimuli were compared with each other and with the ERPs to the unisensory visual control stimuli, separately when attention was directed to the visual or to the auditory modality. The results showed that the attention effect on the right-hemisphere visual P1 was largest when the auditory and visual stimuli were temporally aligned. In contrast, the N1 attention effect was smallest at this latency, suggesting that attention may play a role in the processing of the relative temporal alignment of the constituent parts of multisensory stimuli. At longer latencies an occipital selection negativity for the attended versus unattended visual stimuli was also observed, but this effect did not vary as a function of SOA, suggesting that by that latency a stable representation of the auditory and visual stimulus components had been established.
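    A rough sketch of the trial-binning step described above (an assumption-laden illustration, not the authors' actual signal-processing pipeline): audiovisual epochs are grouped into the five 50-ms SOA subranges, averaged, and the unisensory auditory ERP is subtracted to approximate the visual contribution. The array names and the subtraction approach are hypothetical.

```python
import numpy as np

# Hypothetical sketch: estimate the visual-component ERP of audiovisual trials
# within each 50-ms SOA subrange by averaging the epochs in that bin and
# subtracting the auditory-alone ERP (a simplification of the "signal
# processing techniques" mentioned in the abstract).
# av_epochs: (n_trials, n_channels, n_times) audiovisual epochs
# soas_ms:   (n_trials,) visual-minus-auditory onset asynchrony in ms
# erp_a:     (n_channels, n_times) average ERP to unisensory auditory stimuli
def visual_erp_by_soa(av_epochs, soas_ms, erp_a,
                      edges=(-125, -75, -25, 25, 75, 125)):
    erps = {}
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (soas_ms >= lo) & (soas_ms < hi)   # trials in this SOA subrange
        erp_av = av_epochs[in_bin].mean(axis=0)     # average audiovisual ERP
        erps[(lo, hi)] = erp_av - erp_a             # crude visual-component estimate
    return erps
```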

    Bayesian mapping of pulmonary tuberculosis in Antananarivo, Madagascar

    Background: Tuberculosis (TB), an infectious disease caused by Mycobacterium tuberculosis, is endemic in Madagascar, and the capital, Antananarivo, is the most seriously affected area. TB had a non-random spatial distribution in this setting, with clustering in the poorer areas. The aim of this study was to explore this pattern further with a Bayesian approach and to measure the associations between the spatial variation of TB risk and national control program indicators for all neighbourhoods. Methods: A combination of a Bayesian approach and a generalized linear mixed model (GLMM) was used to produce smoothed TB risk maps and to model relationships between new TB cases and national TB control program indicators. New TB cases were collected from the records of the 16 Tuberculosis Diagnostic and Treatment Centres (DTC) of the city from 2004 to 2006, and five TB indicators were considered in the analysis: number of cases undergoing retreatment, number of patients with treatment failure and those suffering relapse after the completion of treatment, number of households with more than one case, number of patients lost to follow-up, and proximity to a DTC. Results: In Antananarivo, 43.23% of the neighbourhoods had a standardized incidence ratio (SIR) above 1, of which 19.28% had a TB risk significantly higher than the average. Identified high TB risk areas were clustered, and the distribution of TB was found to be associated mainly with the number of patients lost to follow-up (SIR: 1.10, 95% CI: 1.02-1.19) and the number of households with more than one case (SIR: 1.13, 95% CI: 1.03-1.24). Conclusion: The spatial pattern of TB in Antananarivo and the contribution of national control program indicators to this pattern highlight the importance of the data recorded in the TB registry and the usefulness of spatial approaches for assessing the epidemiological situation for TB. Including these variables in the model increases reproducibility, as the data are already available for individual DTCs. These findings may also be useful for guiding decisions related to disease control strategies.
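    For illustration, a minimal sketch of the standardized incidence ratio (SIR) reported above, computed per neighbourhood as observed over expected cases, with expected counts derived from the city-wide rate. The column names and example figures are hypothetical, not study data.

```python
import pandas as pd

# Illustrative SIR computation: SIR_i = observed_i / expected_i, where the
# expected count comes from the overall (city-wide) incidence rate.
# Column names and the example numbers are hypothetical, not study data.
def neighbourhood_sir(df: pd.DataFrame) -> pd.DataFrame:
    overall_rate = df["cases"].sum() / df["population"].sum()  # city-wide rate
    expected = df["population"] * overall_rate
    return df.assign(expected=expected, sir=df["cases"] / expected)  # SIR > 1: excess risk

example = pd.DataFrame({"neighbourhood": ["A", "B", "C"],
                        "cases": [12, 3, 25],
                        "population": [8000, 5000, 9000]})
print(neighbourhood_sir(example))
```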

    How Bodies and Voices Interact in Early Emotion Perception

    Successful social communication draws strongly on the correct interpretation of others' body and vocal expressions. Both can provide emotional information and often occur simultaneously, yet their interplay has hardly been studied. Using electroencephalography, we investigated the temporal development underlying their neural interaction in auditory and visual perception. In particular, we tested whether this interaction qualifies as true integration following multisensory integration principles such as inverse effectiveness. Emotional vocalizations were embedded in either low or high levels of noise and presented with or without video clips of matching emotional body expressions. In both high and low noise conditions, a reduction in auditory N100 amplitude was observed for audiovisual stimuli. However, only under high noise did the N100 peak earlier in the audiovisual than in the auditory condition, suggesting facilitatory effects as predicted by the inverse effectiveness principle. Similarly, we observed earlier N100 peaks in response to emotional compared to neutral audiovisual stimuli; this was not the case in the unimodal auditory condition. Furthermore, suppression of beta-band oscillations (15–25 Hz), primarily reflecting biological motion perception, was modulated 200–400 ms after the vocalization. While larger differences in suppression between audiovisual and audio stimuli in high compared to low noise levels were found for emotional stimuli, no such difference was observed for neutral stimuli. This observation is in accordance with the inverse effectiveness principle and suggests a modulation of integration by emotional content. Overall, the results show that ecologically valid, complex stimuli such as combined body and vocal expressions are effectively integrated very early in processing.
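    A minimal sketch of how beta-band (15–25 Hz) suppression in a post-vocalization window might be quantified, assuming band-pass filtering and a Hilbert-envelope power ratio against a pre-stimulus baseline; the function and parameter names are illustrative and this is not the paper's exact analysis.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Illustrative sketch (not the paper's exact analysis): beta-band suppression as
# the ratio of Hilbert-envelope power in a post-vocalization window (200-400 ms)
# to power in a pre-stimulus baseline. A ratio below 1 indicates suppression.
# epochs:   (n_trials, n_times) single-channel EEG epochs
# times_ms: (n_times,) time axis in ms, 0 = vocalization onset
def beta_suppression(epochs, times_ms, sfreq,
                     baseline=(-200, 0), window=(200, 400)):
    b, a = butter(4, [15, 25], btype="band", fs=sfreq)   # 15-25 Hz band-pass
    power = np.abs(hilbert(filtfilt(b, a, epochs, axis=-1), axis=-1)) ** 2

    def mean_power(lo, hi):
        return power[:, (times_ms >= lo) & (times_ms < hi)].mean()

    return mean_power(*window) / mean_power(*baseline)
```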

    Neural responses in parietal and occipital areas in response to visual events are modulated by prior multisensory stimuli

    The effect of multi-modal vs uni-modal prior stimuli on the subsequent processing of a simple flash stimulus was studied in the context of the audio-visual 'flash-beep' illusion, in which the number of flashes a person sees is influenced by accompanying beep stimuli. EEG recordings were made while combinations of simple visual and audio-visual stimuli were presented. The experiments found that the electric field strength related to a flash stimulus was stronger when it was preceded by a multi-modal flash/beep stimulus than when it was preceded by another uni-modal flash stimulus. This difference was significant in two distinct timeframes: an early timeframe, from 130–160 ms, and a late timeframe, from 300–320 ms. Source localisation analysis found that the increased activity in the early interval was localised to an area centred on the inferior and superior parietal lobes, whereas the later increase was associated with stronger activity in an area centred on primary and secondary visual cortex, in the occipital lobe. The results suggest that the processing of a visual stimulus can be affected by the presence of an immediately prior multisensory event: relatively long-lasting interactions generated by the initial auditory and visual stimuli altered the processing of a subsequent visual stimulus.
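    The "electric field strength" compared above is commonly quantified as global field power (GFP), the standard deviation across electrodes at each time point. Below is a minimal sketch under that assumption; the helper names and window handling are illustrative, not the study's code.

```python
import numpy as np

# Sketch assuming "electric field strength" is measured as global field power
# (GFP): the standard deviation across electrodes at each time point. Averaging
# GFP over the two reported intervals (130-160 ms, 300-320 ms) would then give
# one value per condition for comparison.
def global_field_power(erp):
    """erp: (n_channels, n_times) average-referenced ERP; returns (n_times,) GFP."""
    return erp.std(axis=0)

def mean_gfp(erp, times_ms, window):
    lo, hi = window
    mask = (times_ms >= lo) & (times_ms < hi)
    return global_field_power(erp)[mask].mean()
```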

    Speech Cues Contribute to Audiovisual Spatial Integration

    Speech is the most important form of human communication, but ambient sounds and competing talkers often degrade its acoustics. Fortunately, the brain can use visual information, especially its highly precise spatial information, to improve speech comprehension in noisy environments. Previous studies have demonstrated that audiovisual integration depends strongly on spatiotemporal factors. However, some integrative phenomena such as McGurk interference persist even with gross spatial disparities, suggesting that spatial alignment is not necessary for robust integration of audiovisual place-of-articulation cues. It is therefore unclear how speech cues interact with audiovisual spatial integration mechanisms. Here, we combine two well-established psychophysical phenomena, the McGurk effect and the ventriloquist's illusion, to explore this dependency. Our results demonstrate that conflicting spatial cues may not interfere with audiovisual integration of speech, but conflicting speech cues can impede integration in space. This suggests a direct but asymmetrical influence between the ventral 'what' and dorsal 'where' pathways.

    The Impact of Spatial Incongruence on an Auditory-Visual Illusion

    The sound-induced flash illusion is an auditory-visual illusion: when a single flash is presented along with two or more beeps, observers report seeing two or more flashes. Previous research has shown that the illusion gradually disappears as the temporal delay between auditory and visual stimuli increases, suggesting that the illusion is consistent with existing temporal rules of neural activation in the superior colliculus to multisensory stimuli. However, little is known about the effect of spatial incongruence, and whether the illusion follows the corresponding spatial rule. If the illusion occurs less strongly when auditory and visual stimuli are separated, then the integrative processes supporting the illusion must be strongly dependent on spatial congruence. In this case, the illusion would be consistent with both the spatial and temporal rules describing the response properties of multisensory neurons in the superior colliculus.

    Attentional modulations of the early and later stages of the neural processing of visual completion

    The brain effortlessly recognizes objects even when the visual information belonging to an object is widely separated, as is well demonstrated by Kanizsa-type illusory contours (ICs), in which a contour is perceived despite the fragments of the contour being separated by gaps. Such large-range visual completion has long been thought to be preattentive, whereas its dependence on top-down influences remains unclear. Here, we report separate modulations by spatial attention and task relevance of the neural activities in response to ICs. IC-sensitive event-related potentials that were localized to the lateral occipital cortex were modulated by spatial attention at an early processing stage (130–166 ms after stimulus onset) and by task relevance at a later processing stage (234–290 ms). These results not only demonstrate top-down attentional influences on the neural processing of ICs but also elucidate the characteristics of the attentional modulations that occur in different phases of IC processing.