Effects of task difficulty on evoked gamma activity and ERPs in a visual discrimination task
Objective: The present study examined oscillatory brain activity of the EEG gamma band and event-related potentials (ERPs) in relation to the difficulty of a visual discrimination task. Methods: Three tasks with identical stimulus material were performed by 9 healthy subjects. The tasks comprised a passive control task and an easy and a hard visual discrimination task, the latter two requiring discrimination of the color of circles. EEG was recorded from 26 electrodes. A wavelet transform based on Morlet wavelets was employed for the analysis of gamma activity. Results: Evoked EEG gamma activity was enhanced by both discrimination tasks as compared to the passive control task. Between the two discrimination tasks, the latency of the evoked gamma peak was delayed in the harder task. Higher amplitudes of the ERP components N170 and P300 were found in both discrimination tasks as compared to the passive task. The N2b, which showed maximal activation at about 260 ms, was increased in the hard discrimination task as compared to the easy discrimination task. Conclusions: Our results indicate that early evoked gamma activity and the N2b are related to the difficulty of visual discrimination processes. The delayed gamma activity in the hard task indicates a longer duration of stimulus processing, whereas the amplitude of the N2b directly reflects the level of task difficulty.
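The wavelet analysis named above can be illustrated with a short Python sketch: a complex Morlet wavelet convolved with the trial-averaged ("evoked") waveform yields a time course of phase-locked gamma power, whose peak latency can then be compared across tasks. The sampling rate, analysis frequency, cycle count, and placeholder data below are assumptions; the abstract does not report the study's exact parameters.

```python
import numpy as np

def morlet_power(evoked, sfreq, freq, n_cycles=7):
    """Time course of power at one frequency via complex Morlet convolution.

    evoked : 1-D array, trial-averaged EEG from one electrode
    sfreq  : sampling rate in Hz; freq : analysis frequency in Hz
    """
    # Complex Morlet wavelet: Gaussian-windowed complex exponential.
    sigma_t = n_cycles / (2.0 * np.pi * freq)           # temporal SD of the Gaussian
    t = np.arange(-5 * sigma_t, 5 * sigma_t, 1.0 / sfreq)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))    # unit-energy normalization
    # Convolve and square the magnitude -> instantaneous gamma power.
    return np.abs(np.convolve(evoked, wavelet, mode="same")) ** 2

# Hypothetical usage: 1 s of trial-averaged data sampled at 500 Hz.
sfreq = 500.0
evoked = np.random.randn(int(sfreq))        # placeholder for a real evoked waveform
power = morlet_power(evoked, sfreq, freq=40.0)
print(f"evoked 40 Hz gamma peaks at {1000.0 * np.argmax(power) / sfreq:.0f} ms")
```

Applying the transform to the average rather than to single trials is what restricts the measure to evoked (phase-locked) gamma; induced gamma would require transforming each trial before averaging.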
Frontal and temporal dysfunction of auditory stimulus processing in schizophrenia
Attention deficits have been consistently described in schizophrenia. Functional neuroimaging and electrophysiological studies have focused on anterior cingulate cortex (ACC) dysfunction as a possible mediator. However, recent basic research has suggested that the effect of attention is also observed as a relative amplification of activity in modality-associated cortical areas. The present study addressed the question of whether such an amplification deficit is seen in the auditory cortex of schizophrenic patients during an attention-requiring choice reaction task. Twenty-one drug-free schizophrenic patients and 21 age- and sex-matched healthy controls were studied (32-channel EEG). The underlying generators of the event-related N1 component were separated in neuroanatomic space using a minimum-norm (LORETA) and a multiple dipole (BESA) approach. Both methods revealed activation in the primary auditory cortex (peak latency ≈ 100 ms) and in the area of the ACC (peak latency ≈ 130 ms). In addition, the adapted multiple dipole model also showed a temporal-radial source activation in nonprimary auditory areas (peak latency ≈ 140 ms). In schizophrenic patients, significant activation deficits were found in the ACC as well as in the left nonprimary auditory areas, and these deficits differentially correlated with negative and positive symptoms. The results suggest that (1) the source in the nonprimary auditory cortex is detected only with a multiple dipole approach and (2) the N1 generators in the ACC and in the nonprimary auditory cortex are dysfunctional in schizophrenia. This would be in line with the notion that attention deficits in schizophrenia involve an extended cortical network.
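LORETA and BESA are proprietary packages, but the same minimum-norm logic can be sketched with the open MNE-Python library as a stand-in; this is not the study's pipeline, and the file names and data below are hypothetical.

```python
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse

# Hypothetical inputs: an averaged N1 epoch plus a precomputed forward model
# and noise covariance for the same subject.
evoked = mne.read_evokeds("subject_N1-ave.fif", condition=0)
fwd = mne.read_forward_solution("subject-fwd.fif")
noise_cov = mne.read_cov("subject-cov.fif")

# Minimum-norm inverse operator; "sLORETA" selects a LORETA-family estimate.
inv = make_inverse_operator(evoked.info, fwd, noise_cov, loose=0.2, depth=0.8)
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="sLORETA")

# Restrict to the N1 window (~100 ms) to inspect auditory/ACC source activity.
stc_n1 = stc.copy().crop(tmin=0.08, tmax=0.14)
print(stc_n1.data.shape)    # (n_sources, n_times) around the N1 peak
```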
Intermodal attention affects the processing of the temporal alignment of audiovisual stimuli
The temporal asynchrony between inputs to different sensory modalities has been shown to be a critical factor influencing the interaction between such inputs. We used scalp-recorded event-related potentials (ERPs) to investigate the effects of attention on the processing of audiovisual multisensory stimuli as the temporal asynchrony between the auditory and visual inputs varied across the audiovisual integration window (i.e., up to 125 ms). Randomized streams of unisensory auditory stimuli, unisensory visual stimuli, and audiovisual stimuli (consisting of the temporally proximal presentation of the visual and auditory stimulus components) were presented centrally while participants attended to either the auditory or the visual modality to detect occasional target stimuli in that modality. ERPs elicited by each of the contributing sensory modalities were extracted by signal processing techniques from the combined ERP waveforms elicited by the multisensory stimuli. This was done for each of the five different 50-ms subranges of stimulus onset asynchrony (SOA: e.g., V precedes A by 125–75 ms, by 75–25 ms, etc.). The extracted ERPs for the visual inputs of the multisensory stimuli were compared with each other and with the ERPs to the unisensory visual control stimuli, separately when attention was directed to the visual or to the auditory modality. The results showed that the attention effect on the right-hemisphere visual P1 was largest when auditory and visual stimuli were temporally aligned. In contrast, the N1 attention effect was smallest at this latency, suggesting that attention may play a role in the processing of the relative temporal alignment of the constituent parts of multisensory stimuli. At longer latencies an occipital selection negativity for the attended versus unattended visual stimuli was also observed, but this effect did not vary as a function of SOA, suggesting that by that latency a stable representation of the auditory and visual stimulus components had been established.
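The abstract says only that the component ERPs were "extracted by signal processing techniques". One common reading, shown as a hedged sketch below, is the additive-model subtraction, in which the unisensory auditory ERP is subtracted from the audiovisual ERP to estimate the visual contribution; the file names, sampling rate, and analysis window are hypothetical.

```python
import numpy as np

# Hypothetical trial-averaged ERPs (channels x time), same epoch and baseline.
erp_av = np.load("erp_audiovisual.npy")
erp_a = np.load("erp_auditory_only.npy")
erp_v = np.load("erp_visual_only.npy")

# Additive model: ERP(AV) ~= ERP(A) + ERP(V) (+ any interaction), so the
# subtraction estimates the visual contribution to the multisensory response.
extracted_v = erp_av - erp_a

# Compare with the unisensory visual control in an assumed P1 window
# (80-120 ms), e.g. over right-hemisphere occipital channels.
sfreq, t0 = 500.0, -0.1                      # assumed sampling rate, epoch start
win = slice(int((0.08 - t0) * sfreq), int((0.12 - t0) * sfreq))
p1_diff = extracted_v[:, win].mean(axis=1) - erp_v[:, win].mean(axis=1)
print(p1_diff)                               # per-channel P1 difference
```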
When a photograph can be heard: Vision activates the auditory cortex within 110 ms
As the makers of silent movies knew well, it is not necessary to provide an actual auditory stimulus to evoke the sensation of the sounds typically associated with what we are viewing. Thus, you could almost hear the neigh of Rodolfo Valentino's horse, even though the film was silent. Here, evidence is provided that the mere sight of a photograph associated with a sound can activate the associative auditory cortex. High-density ERPs were recorded in 15 participants while they viewed hundreds of perceptually matched images that were associated (or not) with a given sound. Sound stimuli were discriminated from non-sound stimuli as early as 110 ms. SwLORETA reconstructions showed common activation of ventral stream areas for both types of stimuli and, at the earliest stage, activation of the associative temporal cortex only for sound stimuli. The primary auditory cortex (BA41) was also activated by sound images after ~200 ms.
Auditory Cortex Tracks Both Auditory and Visual Stimulus Dynamics Using Low-Frequency Neuronal Phase Modulation
How is naturalistic multisensory information combined in the human brain? Based on MEG data, we show that low-frequency phase modulation in auditory cortex tracks the dynamics of both the auditory and the visual signals in complex scenes.
The prediction of visual stimuli influences auditory loudness discrimination
The brain combines information from different senses to improve performance on perceptual tasks. For instance, auditory processing is enhanced by the mere fact that a visual input is processed simultaneously. However, the sensory processing of one modality is itself subject to diverse influences. Namely, perceptual processing depends on the degree to which a stimulus is predicted. The present study investigated the extent to which the influence of one processing pathway on another depends on whether or not the stimulation in that pathway is predicted. We used an action–effect paradigm to vary the match between incoming and predicted visual stimulation. Participants triggered a bimodal stimulus composed of a Gabor patch and a tone. The Gabor was either congruent or incongruent with an action–effect association that participants had learned in an acquisition phase. We tested the influence of action–effect congruency on the loudness perception of the tone. We observed that an incongruent, task-irrelevant Gabor stimulus increases participants' sensitivity in loudness discrimination. An identical result was obtained for a second condition in which the visual stimulus was predicted by a cue instead of an action. Our results suggest that prediction error is a driving factor of the crossmodal interplay between vision and audition.
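The abstract does not name its sensitivity measure; the standard index for a discrimination task is d' from signal detection theory, sketched below with hypothetical trial counts and a log-linear correction to keep extreme rates finite.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' for a yes/no loudness discrimination."""
    # Log-linear correction so hit/false-alarm rates of 0 or 1 stay finite.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts: congruent vs. incongruent action-effect trials.
print(d_prime(70, 30, 20, 80))   # congruent Gabor
print(d_prime(80, 20, 15, 85))   # incongruent Gabor -> larger d' = higher sensitivity
```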
Bayesian mapping of pulmonary tuberculosis in Antananarivo, Madagascar
Background: Tuberculosis (TB), an infectious disease caused by Mycobacterium tuberculosis, is endemic in Madagascar. The capital, Antananarivo, is the most seriously affected area. TB had a non-random spatial distribution in this setting, with clustering in the poorer areas. The aim of this study was to explore this pattern further by a Bayesian approach and to measure the associations between the spatial variation of TB risk and national control program indicators for all neighbourhoods. Methods: A combination of a Bayesian approach and a generalized linear mixed model (GLMM) was developed to produce smooth risk maps of TB and to model relationships between new TB cases and national TB control program indicators. The new TB cases were collected from the records of the 16 Tuberculosis Diagnostic and Treatment Centres (DTC) of the city from 2004 to 2006. Five TB indicators were considered in the analysis: the number of cases undergoing retreatment, the number of patients with treatment failure and those suffering relapse after the completion of treatment, the number of households with more than one case, the number of patients lost to follow-up, and proximity to a DTC. Results: In Antananarivo, 43.23% of the neighbourhoods had a standardized incidence ratio (SIR) above 1, of which 19.28% had a TB risk significantly higher than the average. The identified high-risk areas were clustered, and the distribution of TB was found to be associated mainly with the number of patients lost to follow-up (SIR: 1.10, 95% CI: 1.02-1.19) and the number of households with more than one case (SIR: 1.13, 95% CI: 1.03-1.24). Conclusion: The spatial pattern of TB in Antananarivo and the contribution of national control program indicators to this pattern highlight the importance of the data recorded in the TB registry and the value of spatial approaches for assessing the epidemiological situation for TB. Including these variables in the model increases reproducibility, as these data are already available for individual DTCs. These findings may also be useful for guiding decisions related to disease control strategies.
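The standardized incidence ratio that the risk maps are built on is simple to compute; a minimal sketch follows, with hypothetical case counts and populations. The study's actual contribution is the Bayesian GLMM smoothing of these raw ratios, which is not reproduced here.

```python
import numpy as np

# Hypothetical new TB cases and populations for four neighbourhoods.
observed = np.array([12, 4, 25, 7])
population = np.array([9000, 5000, 14000, 6000])

# Expected cases under the city-wide rate; SIR = observed / expected.
city_rate = observed.sum() / population.sum()
expected = population * city_rate
sir = observed / expected
print(sir)   # SIR > 1 flags neighbourhoods with above-average TB risk
```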
Attentional modulations of the early and later stages of the neural processing of visual completion
The brain effortlessly recognizes objects even when the visual information belonging to an object is widely separated, as is well demonstrated by Kanizsa-type illusory contours (ICs), in which a contour is perceived despite the fragments of the contour being separated by gaps. Such large-range visual completion has long been thought to be preattentive, whereas its dependence on top-down influences remains unclear. Here, we report separate modulations by spatial attention and task relevance of the neural activity in response to ICs. IC-sensitive event-related potentials that were localized to the lateral occipital cortex were modulated by spatial attention at an early processing stage (130–166 ms after stimulus onset) and by task relevance at a later processing stage (234–290 ms). These results not only demonstrate top-down attentional influences on the neural processing of ICs but also elucidate the characteristics of the attentional modulations that occur in different phases of IC processing.
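Effects in latency windows like the two reported above are typically quantified as the mean ERP amplitude within each window; a minimal sketch of that step follows, with assumed sampling parameters and placeholder data.

```python
import numpy as np

def window_mean(erp, times, tmin, tmax):
    """Mean ERP amplitude within a latency window (seconds)."""
    mask = (times >= tmin) & (times <= tmax)
    return erp[..., mask].mean(axis=-1)

# Hypothetical averages: conditions x channels x time, 500 Hz, -0.1 to 0.5 s.
times = np.arange(-0.1, 0.5, 1.0 / 500.0)
erp = np.random.randn(2, 32, times.size)        # placeholder data

early = window_mean(erp, times, 0.130, 0.166)   # spatial-attention window
late = window_mean(erp, times, 0.234, 0.290)    # task-relevance window
print(early.shape, late.shape)                  # (conditions, channels)
```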
How Bodies and Voices Interact in Early Emotion Perception
Successful social communication draws strongly on the correct interpretation of others' body and vocal expressions. Both can provide emotional information and often occur simultaneously. Yet their interplay has hardly been studied. Using electroencephalography, we investigated the temporal development underlying their neural interaction in auditory and visual perception. In particular, we tested whether this interaction qualifies as true integration, following multisensory integration principles such as inverse effectiveness. Emotional vocalizations were embedded in either low or high levels of noise and presented with or without video clips of matching emotional body expressions. In both high and low noise conditions, a reduction in auditory N100 amplitude was observed for audiovisual stimuli. However, only under high noise did the N100 peak earlier in the audiovisual than in the auditory condition, suggesting facilitatory effects as predicted by the inverse effectiveness principle. Similarly, we observed earlier N100 peaks in response to emotional compared to neutral audiovisual stimuli. This was not the case in the unimodal auditory condition. Furthermore, suppression of beta-band oscillations (15–25 Hz), primarily reflecting biological motion perception, was modulated 200–400 ms after the vocalization. While emotional stimuli showed larger differences in suppression between audiovisual and auditory conditions under high compared to low noise, no such difference was observed for neutral stimuli. This observation is in accordance with the inverse effectiveness principle and suggests a modulation of integration by emotional content. Overall, the results show that ecologically valid, complex stimuli such as combined body and vocal expressions are effectively integrated very early in processing.
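Beta-band suppression of the kind described above is commonly measured as the band-limited amplitude envelope relative to a pre-stimulus baseline; a minimal SciPy sketch follows, where the sampling rate, filter order, baseline window, and placeholder trials are all assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_envelope(data, sfreq, band=(15.0, 25.0)):
    """Beta-band amplitude envelope of single-trial data (..., n_times)."""
    # Zero-phase band-pass filter, then the analytic amplitude via Hilbert.
    nyq = sfreq / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    return np.abs(hilbert(filtfilt(b, a, data, axis=-1), axis=-1))

# Hypothetical single-channel epochs: 100 trials x 1 s at 500 Hz,
# assumed to start 100 ms before the vocalization.
sfreq = 500.0
trials = np.random.randn(100, int(sfreq))
env = beta_envelope(trials, sfreq).mean(axis=0)       # trial-averaged envelope
baseline = env[: int(0.1 * sfreq)].mean()             # assumed pre-stimulus window
suppression = (env - baseline) / baseline             # negative values = suppression
print(suppression.min())
```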