
    Explicit processing of verbal and spatial features during letter-location binding modulates oscillatory activity of a fronto-parietal network.

    The present study investigated the binding of verbal and spatial features in immediate memory. In a recent study, we demonstrated incidental and asymmetrical letter-location binding effects when participants attended to letter features (but not when they attended to location features), effects that were associated with greater oscillatory activity over prefrontal and posterior regions during the retention period. Here we investigated whether the patterns of brain activity associated with the incidental binding of letters and locations observed when only the verbal feature is attended differ from those reflecting the binding that results from controlled/explicit processing of both verbal and spatial features. To achieve this, neural activity was recorded using magnetoencephalography (MEG) while participants performed two working memory tasks. The tasks were identical in their perceptual characteristics and differed only in their instructions. One task required participants to process both letters and locations; in the other, participants were instructed to memorize only the letters, regardless of their location. Time–frequency representations of the MEG data, based on the wavelet transform of the signals, were calculated on a single-trial basis during the maintenance period of both tasks. Critically, despite equivalent behavioural binding effects in both tasks, single- and dual-feature encoding relied on different neuroanatomical and neural oscillatory correlates. We propose that the enhanced activation of an anterior–posterior dorsal network observed in the task requiring the processing of both features reflects the need to allocate greater resources to intentionally process verbal and spatial features in this task.
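
    As a rough illustration of the kind of single-trial time–frequency analysis described above, the sketch below convolves one (synthetic) MEG trace with complex Morlet wavelets. The sampling rate, frequency grid, and the morlet_tfr helper are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def morlet_tfr(trial, sfreq, freqs, n_cycles=7):
    """Single-trial time-frequency power via complex Morlet wavelet convolution.

    trial : 1-D array holding one MEG channel on one trial
    sfreq : sampling frequency in Hz
    freqs : frequencies of interest in Hz
    """
    power = np.empty((len(freqs), trial.size))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2.0 * np.pi * f)              # temporal std of the Gaussian envelope
        t = np.arange(-3 * sigma_t, 3 * sigma_t, 1.0 / sfreq)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))    # unit-energy normalisation
        analytic = np.convolve(trial, wavelet, mode="same")
        power[i] = np.abs(analytic) ** 2                    # instantaneous power at frequency f
    return power

# Illustrative use on synthetic data standing in for one channel of one trial
sfreq = 600.0
trial = np.random.randn(int(2.0 * sfreq))                   # 2 s of noise as a placeholder signal
freqs = np.arange(4, 41, 2)                                 # theta- to gamma-range frequencies
tfr = morlet_tfr(trial, sfreq, freqs)                       # shape: (n_freqs, n_times)
```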

    How Bodies and Voices Interact in Early Emotion Perception

    Successful social communication draws strongly on the correct interpretation of others' body and vocal expressions. Both can provide emotional information and often occur simultaneously, yet their interplay has hardly been studied. Using electroencephalography, we investigated the temporal development underlying their neural interaction in auditory and visual perception. In particular, we tested whether this interaction qualifies as true integration, following multisensory integration principles such as inverse effectiveness. Emotional vocalizations were embedded in either low or high levels of noise and presented with or without video clips of matching emotional body expressions. In both high and low noise conditions, a reduction in auditory N100 amplitude was observed for audiovisual stimuli. However, only under high noise did the N100 peak earlier in the audiovisual than in the auditory condition, suggesting facilitatory effects as predicted by the inverse effectiveness principle. Similarly, we observed earlier N100 peaks in response to emotional compared to neutral audiovisual stimuli; this was not the case in the unimodal auditory condition. Furthermore, suppression of beta-band oscillations (15–25 Hz), primarily reflecting biological motion perception, was modulated 200–400 ms after the vocalization. Larger differences in suppression between audiovisual and audio stimuli in high compared to low noise levels were found for emotional stimuli, whereas no such difference was observed for neutral stimuli. This observation is in accordance with the inverse effectiveness principle and suggests a modulation of integration by emotional content. Overall, the results show that ecologically valid, complex stimuli such as combined body and vocal expressions are effectively integrated very early in processing.
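
    For a concrete sense of the beta-band suppression measure mentioned above, here is a minimal sketch that band-passes one EEG epoch at 15–25 Hz and compares Hilbert-envelope power in a 200–400 ms post-vocalization window with a pre-stimulus baseline. The epoch layout, filter order, and baseline window are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_suppression(eeg, sfreq, band=(15.0, 25.0), baseline=(-0.5, 0.0), window=(0.2, 0.4)):
    """Per cent change in beta-band power in a post-stimulus window vs. a pre-stimulus baseline.

    eeg : 1-D array for one channel/epoch; time 0 is taken as vocalization onset
    """
    b, a = butter(4, [band[0] / (sfreq / 2), band[1] / (sfreq / 2)], btype="band")
    envelope = np.abs(hilbert(filtfilt(b, a, eeg))) ** 2    # instantaneous beta power
    t = np.arange(eeg.size) / sfreq - 0.5                   # assumes 0.5 s of pre-stimulus data
    base = envelope[(t >= baseline[0]) & (t < baseline[1])].mean()
    post = envelope[(t >= window[0]) & (t < window[1])].mean()
    return 100.0 * (post - base) / base                     # negative values indicate suppression

sfreq = 500.0
epoch = np.random.randn(int(1.5 * sfreq))                   # placeholder for a 1.5 s epoch
print(beta_suppression(epoch, sfreq))
```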

    Bayesian mapping of pulmonary tuberculosis in Antananarivo, Madagascar

    Background: Tuberculosis (TB), an infectious disease caused by Mycobacterium tuberculosis, is endemic in Madagascar. The capital, Antananarivo, is the most seriously affected area. TB had a non-random spatial distribution in this setting, with clustering in the poorer areas. The aim of this study was to explore this pattern further with a Bayesian approach, and to measure the associations between the spatial variation of TB risk and national control program indicators across all neighbourhoods. Methods: A combination of a Bayesian approach and a generalized linear mixed model (GLMM) was developed to produce smoothed TB risk maps and to model relationships between new TB cases and national TB control program indicators. New TB cases were collected from the records of the 16 Tuberculosis Diagnostic and Treatment Centres (DTC) of the city from 2004 to 2006. Five TB indicators were considered in the analysis: the number of cases undergoing retreatment, the number of patients with treatment failure or relapse after completion of treatment, the number of households with more than one case, the number of patients lost to follow-up, and proximity to a DTC. Results: In Antananarivo, 43.23% of neighbourhoods had a standardized incidence ratio (SIR) above 1, of which 19.28% had a TB risk significantly higher than average. The identified high-risk areas were clustered, and the distribution of TB was associated mainly with the number of patients lost to follow-up (SIR: 1.10, 95% CI: 1.02-1.19) and the number of households with more than one case (SIR: 1.13, 95% CI: 1.03-1.24). Conclusion: The spatial pattern of TB in Antananarivo and the contribution of national control program indicators to this pattern highlight the importance of the data recorded in the TB registry and the value of spatial approaches for assessing the epidemiological situation of TB. Including these variables in the model increases reproducibility, as these data are already available for individual DTCs. These findings may also be useful for guiding decisions related to disease control strategies.
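
    The study's full spatial model combines a Bayesian formulation with a GLMM. As a much simpler illustration of Bayesian smoothing of neighbourhood risk, the sketch below computes raw standardized incidence ratios and their empirical-Bayes (Poisson-gamma) posterior means; the toy counts and the eb_smoothed_sir helper are hypothetical and do not reproduce the authors' model.

```python
import numpy as np

def eb_smoothed_sir(observed, expected):
    """Empirical-Bayes (Poisson-gamma) smoothing of standardized incidence ratios.

    observed : TB case counts per neighbourhood
    expected : expected counts per neighbourhood (e.g. from city-wide rates)
    Returns the raw SIRs and their posterior-mean (smoothed) estimates.
    """
    observed = np.asarray(observed, float)
    expected = np.asarray(expected, float)
    sir = observed / expected
    # Method-of-moments gamma prior for the relative risks (Clayton-Kaldor style)
    m = observed.sum() / expected.sum()                           # prior mean
    v = max(np.average((sir - m) ** 2, weights=expected), 1e-6)   # prior variance (floored)
    shape, rate = m**2 / v, m / v
    smoothed = (shape + observed) / (rate + expected)             # posterior mean of each relative risk
    return sir, smoothed

# Toy data for five hypothetical neighbourhoods
obs = [12, 3, 25, 7, 0]
exp = [10.0, 4.5, 18.0, 9.0, 2.0]
print(eb_smoothed_sir(obs, exp))
```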

    Audiovisual Non-Verbal Dynamic Faces Elicit Converging fMRI and ERP Responses

    In an everyday social interaction we automatically integrate another’s facial movements and vocalizations, be they linguistic or otherwise. This requires audiovisual integration of a continual barrage of sensory input—a phenomenon previously well studied with human audiovisual speech, but not with non-verbal vocalizations. Using both fMRI and ERPs, we assessed neural activity to viewing and listening to an animated female face producing non-verbal human vocalizations (i.e. coughing, sneezing) under audio-only (AUD), visual-only (VIS) and audiovisual (AV) stimulus conditions, alternating with rest (R). Underadditive effects occurred in regions dominant for sensory processing, which showed AV activation greater than the dominant modality alone. Right posterior temporal and parietal regions showed an AV maximum, in which AV activation was greater than either modality alone but not greater than the sum of the unisensory conditions. Other frontal and parietal regions showed common activation, in which AV activation was the same as one or both unisensory conditions. The ERP data showed an early superadditive effect (AV > AUD + VIS, no rest), mid-range underadditive effects for the auditory N140 and the face-sensitive N170, and late AV-maximum and common-activation effects. Based on the convergence between the fMRI and ERP data, we propose a mechanism whereby a multisensory stimulus may be signaled as early as 60 ms and facilitated in sensory-specific regions through increased processing speed (at the N170) and efficiency (decreased amplitude of auditory and face-sensitive cortical activation and ERPs). Finally, higher-order processes are also altered, but in a more complex fashion.
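
    The contrasts used to label regions above (superadditive, underadditive, AV maximum, common activation) boil down to comparing the AV response with the unisensory responses and their sum. The sketch below expresses a simplified version of that logic; the av_profile helper, the tolerance parameter, and the example values are illustrative and stand in for the statistical tests such a study would use.

```python
def av_profile(av, aud, vis, tol=0.0):
    """Relate an AV response to its unisensory components.

    Returns 'superadditive' (AV > AUD + VIS), 'underadditive but AV-maximal'
    (max(AUD, VIS) < AV <= AUD + VIS), or 'common activation' (AV comparable to
    the unisensory responses). `tol` is a placeholder for a proper statistical criterion.
    """
    if av > aud + vis + tol:
        return "superadditive"
    if av > max(aud, vis) + tol:
        return "underadditive but AV-maximal"
    return "common activation"

# Hypothetical region-level activation estimates -> 'underadditive but AV-maximal'
print(av_profile(av=1.6, aud=1.0, vis=0.9))
```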

    Monkeys and Humans Share a Common Computation for Face/Voice Integration

    Speech production involves the movement of the mouth and other regions of the face resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, it should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, if they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models such as the principle of inverse effectiveness and a “race” model failed to account for their behavior patterns. Conversely, a “superposition model”, positing the linear summation of activity patterns in response to visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates.
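
    One standard way to ask whether a race model can explain faster audiovisual detection is Miller's race-model inequality, which bounds the AV reaction-time CDF by the sum of the unisensory CDFs. The sketch below tests that bound on synthetic reaction times; it is a generic illustration of the comparison, not the authors' specific analysis, and all data and names are hypothetical.

```python
import numpy as np

def race_model_violation(rt_av, rt_a, rt_v, quantiles=np.linspace(0.05, 0.95, 10)):
    """Test Miller's race-model inequality: P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t).

    rt_av, rt_a, rt_v : arrays of correct-detection reaction times per condition
    Returns, per probed time point, the amount by which the AV CDF exceeds the bound
    (positive values indicate violations, i.e. AV responses faster than any race model allows).
    """
    ts = np.quantile(np.concatenate([rt_av, rt_a, rt_v]), quantiles)
    cdf = lambda rts, t: np.mean(rts <= t)
    return np.array([cdf(rt_av, t) - min(1.0, cdf(rt_a, t) + cdf(rt_v, t)) for t in ts])

# Synthetic reaction times (seconds) standing in for behavioural data
rng = np.random.default_rng(0)
rt_a = rng.normal(0.45, 0.08, 200)
rt_v = rng.normal(0.50, 0.08, 200)
rt_av = rng.normal(0.38, 0.07, 200)      # faster AV detection, as reported for both species
violations = race_model_violation(rt_av, rt_a, rt_v)
```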

    A transition from unimodal to multimodal activations in four sensory modalities in humans: an electrophysiological study

    Background: To investigate the long-latency activities common to all sensory modalities, electroencephalographic responses to auditory (1000 Hz pure tone), tactile (electrical stimulation to the index finger), visual (simple figure of a star), and noxious (intra-epidermal electrical stimulation to the dorsum of the hand) stimuli were recorded from 27 scalp electrodes in 14 healthy volunteers. Results: Results of source modeling showed multimodal activations in the anterior part of the cingulate cortex (ACC) and hippocampal region (Hip). The activity in the ACC was biphasic. In all sensory modalities, the first component of ACC activity peaked 30–56 ms later than the peak of the major modality-specific activity, the second component of ACC activity peaked 117–145 ms later than the peak of the first component, and the activity in Hip peaked 43–77 ms later than the second component of ACC activity. Conclusion: The temporal sequence of activations through modality-specific and multimodal pathways was similar among all sensory modalities.

    Encoding of Naturalistic Stimuli by Local Field Potential Spectra in Networks of Excitatory and Inhibitory Neurons

    Recordings of local field potentials (LFPs) reveal that the sensory cortex displays rhythmic activity and fluctuations over a wide range of frequencies and amplitudes. Yet, the role of this kind of activity in encoding sensory information remains largely unknown. To understand the rules of translation between the structure of sensory stimuli and the fluctuations of cortical responses, we simulated a sparsely connected network of excitatory and inhibitory neurons modeling a local cortical population, and we determined how the LFPs generated by the network encode information about input stimuli. We first considered simple static and periodic stimuli and then naturalistic input stimuli based on electrophysiological recordings from the thalamus of anesthetized monkeys watching natural movie scenes. We found that the simulated network produced stimulus-related LFP changes that were in striking agreement with the LFPs obtained from the primary visual cortex. Moreover, our results demonstrate that the network encoded static input spike rates into gamma-range oscillations generated by inhibitory–excitatory neural interactions and encoded slow dynamic features of the input into slow LFP fluctuations mediated by stimulus–neural interactions. The model cortical network processed dynamic stimuli with naturalistic temporal structure by using low and high response frequencies as independent communication channels, again in agreement with recent reports from visual cortex responses to naturalistic movies. One potential function of this frequency decomposition into independent information channels operated by the cortical network may be that of enhancing the capacity of the cortical column to encode our complex sensory environment.
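
    As a deliberately small, hedged sketch of the modelling approach described above, the code below simulates a sparsely connected network of excitatory and inhibitory leaky integrate-and-fire neurons, uses the summed absolute synaptic input as a crude LFP proxy, and estimates its power spectrum with Welch's method. All parameter values and the Poisson external drive are illustrative assumptions and are not taken from the study.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)

# Small sparsely connected E/I network of leaky integrate-and-fire neurons.
# All numbers below are illustrative, not the parameters used in the study.
n_exc, n_inh = 400, 100
n = n_exc + n_inh
p_conn = 0.1                                  # connection probability
j_exc, g = 0.3, 5.0                           # EPSP jump (mV), relative inhibitory strength
weights = (rng.random((n, n)) < p_conn).astype(float)
weights[:, :n_exc] *= j_exc                   # columns = presynaptic excitatory cells
weights[:, n_exc:] *= -g * j_exc              # columns = presynaptic inhibitory cells
np.fill_diagonal(weights, 0.0)

tau_m, v_th, v_reset = 20.0, 20.0, 0.0        # membrane time constant (ms), threshold/reset (mV)
dt, t_sim = 0.5, 2000.0                       # time step and duration (ms)
steps = int(t_sim / dt)
ext_rate = 5.0                                # assumed external Poisson drive (spikes/ms per neuron)
v = rng.uniform(0.0, v_th, n)                 # random initial membrane potentials
lfp = np.zeros(steps)

for step in range(steps):
    spikes = (v >= v_th).astype(float)
    v[v >= v_th] = v_reset
    recurrent = weights @ spikes              # summed recurrent synaptic input this step (mV)
    external = j_exc * rng.poisson(ext_rate * dt, n)
    v += dt * (-v / tau_m) + recurrent + external
    # Crude LFP proxy: total absolute synaptic input onto the population.
    lfp[step] = np.abs(recurrent).sum() + external.sum()

# Power spectrum of the simulated LFP proxy; inspect for gamma-band structure.
freqs, psd = welch(lfp - lfp.mean(), fs=1000.0 / dt, nperseg=1024)
```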

    Involvement of right STS in audio-visual integration for affective speech demonstrated using MEG

    Speech and emotion perception are dynamic processes in which it may be optimal to integrate synchronous signals emitted from different sources. Studies of audio-visual (AV) perception of neutrally expressed speech demonstrate supra-additive (i.e., where AV > [unimodal auditory + unimodal visual]) responses in left STS to crossmodal speech stimuli. However, emotions are often conveyed simultaneously with speech: through the voice in the form of speech prosody and through the face in the form of facial expression. Previous studies of AV nonverbal emotion integration showed a role for right (rather than left) STS. The current study therefore examined whether the integration of facial and prosodic signals of emotional speech is associated with supra-additive responses in left STS (cf. results for speech integration) or right STS (due to emotional content). As emotional displays are sometimes difficult to interpret, we also examined whether supra-additive responses were affected by emotional incongruence (i.e., ambiguity). Using magnetoencephalography, we continuously recorded eighteen participants as they viewed and heard emotionally congruent and emotionally incongruent AV speech stimuli. Significant supra-additive responses were observed in right STS within the first 250 ms for both emotionally incongruent and emotionally congruent AV speech stimuli, which further underscores the role of right STS in processing crossmodal emotive signals.