
    Explicit processing of verbal and spatial features during letter-location binding modulates oscillatory activity of a fronto-parietal network.

    The present study investigated the binding of verbal and spatial features in immediate memory. In a recent study, we demonstrated incidental and asymmetrical letter-location binding effects when participants attended to letter features (but not when they attended to location features); these effects were associated with greater oscillatory activity over prefrontal and posterior regions during the retention period. Here, we investigated whether the patterns of brain activity associated with the incidental binding of letters and locations observed when only the verbal feature is attended differ from those reflecting the binding resulting from the controlled/explicit processing of both verbal and spatial features. To achieve this, neural activity was recorded using magnetoencephalography (MEG) while participants performed two working memory tasks. The tasks were identical in their perceptual characteristics and differed only in their instructions: one required participants to process both letters and locations, whereas the other instructed them to memorize only the letters, regardless of their location. A time–frequency representation of the MEG data, based on the wavelet transform of the signals, was calculated on a single-trial basis during the maintenance period of both tasks. Critically, despite equivalent behavioural binding effects in both tasks, single- and dual-feature encoding relied on different neuroanatomical and neural oscillatory correlates. We propose that the enhanced activation of an anterior–posterior dorsal network observed in the task requiring the processing of both features reflects the need to allocate greater resources to intentionally process verbal and spatial features in this task.
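
The single-trial wavelet analysis described above can be illustrated with a minimal sketch: convolving a signal with complex Morlet wavelets yields power as a function of time and frequency. This is a toy Python illustration with hypothetical signal parameters, not the authors' actual MEG pipeline.

```python
import numpy as np

def morlet_tfr(signal, sfreq, freqs, n_cycles=7):
    """Single-trial time-frequency power via complex Morlet wavelet convolution."""
    n_times = signal.size
    power = np.empty((len(freqs), n_times))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)                  # wavelet width in time
        t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / sfreq)
        wavelet = (np.exp(2j * np.pi * f * t) *
                   np.exp(-t**2 / (2 * sigma_t**2)))          # complex Morlet
        wavelet /= np.sqrt(np.sum(np.abs(wavelet)**2))        # unit energy
        conv = np.convolve(signal, wavelet, mode="same")
        power[i] = np.abs(conv)**2                            # power at frequency f
    return power

# Example: 2 s of a 10 Hz oscillation sampled at 250 Hz
sfreq = 250.0
t_sig = np.arange(0, 2, 1 / sfreq)
trial = np.sin(2 * np.pi * 10 * t_sig)
tfr = morlet_tfr(trial, sfreq, freqs=np.array([5.0, 10.0, 20.0]))
```

Power concentrates in the row matching the signal's oscillation frequency, which is how retention-period alpha/beta/gamma activity is separated in analyses like these.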

    How Bodies and Voices Interact in Early Emotion Perception

    Successful social communication draws strongly on the correct interpretation of others' body and vocal expressions. Both can provide emotional information and often occur simultaneously, yet their interplay has hardly been studied. Using electroencephalography, we investigated the temporal development underlying their neural interaction in auditory and visual perception. In particular, we tested whether this interaction qualifies as true integration following multisensory integration principles such as inverse effectiveness. Emotional vocalizations were embedded in either low or high levels of noise and presented with or without video clips of matching emotional body expressions. In both high and low noise conditions, a reduction in auditory N100 amplitude was observed for audiovisual stimuli. However, only under high noise did the N100 peak earlier in the audiovisual than in the auditory condition, suggesting facilitatory effects as predicted by the inverse effectiveness principle. Similarly, we observed earlier N100 peaks in response to emotional compared to neutral audiovisual stimuli; this was not the case in the unimodal auditory condition. Furthermore, suppression of beta-band oscillations (15–25 Hz), primarily reflecting biological motion perception, was modulated 200–400 ms after the vocalization. Larger differences in suppression between audiovisual and audio stimuli under high compared to low noise were found for emotional stimuli, but no such difference was observed for neutral stimuli. This observation is in accordance with the inverse effectiveness principle and suggests a modulation of integration by emotional content. Overall, the results show that ecologically valid, complex stimuli such as combined body and vocal expressions are effectively integrated very early in processing.
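
The inverse effectiveness principle invoked above is commonly quantified as the gain of the multisensory response over the best unisensory response, a gain that should grow as the unisensory stimulus becomes less effective (e.g., under high noise). A minimal Python sketch with hypothetical accuracy values, not data from this study:

```python
def enhancement(av, best_uni):
    """Multisensory response enhancement (%) relative to the best
    unisensory response, a standard index for inverse effectiveness."""
    return 100.0 * (av - best_uni) / best_uni

# Hypothetical detection accuracies (proportion correct)
low_noise = enhancement(av=0.95, best_uni=0.90)    # stimulus already effective
high_noise = enhancement(av=0.70, best_uni=0.50)   # weak unisensory signal

# Inverse effectiveness: the relative gain is larger when the
# unisensory stimulus is less effective (high noise)
```

With these illustrative numbers the gain is roughly 5.6% under low noise versus 40% under high noise, mirroring the direction of the N100 and beta-suppression effects reported in the abstract.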

    Bayesian mapping of pulmonary tuberculosis in Antananarivo, Madagascar

    Background: Tuberculosis (TB), an infectious disease caused by Mycobacterium tuberculosis, is endemic in Madagascar. The capital, Antananarivo, is the most seriously affected area. TB had a non-random spatial distribution in this setting, with clustering in the poorer areas. The aim of this study was to explore this pattern further using a Bayesian approach, and to measure the associations between the spatial variation of TB risk and national control program indicators for all neighbourhoods. Methods: A combination of a Bayesian approach and a generalized linear mixed model (GLMM) was developed to produce smooth risk maps of TB and to model the relationships between new TB cases and national TB control program indicators. New TB cases were collected from the records of the 16 Tuberculosis Diagnostic and Treatment Centres (DTC) of the city from 2004 to 2006. Five TB indicators were considered in the analysis: the number of cases undergoing retreatment, the number of patients with treatment failure and those suffering relapse after the completion of treatment, the number of households with more than one case, the number of patients lost to follow-up, and proximity to a DTC. Results: In Antananarivo, 43.23% of the neighbourhoods had a standardized incidence ratio (SIR) above 1, of which 19.28% had a TB risk significantly higher than the average. The identified high-risk areas were clustered, and the distribution of TB was found to be associated mainly with the number of patients lost to follow-up (SIR: 1.10, 95% CI: 1.02–1.19) and the number of households with more than one case (SIR: 1.13, 95% CI: 1.03–1.24). Conclusion: The spatial pattern of TB in Antananarivo and the contribution of national control program indicators to this pattern highlight the importance of the data recorded in the TB registry and the value of spatial approaches for assessing the epidemiological situation of TB. Including these variables in the model increases reproducibility, as these data are already available for individual DTCs. These findings may also be useful for guiding decisions related to disease control strategies.
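
The standardized incidence ratio (SIR) underlying this mapping is the ratio of observed to expected cases, with expected counts derived from the city-wide rate (indirect standardization). A minimal Python sketch with hypothetical neighbourhood counts; the study additionally smooths such ratios with a Bayesian GLMM:

```python
import numpy as np

# Hypothetical neighbourhood data (population, observed TB cases);
# the real study used 2004-2006 registry data from the 16 DTCs.
population = np.array([12000, 8000, 20000, 5000])
observed = np.array([30, 10, 45, 20])

# Expected cases under the city-wide rate (indirect standardization)
overall_rate = observed.sum() / population.sum()
expected = population * overall_rate

# Standardized incidence ratio: >1 means more cases than expected
sir = observed / expected
```

By construction the expected counts sum to the observed total; neighbourhoods with SIR above 1 are candidates for the clustered high-risk areas the Bayesian smoothing then identifies formally.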

    Oscillatory activity in prefrontal and posterior regions during implicit letter-location binding.

    Many cognitive abilities involve the integration of information from different modalities, a process referred to as "binding." It remains unclear, however, whether the creation of bound representations occurs in an involuntary manner, and whether the links between the constituent features of an object are symmetrical. We used magnetoencephalography (MEG) to investigate whether oscillatory brain activity related to binding processes would be observed in conditions in which participants maintain one feature only (involuntary binding), and whether this activity varies as a function of the feature attended to by participants (binding asymmetry). Participants performed two probe recognition tasks that were identical in their perceptual characteristics and differed only in the instructions given (to memorize either consonants or locations). MEG data were reconstructed using a current source distribution estimation in the classical frequency bands. We observed implicit verbal–spatial binding only when participants successfully maintained the identity of consonants; this was associated with a selective increase in oscillatory activity over prefrontal regions in all frequency bands during the first half of the retention period, accompanied by increased activity in posterior brain regions. The increase in oscillatory activity in prefrontal areas was only observed during the verbal task, which suggests that this activity might signal neural processes specifically involved in cross-code binding. The current results agree with proposals suggesting that the prefrontal cortex functions as a "pointer" that indexes the features that belong together within an object.

    Attentional modulations of the early and later stages of the neural processing of visual completion

    The brain effortlessly recognizes objects even when the visual information belonging to an object is widely separated, as is well demonstrated by Kanizsa-type illusory contours (ICs), in which a contour is perceived despite the fragments of the contour being separated by gaps. Such large-range visual completion has long been thought to be preattentive, whereas its dependence on top-down influences remains unclear. Here, we report separate modulations by spatial attention and task relevance of the neural activities in response to ICs. IC-sensitive event-related potentials that were localized to the lateral occipital cortex were modulated by spatial attention at an early processing stage (130–166 ms after stimulus onset) and by task relevance at a later processing stage (234–290 ms). These results not only demonstrate top-down attentional influences on the neural processing of ICs but also elucidate the characteristics of the attentional modulations that occur in different phases of IC processing.

    Audiovisual Non-Verbal Dynamic Faces Elicit Converging fMRI and ERP Responses

    In everyday social interaction we automatically integrate another's facial movements and vocalizations, be they linguistic or otherwise. This requires audiovisual integration of a continual barrage of sensory input, a phenomenon previously well studied with human audiovisual speech but not with non-verbal vocalizations. Using both fMRI and ERPs, we assessed neural activity while participants viewed and listened to an animated female face producing non-verbal human vocalizations (i.e., coughing, sneezing) under audio-only (AUD), visual-only (VIS) and audiovisual (AV) stimulus conditions, alternating with rest (R). Underadditive effects occurred in regions dominant for sensory processing, which showed AV activation greater than the dominant modality alone. Right posterior temporal and parietal regions showed an AV maximum, in which AV activation was greater than either modality alone but not greater than the sum of the unisensory conditions. Other frontal and parietal regions showed common activation, in which AV activation was the same as one or both unisensory conditions. ERP data showed an early superadditive effect (AV > AUD + VIS, no rest), mid-range underadditive effects for the auditory N140 and the face-sensitive N170, and late AV-maximum and common-activation effects. Based on the convergence between fMRI and ERP data, we propose a mechanism whereby a multisensory stimulus may be signaled as early as 60 ms and facilitated in sensory-specific regions by increased processing speed (at N170) and efficiency (decreased amplitude of auditory and face-sensitive cortical activation and ERPs). Finally, higher-order processes are also altered, but in a more complex fashion.
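
The super-/underadditive terminology above compares the audiovisual (AV) response against the sum of the unisensory responses. A simplified Python classifier along those lines; the paper's region labels rest on finer statistical criteria than this toy tolerance:

```python
def classify_additivity(av, aud, vis, tol=0.05):
    """Compare an AV response with the sum of the unisensory responses
    under the additive model (a simplification of the finer region
    labels used in the study)."""
    total = aud + vis
    if av > total * (1 + tol):
        return "superadditive"   # AV > AUD + VIS
    if av < total * (1 - tol):
        return "underadditive"   # AV < AUD + VIS
    return "additive"            # AV approximately equals AUD + VIS

# Usage with hypothetical response magnitudes
label = classify_additivity(av=2.5, aud=1.0, vis=1.0)  # AV exceeds the sum
```

The early ERP effect reported above (AV > AUD + VIS) is superadditive in exactly this sense, while the N140/N170 amplitude reductions are underadditive.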

    The COGs (context, object, and goals) in multisensory processing

    Our understanding of how perception operates in real-world environments has been substantially advanced by studying both multisensory processes and "top-down" control processes that influence sensory processing via activity from higher-order brain areas, such as attention, memory, and expectations. As the two topics have traditionally been studied separately, the mechanisms orchestrating real-world multisensory processing remain unclear. Past work has revealed that the observer's goals gate the influence of many multisensory processes on brain and behavioural responses, whereas some other multisensory processes might occur independently of these goals. Consequently, other forms of top-down control beyond goal dependence are necessary to explain the full range of multisensory effects currently reported at the brain and cognitive levels. These forms of control include sensitivity to stimulus context as well as the detection of matches (or lack thereof) between a multisensory stimulus and the categorical attributes of naturalistic objects (e.g. tools, animals). In this review we discuss and integrate the existing findings that demonstrate the importance of such goal-, object- and context-based top-down control over multisensory processing. We then put forward a few principles emerging from this literature with respect to the mechanisms underlying multisensory processing and discuss their possible broader implications.

    Monkeys and Humans Share a Common Computation for Face/Voice Integration

    Speech production involves movement of the mouth and other regions of the face, resulting in visual motion cues. These visual cues enhance the intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, this should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share vocal production biomechanics with humans and communicate face-to-face with vocalizations. It is unknown, however, whether they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations and models, such as the principle of inverse effectiveness and a "race" model, failed to account for the behavior patterns. Conversely, a "superposition" model, positing the linear summation of activity patterns in response to the visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates.
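
The race/superposition contrast can be sketched with noisy linear accumulators: a race model takes the faster of two independent unisensory detections, whereas superposition sums the two evidence streams into a single accumulator, which therefore reaches threshold sooner. An illustrative Python simulation; all rates, thresholds and noise levels are hypothetical, not fit to the monkey or human data:

```python
import numpy as np

rng = np.random.default_rng(0)

def detection_time(rate, threshold=100.0, dt=1.0, noise=1.0):
    """Time for a noisy linear accumulator to reach threshold."""
    x, t = 0.0, 0.0
    while x < threshold:
        x += rate * dt + noise * rng.normal() * np.sqrt(dt)
        t += dt
    return t

# Hypothetical drift rates for auditory and visual evidence
r_aud, r_vis = 1.0, 0.8

# Race model: independent channels, respond when the first one finishes
race = [min(detection_time(r_aud), detection_time(r_vis)) for _ in range(200)]

# Superposition model: one accumulator driven by the summed activity
superpos = [detection_time(r_aud + r_vis) for _ in range(200)]
```

Superposition predicts faster average detection than the race of separate channels, which is the qualitative signature the abstract reports in both species.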

    A transition from unimodal to multimodal activations in four sensory modalities in humans: an electrophysiological study

    Background: To investigate the long-latency activities common to all sensory modalities, electroencephalographic responses to auditory (1000 Hz pure tone), tactile (electrical stimulation of the index finger), visual (simple figure of a star), and noxious (intra-epidermal electrical stimulation of the dorsum of the hand) stimuli were recorded from 27 scalp electrodes in 14 healthy volunteers. Results: Source modeling showed multimodal activations in the anterior part of the cingulate cortex (ACC) and the hippocampal region (Hip). The activity in the ACC was biphasic. In all sensory modalities, the first component of ACC activity peaked 30–56 ms after the peak of the major modality-specific activity, the second component of ACC activity peaked 117–145 ms after the peak of the first component, and the activity in Hip peaked 43–77 ms after the second component of ACC activity. Conclusion: The temporal sequence of activations through modality-specific and multimodal pathways was similar across all sensory modalities.