
    Top-down and bottom-up modulation in processing bimodal face/voice stimuli

    Abstract

    Background: Processing of multimodal information is a critical capacity of the human brain, with classic studies showing that bimodal stimulation can either facilitate or interfere with perceptual processing. Comparing activity to congruent and incongruent bimodal stimuli can reveal sensory dominance in particular cognitive tasks.

    Results: We investigated audiovisual interactions driven by stimulus properties (bottom-up influences) or by task (top-down influences) on congruent and incongruent simultaneously presented faces and voices while ERPs were recorded. Subjects performed gender categorisation, directing attention either to faces or to voices, and also judged whether the face/voice stimuli were congruent in terms of gender. Behaviourally, the unattended modality affected processing in the attended modality: the disruption was greater for attended voices. ERPs revealed top-down modulations of early brain processing (30-100 ms) over unisensory cortices. No effects were found on the N170 or VPP components, but from 180-230 ms larger right frontal activity was seen for incongruent than for congruent stimuli.

    Conclusions: Our data demonstrate that in a gender categorisation task the processing of faces dominates over the processing of voices. Brain activity was modulated differently by top-down and bottom-up information: top-down influences modulated early brain activity, whereas bottom-up interactions occurred relatively late.

    Association of the distinct visual representations of faces and names: a PET activation study.

    A PET study of seven normal individuals was carried out to investigate the neural populations involved in the retrieval of the visual representation of a face when presented with an associated name, and vice versa. Face-name associations were studied by means of four experimental matching conditions, including the retrieval of previously learned (1) name-name (NN), (2) face-face (FF), (3) name-face (NF), and (4) face-name (FN) associations, as well as a resting scan with eyes closed. Before PET image acquisition, subjects were presented with 24 unknown face-name associations to encode, presented as 12 male/female couples. During PET scanning, their task was to decide whether the presented pair was a previously learned association. The right fusiform gyrus was strongly activated in the FF condition compared to the NN and Rest conditions; however, no specific activations were found for the NN condition relative to the FF condition. A network of three left-hemisphere areas, active in both the (NF-FF) and (FN-NN) comparisons, was interpreted as the locus of integration of visual face and name representations. These three regions were localized in the inferior frontal gyrus (BA 45), the medial frontal gyrus (BA 6), and the supramarginal gyrus of the inferior parietal lobe (BA 40). An interactive model accounting for these results, with BA 40 seen as an amodal binding region, is proposed.