22 research outputs found

    Changes in Early Cortical Visual Processing Predict Enhanced Reactivity in Deaf Individuals

    Individuals with profound deafness rely critically on vision to interact with their environment. Improvement of visual performance as a consequence of auditory deprivation is assumed to result from cross-modal changes occurring in late stages of visual processing. Here we measured reaction times and event-related potentials (ERPs) in profoundly deaf adults and hearing controls during a speeded visual detection task, to assess to what extent the enhanced reactivity of deaf individuals could reflect plastic changes in the early cortical processing of the stimulus. We found that deaf subjects were faster than hearing controls at detecting the visual targets, regardless of their location in the visual field (peripheral or peri-foveal). This behavioural facilitation was associated with ERP changes starting from the first detectable response in the striate cortex (C1 component) at about 80 ms after stimulus onset, and in the P1 complex (100–150 ms). In addition, we found that P1 peak amplitudes predicted the response times in deaf subjects, whereas in hearing individuals visual reactivity and ERP amplitudes correlated only at later stages of processing. These findings show that long-term auditory deprivation can profoundly alter visual processing from the earliest cortical stages. Furthermore, our results provide the first evidence of a co-variation between modified brain activity (cortical plasticity) and behavioural enhancement in this sensory-deprived population.
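    For readers who want to see how such an amplitude-behaviour relationship can be quantified, the sketch below extracts, for each subject, a P1 peak amplitude in a 100–150 ms window and correlates it with the subject's median reaction time across the group. All data, the channel choice, and the exact window are illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch only: hypothetical data, not the study's analysis pipeline.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subjects, n_times = 20, 300                        # 300 samples at 1 kHz = 0-299 ms post-stimulus
times = np.arange(n_times)                           # ms
erps = rng.normal(0.0, 1.0, (n_subjects, n_times))   # per-subject ERP at one occipital channel (uV)
median_rts = rng.normal(320.0, 30.0, n_subjects)     # per-subject median reaction time (ms)

# P1 peak amplitude: maximum in the 100-150 ms window, per subject
win = (times >= 100) & (times <= 150)
p1_peak = erps[:, win].max(axis=1)

# Across-subject correlation between P1 amplitude and reaction time
r, p = pearsonr(p1_peak, median_rts)
print(f"P1 amplitude vs. RT: r = {r:.2f}, p = {p:.3f}")
```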

    Electrophysiological correlates of the integration of auditory and visual information in cross-modal perception in humans

    Until recently, research on the neurophysiological bases of sensory perception has largely been conducted within separate sensory modalities. Yet most of the objects and events in our daily environment are characterized by components from several sensory modalities, particularly auditory and visual ones (e.g., a barking dog). A fundamental question, therefore, is how the brain integrates the different sensory features of the same object to form a unitary percept. Using event-related potentials, Giard and Peronnet [J. Cogn. Neurosci. 11(5) (1999)] showed that the identification of objects characterized by redundant auditory and visual features induces cross-modal interactions in sensory-specific cortices (as early as 40 ms in the visual cortex) and in non-specific areas. Following this study, we used the same methodological approach to examine the influence of the experimental context on the spatio-temporal organization of these interaction networks. We carried out a series of experiments using the same audiovisual objects while varying exogenous (informational content of the stimulus, nature of the task to be performed) and endogenous (attention) parameters. The results confirm that multisensory integration involves distributed brain areas, including modality-specific (auditory and visual) cortices and non-specific sites. In addition, they show that the integrative mechanisms strongly depend on the parameters manipulated, as well as on the subject's sensory skill for the required task. Overall, our results provide evidence for the variability of the strategies implemented by the brain to synthesize bimodal information from the same object as efficiently as possible, thereby revealing the flexibility and highly adaptive character of the integrative processes.
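    As a rough illustration of the additive-model logic used in this line of work, cross-modal interactions are typically estimated as the difference AV − (A + V) between the ERP to bimodal stimuli and the sum of the unimodal ERPs. The sketch below computes that interaction term from averaged ERPs; the array shapes and data are placeholders, not material from the experiments described above.

```python
# Minimal sketch of the additive model for cross-modal interactions:
# interaction(t) = ERP_AV(t) - (ERP_A(t) + ERP_V(t))
# Hypothetical arrays; real analyses use baseline-corrected, artifact-free averages.
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_times = 32, 500                           # e.g. 32 electrodes, 500 ms at 1 kHz
erp_a  = rng.normal(0.0, 1.0, (n_channels, n_times))    # average ERP, auditory-only trials (uV)
erp_v  = rng.normal(0.0, 1.0, (n_channels, n_times))    # average ERP, visual-only trials
erp_av = rng.normal(0.0, 1.0, (n_channels, n_times))    # average ERP, audiovisual trials

interaction = erp_av - (erp_a + erp_v)

# Any reliable deviation from zero (assessed statistically across subjects)
# is taken as evidence of audiovisual interaction at that channel and latency.
print("largest |interaction| per channel (uV):", np.abs(interaction).max(axis=1)[:5])
```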

    Audiovisual interactions in the human auditory cortex (electrophysiological and behavioural approaches)

    Using behavioural and electrophysiological (event-related potential, ERP) measures, we studied the audiovisual (AV) interactions involved in two essentially auditory processes: speech perception and representations in auditory sensory memory (ASM). Regarding speech, we showed that seeing lip movements speeds up the phonological discrimination of syllables. This behavioural facilitation was associated both with an activation of the auditory cortices (mainly secondary areas) by lip movements, visible in intracerebral ERPs recorded in epileptic patients, and with a reduction of auditory activity between 50 and 200 ms after sound onset, visible both intracerebrally in patients and in scalp ERPs in normal subjects. A further behavioural study showed that facilitation can also be observed when lip movements provide only temporal, non-phonetic information, but only in noise. Regarding ASM, we showed that a rare AV event in a sequence of standard events is detected faster than an auditory-only or visual-only event. This facilitation may be related to interactions between the auditory and visual memory traces indexed by the auditory and visual mismatch negativities (MMNs) of the ERPs. We also showed, through analysis of the auditory MMN, that the representation of an AV event in ASM differs from that of its auditory component alone, but only when its unimodal components are regularly associated. By contrast, we failed to show that the representation of such a regularity can generate an MMN when it is violated.

    Is the auditory sensory memory sensitive to visual information?

    The mismatch negativity (MMN) component of auditory event-related brain potentials can be used as a probe to study the representation of sounds in auditory sensory memory (ASM). Yet it has been shown that an auditory MMN can also be elicited by an illusory auditory deviance induced by visual changes. This suggests that some visual information may be encoded in ASM and is accessible to the auditory MMN process. It is not known, however, whether visual information affects the ASM representation of any audiovisual event or whether this phenomenon is limited to specific domains in which strong audiovisual illusions occur. To address this issue, we compared the topographies of MMNs elicited by non-speech audiovisual stimuli deviating from audiovisual standards on the visual, the auditory, or both dimensions. Contrary to what occurs with audiovisual illusions, each unimodal deviant elicited sensory-specific MMNs, and the MMN to audiovisual deviants included both sensory components. The visual MMN was, however, different from a genuine visual MMN obtained in a visual-only control oddball paradigm, suggesting that auditory and visual information interact before the MMN process occurs. Furthermore, the MMN to audiovisual deviants was significantly different from the sum of the two sensory-specific MMNs, showing that the processes of visual and auditory change detection are not completely independent.
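    To make the comparison concrete, the sketch below derives each MMN as a deviant-minus-standard difference wave and contrasts the audiovisual MMN with the sum of the two unimodal MMNs. Electrode counts, epoch lengths, and data are assumptions for illustration only, not the study's recordings.

```python
# Sketch: MMNs as deviant-minus-standard difference waves, plus an additivity check.
# All arrays are hypothetical averages (channels x time); not the study's data.
import numpy as np

rng = np.random.default_rng(2)
shape = (32, 400)                       # 32 channels, 400 ms epoch
standard = rng.normal(0.0, 1.0, shape)  # ERP to audiovisual standards
dev_a    = rng.normal(0.0, 1.0, shape)  # ERP to auditory-only deviants
dev_v    = rng.normal(0.0, 1.0, shape)  # ERP to visual-only deviants
dev_av   = rng.normal(0.0, 1.0, shape)  # ERP to audiovisual (double) deviants

mmn_a  = dev_a  - standard
mmn_v  = dev_v  - standard
mmn_av = dev_av - standard

# If auditory and visual change detection were fully independent,
# the audiovisual MMN should roughly equal the sum of the unimodal MMNs.
residual = mmn_av - (mmn_a + mmn_v)
print("mean |residual| (uV):", np.abs(residual).mean())
```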

    Relation Between Level of Prefrontal Activity and Subject’s Performance


    Bimodal speech: early suppressive visual effects in human auditory cortex.

    While everyone has experienced that seeing lip movements may improve speech perception, little is known about the neural mechanisms by which audiovisual speech information is combined. Event-related potentials (ERPs) were recorded while subjects performed an auditory recognition task among four different natural syllables randomly presented in the auditory (A), visual (V) or congruent bimodal (AV) condition. We found that: (i) bimodal syllables were identified more rapidly than auditory alone stimuli; (ii) this behavioural facilitation was associated with cross-modal [AV-(A+V)] ERP effects around 120-190 ms latency, expressed mainly as a decrease of unimodal N1 generator activities in the auditory cortex. This finding provides evidence for suppressive, speech-specific audiovisual integration mechanisms, which are likely to be related to the dominance of the auditory modality for speech perception. Furthermore, the latency of the effect indicates that integration operates at pre-representational stages of stimulus analysis, probably via feedback projections from visual and/or polymodal areas.
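    Effects like the one reported here are typically quantified as mean amplitudes of the cross-modal term in the latency window of interest. The sketch below compares AV against A + V in a 120-190 ms window across subjects with a paired t-test; the channel, window boundaries, and data are illustrative assumptions, not the authors' analysis.

```python
# Sketch: testing the cross-modal effect [AV - (A + V)] in the 120-190 ms window.
# Hypothetical per-subject ERPs at one channel (uV); not the study's data.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)
n_subjects, n_times = 18, 400
times = np.arange(n_times)                      # ms, 1 kHz sampling assumed
erp_a  = rng.normal(0.0, 1.0, (n_subjects, n_times))
erp_v  = rng.normal(0.0, 1.0, (n_subjects, n_times))
erp_av = rng.normal(0.0, 1.0, (n_subjects, n_times))

win = (times >= 120) & (times <= 190)
av_mean  = erp_av[:, win].mean(axis=1)          # per-subject mean amplitude, AV
sum_mean = (erp_a + erp_v)[:, win].mean(axis=1) # per-subject mean amplitude, A + V

t, p = ttest_rel(av_mean, sum_mean)
print(f"AV vs. (A + V), 120-190 ms: t = {t:.2f}, p = {p:.3f}")
```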

    Interactive processing of timbre dimensions: an exploration with event-related potentials

    Timbre characterizes the identity of a sound source. On psychoacoustic grounds, it has been described as a multidimensional perceptual attribute of complex sounds. Using Garner's interference paradigm, we found in a previous behavioral study that three timbral dimensions exhibited interactive processing. These timbral dimensions acoustically corresponded to attack time, spectral centroid, and spectrum fine structure. Here, using event-related potentials (ERPs), we sought neurophysiological correlates of the interactive processing of these dimensions of timbre. ERPs allowed us to dissociate several levels of interaction, at both early perceptual and late stimulus identification stages of processing. The cost of filtering out an irrelevant timbral dimension was accompanied by a late negative-going activity, whereas congruency effects between timbre dimensions were associated with interactions in both early sensory and late processing stages. ERPs also helped to determine the similarities and differences in the interactions displayed by the different pairs of timbre dimensions, revealing in particular variations in the latencies at which temporal and spectral timbre dimensions can interfere with the processing of another spectral timbre dimension.
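    As a pointer to how the behavioural side of Garner's paradigm is usually quantified, the sketch below computes the interference cost as the reaction-time difference between filtering blocks (where the irrelevant dimension varies) and baseline blocks (where it is held constant). The numbers are made up for illustration and are not data from this study.

```python
# Sketch: Garner interference cost = RT(filtering block) - RT(baseline block).
# Hypothetical per-subject median RTs in ms; not data from the study.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(4)
n_subjects = 16
rt_baseline  = rng.normal(520.0, 40.0, n_subjects)               # irrelevant timbre dimension fixed
rt_filtering = rt_baseline + rng.normal(25.0, 15.0, n_subjects)   # irrelevant dimension varies

interference = rt_filtering - rt_baseline
t, p = ttest_rel(rt_filtering, rt_baseline)
print(f"mean Garner interference: {interference.mean():.1f} ms (t = {t:.2f}, p = {p:.3f})")
```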