    Rapid detection of snakes modulates spatial orienting in infancy

    Recent evidence for an evolved fear module in the brain comes from studies showing that adults, children and infants detect evolutionarily threatening stimuli such as snakes faster than non-threatening ones. A decisive argument for a threat-detection system that is efficient early in life would come from data showing, in young infants, a functional threat-detection mechanism spanning the “what” and “where” visual pathways. The present study used a variant of Posner’s cuing paradigm adapted to 7–11-month-olds. On each trial, a threat-irrelevant or a threat-relevant cue was presented (a flower or a snake, i.e., “what”). We measured how fast infants detected these cues and the extent to which they influenced the spatial allocation of attention (“where”). In line with previous findings, infants oriented faster towards snake than flower cues. Importantly, a facilitation effect was found at the cued location for flowers but not for snakes, suggesting that snake cues elicit a broadening of attention and arguing in favour of sophisticated “what–where” connections. These results strongly support the claim that humans have an early propensity to detect evolutionarily threat-relevant stimuli.
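    The “what”/“where” analysis described above can be sketched in a few lines of code. The following is a minimal illustration under invented assumptions (the trial table, field names and latency values are hypothetical, not the study’s data or scripts): it computes each cue type’s overall orienting latency and its facilitation effect, i.e., the latency cost of shifting attention away from the cued location.

```python
# Minimal sketch of a Posner-cueing analysis (hypothetical data layout and
# values; not the authors' code). Each trial records the cue category,
# whether the target appeared at the cued (valid) or uncued (invalid)
# location, and the saccadic latency toward the target in milliseconds.
from statistics import mean

trials = [
    {"cue": "snake",  "valid": True,  "latency_ms": 310},
    {"cue": "snake",  "valid": False, "latency_ms": 315},
    {"cue": "flower", "valid": True,  "latency_ms": 340},
    {"cue": "flower", "valid": False, "latency_ms": 395},
    # ... one dict per trial
]

def mean_latency(cue, valid=None):
    """Mean latency for one cue type, optionally restricted by validity."""
    subset = [t["latency_ms"] for t in trials
              if t["cue"] == cue and (valid is None or t["valid"] == valid)]
    return mean(subset)

for cue in ("snake", "flower"):
    # "What": overall speed of orienting toward the cue's location.
    overall = mean_latency(cue)
    # "Where": facilitation = invalid minus valid latency; a positive value
    # means attention stayed narrowed on the cued location.
    facilitation = mean_latency(cue, valid=False) - mean_latency(cue, valid=True)
    print(f"{cue}: detection = {overall:.0f} ms, facilitation = {facilitation:.0f} ms")
```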

    Perception of cued French speech (Langue française Parlée Complétée): integration of the lips-hand-sound trio

    Langue française Parlée Complétée (LPC) is a system little known to the general public. Adapted from Cued Speech in 1977, it aims to help French-speaking deaf people perceive an oral message by complementing the information provided by lip-reading with a manual gesture. Although LPC has been the subject of numerous scientific studies since its creation, few researchers have so far investigated the processes at play in the perception of cued speech. Yet, through the joint presence of visual cues (from the lips and the hand) and auditory cues (via hearing aids or a cochlear implant), the study of LPC offers an ideal framework for research on multimodal integration in speech processing. Indeed, it is now known that deaf and normally hearing people alike draw on both hearing and sight to perceive speech, a phenomenon called audio-visual (AV) integration. In this thesis, we sought to objectify and characterise labio-manual integration in the perception of cued speech. Does the weight the perceptual system assigns to manual information, on the one hand, and to labial information, on the other, depend on the quality of each? Does it vary with hearing status? When auditory information is available, how is the processing of manual information incorporated into audio-visual processing? To address this series of questions, five experimental paradigms were created and administered to deaf and normally hearing adults who decode LPC. The first three studies focused on the perception of cued speech without auditory information. Study 1 aimed to objectify labio-manual integration; the impact of the quality of labial information and of hearing status on this integration was also investigated. Study 2 examined the joint impact of the quality of manual and labial information; we also compared normally hearing decoders with deaf decoders. Finally, Study 3 examined, in normally hearing and deaf decoders, the effect of incongruence between labial and manual information on word perception. The last two studies focused on the perception of cued speech with sound. Study 4 compared the impact of LPC on AV integration between deaf and normally hearing participants. Finally, Study 5 compared the impact of LPC in deaf decoders with weak versus strong auditory recovery. Our results confirmed that the LPC code is genuinely anchored in speech and showed that the weight of each source of information within the integration process depends, in particular, on the quality of the manual stimulus, the quality of the labial stimulus and the level of auditory performance. Doctorate in Psychological and Educational Sciences.
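    The closing claim, that the weight of each channel depends on its quality, corresponds to the standard reliability-weighted picture of cue integration. The toy sketch below illustrates that general idea only; the numbers, field names and weighting rule are invented for illustration and are not the model developed in the thesis. Degrading one channel’s reliability shifts weight toward the others.

```python
# Generic reliability-weighted cue combination (illustrative only; not the
# thesis model). Each channel carries an evidence score for some percept
# and a reliability reflecting stimulus quality or auditory performance.
cues = {
    "manual":   {"evidence": 0.9, "reliability": 4.0},  # clear hand cue
    "labial":   {"evidence": 0.4, "reliability": 1.0},  # degraded lips
    "auditory": {"evidence": 0.6, "reliability": 2.0},  # partial hearing
}

def integrate(cues):
    """Weighted average: each channel counts in proportion to its reliability."""
    total = sum(c["reliability"] for c in cues.values())
    return sum(c["evidence"] * c["reliability"] for c in cues.values()) / total

print(f"integrated evidence = {integrate(cues):.2f}")  # dominated by the hand here
```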

    Integration of auditory, labial and manual signals in cued speech perception by deaf adults: an adaptation of the McGurk paradigm

    Among deaf individuals fitted with a cochlear implant, some use Cued Speech (CS; a system in which each syllable is uttered together with a complementary manual gesture) and therefore have to combine auditory, labial and manual information to perceive speech. We examined how audio-visual (AV) speech integration is affected by the presence of manual cues and on which source of information (auditory, labial or manual) CS perceivers primarily rely, depending on labial ambiguity. To address this issue, deaf CS users (N=36) and CS-naïve deaf participants (N=35) performed an identification task on two AV McGurk stimuli (one with a plosive and one with a fricative consonant). Manual cues were congruent with either the auditory information, the lip information or the expected fusion. Results revealed that deaf individuals can merge audio and labial information into a single unified percept. Without manual cues, participants gave a high proportion of fusion responses (particularly with ambiguous plosive McGurk stimuli). Results also suggested that manual cues can modify AV integration and that their impact differs between plosive and fricative McGurk stimuli.

    How is the McGurk effect modulated by Cued Speech in deaf and hearing adults?

    Speech perception for both hearing and deaf people involves an integrative process between auditory and lip-reading information. To disambiguate information from the lips, manual cues from Cued Speech may be added. Cued Speech (CS) is a system of manual aids developed to help deaf people understand speech clearly and completely through vision alone (Cornett, 1967). Within this system, both labial and manual information, as lone input sources, remain ambiguous; perceivers therefore have to combine both types of information to obtain one coherent percept. In this study, we examined how audio-visual (AV) integration is affected by the presence of manual cues and on which source of information (auditory, labial or manual) CS perceivers primarily rely. To address this issue, we designed an experiment using AV McGurk stimuli (audio /pa/ and lip-reading /ka/) produced with or without manual cues. The manual cue was congruent with either the auditory information, the lip information or the expected fusion. Participants were asked to repeat the perceived syllable aloud. Their responses were then classified into four categories: audio (when the response was /pa/), lip-reading (when the response was /ka/), fusion (when the response was /ta/) and other (any other response). Data were collected from hearing-impaired individuals who were CS experts (all of whom had either cochlear implants or binaural hearing aids; N = 8), hearing individuals who were CS experts (N = 14) and hearing individuals who were completely naïve to CS (N = 15). Results confirmed that, like hearing people, deaf people can merge auditory and lip-reading information into a single unified percept. Without manual cues, McGurk stimuli induced the same percentage of fusion responses in both groups. Results also suggest that manual cues can modify AV integration and that their impact differs between hearing and deaf people.
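    The four-way response coding lends itself to a direct implementation. The snippet below is a hypothetical re-creation of that coding step (the group names and responses are invented examples, not the collected data): it maps each repeated syllable to audio, lip-reading, fusion or other, then tallies the proportion of fusion responses per group.

```python
# Response coding for the McGurk task (audio /pa/ dubbed onto visual /ka/).
from collections import Counter

CATEGORIES = {"pa": "audio", "ka": "lip-reading", "ta": "fusion"}

def classify(response: str) -> str:
    """Map a repeated syllable onto one of the four response categories."""
    return CATEGORIES.get(response.strip().lower(), "other")

# Invented example responses, keyed by participant group.
responses = {
    "deaf CS experts":    ["ta", "pa", "ta", "ka"],
    "hearing CS experts": ["ta", "ta", "pa", "ta"],
    "hearing CS-naive":   ["ta", "pa", "ta", "ta"],
}

for group, syllables in responses.items():
    counts = Counter(classify(s) for s in syllables)
    fusion_rate = counts["fusion"] / len(syllables)
    print(f"{group}: {dict(counts)}, fusion rate = {fusion_rate:.0%}")
```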

    Impact of Cued Speech on audio-visual speech integration in deaf and hearing adults

    For hearing and deaf people, speech perception involves an integrative process between auditory and lip-read information. To disambiguate information from the lips, manual cues may be added (Cued Speech). We examined how audio-visual integration is affected by the presence of manual cues. To address this issue, we designed an original experiment using audio-visual McGurk stimuli produced with manual cues. The manual cue was congruent with either the auditory information, the lip information or the expected fusion. Our results suggest that manual cues can modify audio-visual integration, and that their impact depends on auditory status.

    Modulation of attentional orienting by threat-relevant stimuli in infancy


    Electrophysiological study of integration process in French Cued Speech


    Spatial attentional biases to threat-relevant stimuli in infancy
