
    Atypical audiovisual speech integration in infants at risk for autism

    The language difficulties often seen in individuals with autism might stem from an inability to integrate audiovisual information, a skill important for language development. We investigated whether 9-month-old siblings of older children with autism, who are at an increased risk of developing autism, are able to integrate audiovisual speech cues. We used an eye-tracker to record where infants looked when shown a screen displaying two faces of the same model, one articulating /ba/ and the other /ga/, with one face congruent with the syllable sound presented simultaneously and the other face incongruent. This method successfully showed that infants at low risk can integrate audiovisual speech: they looked for the same amount of time at the mouths in both the fusible visual /ga/ - audio /ba/ display and the congruent visual /ba/ - audio /ba/ display, indicating that in the incongruent condition the auditory and visual streams fuse into a McGurk-type syllabic percept. It also showed that low-risk infants could perceive a mismatch between auditory and visual cues: they looked longer at the mouth in the mismatched, non-fusible visual /ba/ - audio /ga/ display than in the congruent visual /ga/ - audio /ga/ display, demonstrating that they perceive an uncommon, and therefore interesting, speech-like percept when looking at the incongruent mouth (repeated-measures ANOVA, displays x fusion/mismatch conditions interaction: F(1,16) = 17.153, p = 0.001). The looking behaviour of high-risk infants did not differ according to the type of display, suggesting difficulties in matching auditory and visual information (repeated-measures ANOVA, displays x conditions interaction: F(1,25) = 0.09, p = 0.767), in contrast to low-risk infants (repeated-measures ANOVA, displays x conditions x low/high-risk groups interaction: F(1,41) = 4.466, p = 0.041). In some cases this reduced ability might lead to the poor communication skills characteristic of autism.
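
    To make the reported statistics concrete, an interaction test of the kind quoted above can be run with a standard 2 x 2 repeated-measures ANOVA, as in the sketch below. It is illustrative only: the data-frame layout, column names, and simulated looking times are assumptions, not the authors' materials (the 17 simulated infants merely match the F(1,16) degrees of freedom reported for the low-risk group).

```python
# Minimal sketch of a 2 x 2 repeated-measures ANOVA like the one
# reported above (display x fusion/mismatch condition on looking
# time at the mouth). All names and values here are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for infant in range(1, 18):  # 17 infants, matching the F(1,16) df
    for display in ("ba_face", "ga_face"):
        for condition in ("fusion", "mismatch"):
            # Simulated looking time (seconds) at the mouth region.
            rows.append({"infant": infant, "display": display,
                         "condition": condition,
                         "look_time": rng.normal(2.0, 0.4)})
df = pd.DataFrame(rows)

# The display:condition row of the printed table is the interaction
# F-test of the kind quoted in the abstract.
print(AnovaRM(df, depvar="look_time", subject="infant",
              within=["display", "condition"]).fit())
```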

    Image Quality Assessment Using Spatial Frequency Component

    Image quality assessment (IQA) is a crucial technique in perceptual image/video coding: it serves both as a yardstick for evaluating coding algorithms and as a metric for rate-distortion optimization during coding. In this paper, motivated by the fact that distortions of both global and local information influence perceptual image quality, we propose a novel IQA method that inspects this information in the spatial frequency components of the image. The distortion of the global information, found mostly in the low spatial frequencies, is measured by a rectified mean absolute difference metric, and the distortion of the local information, found mostly in the high spatial frequencies, is measured by SSIM. These two measurements are combined using a newly proposed abruptness weighting that describes the uniformity of the residual image. Experimental results on the LIVE database show that the proposed metric outperforms SSIM and achieves performance competitive with state-of-the-art metrics. © 2009 Springer-Verlag Berlin Heidelberg.
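
    The two-band idea in this abstract (global distortion from the low frequencies via a rectified mean absolute difference, local distortion from the high frequencies via SSIM, combined by an abruptness weight) can be sketched roughly as follows. This is a minimal sketch under stated assumptions, not the authors' implementation: the Gaussian band split, the rectification into a [0, 1] score, and the variance-based weight `w` are all stand-ins, since the paper's exact formulations are not reproduced here.

```python
# Rough sketch of the two-band IQA idea described above. The band
# split, the "rectified" MAD score, and the abruptness weight are
# assumptions; the paper's exact formulations may differ.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import structural_similarity

def two_band_iqa(ref: np.ndarray, dist: np.ndarray, sigma: float = 3.0) -> float:
    """Quality score in [0, 1] for an 8-bit grayscale image pair."""
    ref, dist = ref.astype(np.float64), dist.astype(np.float64)

    # Global information lives mostly in the low spatial frequencies:
    # compare low-pass versions with a mean absolute difference,
    # rectified here into a [0, 1] similarity score.
    ref_low, dist_low = gaussian_filter(ref, sigma), gaussian_filter(dist, sigma)
    mad = np.mean(np.abs(ref_low - dist_low))
    global_score = 1.0 - min(mad / 255.0, 1.0)

    # Local information lives mostly in the high spatial frequencies:
    # compare the residual (high-pass) images with SSIM.
    ref_high, dist_high = ref - ref_low, dist - dist_low
    drange = float(max(ref_high.max(), dist_high.max())
                   - min(ref_high.min(), dist_high.min())) or 1.0
    local_score = structural_similarity(ref_high, dist_high, data_range=drange)

    # Stand-in "abruptness" weight: how non-uniform the residual image
    # is, expressed as a normalized variance in [0, 1).
    residual = np.abs(ref - dist)
    w = residual.var() / (residual.var() + residual.mean() ** 2 + 1e-12)

    return w * local_score + (1.0 - w) * global_score
```

    In this sketch, a larger `sigma` routes more of the image into the "global" band; the actual split and weighting would be the ones the paper tuned and validated on the LIVE database.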

    Speech and Non-Speech Audio-Visual Illusions: A Developmental Study

    It is well known that simultaneous presentation of incongruent audio and visual stimuli can lead to illusory percepts. Recent data suggest that distinct processes underlie intersensory perception of speech as opposed to non-speech stimuli. However, the development of both speech and non-speech intersensory perception across childhood and adolescence remains poorly defined. Thirty-eight observers aged 5 to 19 were tested on the McGurk effect (an audio-visual illusion involving speech) and on the Illusory Flash and Fusion effects (two audio-visual illusions not involving speech) to investigate the development of audio-visual interactions and to contrast speech vs. non-speech developmental patterns. Whereas the strength of audio-visual speech illusions varied as a direct function of maturational level, performance on the non-speech illusory tasks was homogeneous across all ages. These data support the existence of independent maturational processes underlying speech and non-speech audio-visual illusory effects.

    School-aged children can benefit from audiovisual semantic congruency during memory encoding

    Although we live in a multisensory world, children's memory has usually been studied by concentrating on only one sensory modality at a time. In this study, we investigated how audiovisual encoding affects recognition memory. Children (n = 114) from three age groups (8, 10 and 12 years) memorized auditory or visual stimuli presented during encoding with a semantically congruent, incongruent or non-semantic stimulus in the other modality. Subsequent recognition memory performance was better for auditory or visual stimuli initially presented together with a semantically congruent stimulus in the other modality than for stimuli accompanied by a non-semantic stimulus. This congruency effect was observed for pictures presented with sounds, for sounds presented with pictures, for spoken words presented with pictures, and for written words presented with spoken words. The present results show that semantically congruent multisensory experiences during encoding can improve memory performance in school-aged children.

    Sound location can influence audiovisual speech perception when spatial attention is manipulated

    Audiovisual speech perception has been considered to operate independently of sound location, since the McGurk effect (altered auditory speech perception caused by conflicting visual speech) has been shown to be unaffected by whether speech sounds are presented in the same location as a talking face or in a different one. Here we show that sound-location effects arise when spatial attention is manipulated. Sounds were presented from loudspeakers in five locations: the centre (the location of the talking face) and 45°/90° to the left/right. Auditory spatial attention was focused on a location by presenting the majority (90%) of sounds from it. In Experiment 1, the majority of sounds emanated from the centre, and the McGurk effect was enhanced there. In Experiment 2, the major location was 90° to the left, causing the McGurk effect to be stronger on the left and centre than on the right. Under control conditions, when sounds were presented with equal probability from all locations, the McGurk effect tended to be stronger for sounds emanating from the centre, but this tendency was not reliable. Additionally, reaction times were shortest for a congruent audiovisual stimulus, independent of location. Our main finding is that sound location can modulate audiovisual speech perception and that spatial attention plays a role in this modulation.

    Processing of changes in visual speech in the human auditory cortex

    Seeing a talker's articulatory gestures may affect the observer's auditory speech percept. Observing congruent articulatory gestures may enhance the recognition of speech sounds [J. Acoust. Soc. Am. 26 (1954) 212], whereas observing incongruent gestures may change the auditory percept phonetically, as occurs in the McGurk effect [Nature 264 (1976) 746]: for example, simultaneous acoustic /ba/ and visual /ga/ are usually heard as /da/. We studied cortical processing of occasional changes in audiovisual and visual speech stimuli with magnetoencephalography. In the audiovisual experiment, congruent (acoustic /iti/, visual /iti/) and incongruent (acoustic /ipi/, visual /iti/) audiovisual stimuli, both of which were perceived as /iti/, were presented among congruent /ipi/ (acoustic /ipi/, visual /ipi/) stimuli. In the visual experiment, only the visual components of these stimuli were presented. A visual change activated the supratemporal auditory cortices bilaterally in both the audiovisual and visual experiments. The auditory-cortex activation to a visual change occurred later in the visual than in the audiovisual experiment, suggesting that interaction between modalities accelerates the detection of a visual change in speech.