
    Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported distinct effects of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV − V < A) were found for the auditory N1 and P2 in both the spatially congruent and incongruent conditions. The new finding is that this N1 suppression was greater for the spatially congruent stimuli. A very early audiovisual interaction was also found at 40–60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
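
    The additive-model comparison used above (AV − V vs. A) can be sketched in a few lines. The amplitudes below are synthetic placeholders, not data from the study; the point is only how the subtraction isolates the audiovisual interaction.

    ```python
    import numpy as np

    # Hypothetical single-trial N1 peak amplitudes (µV); values are illustrative.
    rng = np.random.default_rng(0)
    n1_auditory = rng.normal(-5.0, 1.0, size=200)     # A: sound alone
    n1_audiovisual = rng.normal(-7.0, 1.0, size=200)  # AV: sound + video
    n1_visual = rng.normal(-3.0, 1.0, size=200)       # V: video alone

    # Additive-model test: subtract the visual-only response from the
    # audiovisual response, then compare with the auditory-only response.
    av_minus_v = n1_audiovisual.mean() - n1_visual.mean()
    a = n1_auditory.mean()

    # N1 is a negative deflection, so "AV − V < A" means the corrected AV
    # response is smaller in magnitude (less negative) than A.
    suppression = abs(a) - abs(av_minus_v)
    print(f"AV-V = {av_minus_v:.2f} µV, A = {a:.2f} µV, "
          f"suppression = {suppression:.2f} µV")
    ```

    A positive `suppression` value corresponds to the sub-additive pattern reported in the abstract.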

    Suppression of the auditory N1 by visual anticipatory motion is modulated by temporal and identity predictability

    The amplitude of the auditory N1 component of the event-related potential (ERP) is typically suppressed when a sound is accompanied by visual anticipatory information that reliably predicts the timing and identity of the sound. While this visually induced suppression of the auditory N1 is considered an early electrophysiological marker of fulfilled prediction, it is not yet fully understood whether this internal predictive coding mechanism is driven primarily by the temporal characteristics or by the identity features of the anticipated sound. The current study examined the impact of temporal and identity predictability on suppression of the auditory N1 by visual anticipatory motion with an ecologically valid audiovisual event (a video of a handclap). Predictability of auditory timing and identity was manipulated in three conditions in which sounds were either played in isolation or in conjunction with a video that reliably predicted the timing of the sound, the identity of the sound, or both. The results showed that N1 suppression was largest when the video reliably predicted both the timing and identity of the sound, and reduced when either was unpredictable. The current results indicate that predictions of timing and identity are both essential elements for predictive coding in audition.

    Naso-temporal asymmetry in the N170 for processing faces in normal viewers but not in developmental prosopagnosia

    Some elementary aspects of faces can be processed before cortical maturation or after lesions of primary visual cortex. Recent findings suggesting a role for an evolutionarily ancient visual system in face processing have exploited the relative advantage of the temporal hemifield (nasal hemiretina). Here, we investigated whether face processing also shows a temporal-hemifield advantage under some circumstances. We measured the face-sensitive N170 to laterally presented faces viewed passively under monocular conditions and compared face recognition in the temporal and nasal hemiretinae. An N170 response for upright faces was observed that was larger for projections to the nasal hemiretina/temporal hemifield. This pattern was not observed in a developmental prosopagnosic. These results point to the importance of the early stages of face processing for normal face recognition abilities and suggest a potentially important factor in the origins of developmental prosopagnosia. © 2004 Elsevier Ireland Ltd. All rights reserved. Keywords: N170; Prosopagnosia; Naso-temporal asymmetry; Non-LGN based vision; Subcortical visual processing. Research on human visual abilities across the normal lifespan and after brain damage draws attention to visual abilities that are not based on pathways critically involving the lateral geniculate nucleus (LGN). Several studies point to such abilities, and evidence for vision not based on LGN-cortical pathways has also been obtained in a very different population: patients with hemineglect [21] and patients with complete unilateral lesions of striate cortex who show residual vision

    Auditory grouping occurs prior to intersensory pairing: evidence from temporal ventriloquism

    The authors examined how principles of auditory grouping relate to intersensory pairing. Two sounds that normally enhance sensitivity on a visual temporal order judgement task (i.e., temporal ventriloquism) were embedded in a sequence of flanker sounds that had either the same or a different frequency (Exp. 1), rhythm (Exp. 2), or location (Exp. 3). In all experiments, temporal ventriloquism occurred only when the two capture sounds differed from the flankers, demonstrating that grouping of the sounds in the auditory stream took priority over intersensory pairing. By combining principles of auditory grouping with intersensory pairing, we demonstrate that capture sounds were, counter-intuitively, more effective when their locations differed from those of the lights than when they came from the same position as the lights.
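
    The sensitivity enhancement in such a temporal order judgement (TOJ) task is usually quantified as a just-noticeable difference (JND): the smaller the JND, the better the observer discriminates which light came first. A minimal sketch, using made-up response proportions chosen to show the temporal-ventriloquism pattern (these are not the study's data):

    ```python
    import numpy as np

    # SOA = onset of light 2 minus onset of light 1 (ms); negative = light 2 first.
    soa = np.array([-75, -50, -25, 0, 25, 50, 75])

    # Hypothetical proportion of "light 2 second" responses per SOA.
    p_no_sound = np.array([0.08, 0.18, 0.35, 0.50, 0.65, 0.82, 0.92])
    p_capture = np.array([0.02, 0.08, 0.25, 0.50, 0.75, 0.92, 0.98])

    def jnd(soas, p):
        """Half the SOA distance between the 25% and 75% points (linear interp.)."""
        lo = np.interp(0.25, p, soas)
        hi = np.interp(0.75, p, soas)
        return (hi - lo) / 2

    print(f"JND without sounds: {jnd(soa, p_no_sound):.1f} ms")
    print(f"JND with capture sounds: {jnd(soa, p_capture):.1f} ms")  # smaller = sharper
    ```

    A steeper psychometric function in the capture condition yields a smaller JND, which is the operational signature of temporal ventriloquism here.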

    Neural correlates of audiovisual motion capture

    Visual motion can affect the perceived direction of auditory motion (i.e., audiovisual motion capture). It is debated, though, whether this effect occurs at perceptual or decisional stages. Here, we examined the neural consequences of audiovisual motion capture using the mismatch negativity (MMN), an event-related brain potential reflecting pre-attentive auditory deviance detection. In an auditory-only condition, occasional changes in the direction of a moving sound (deviant) elicited an MMN starting around 150 ms. In an audiovisual condition, auditory standards and deviants were synchronized with a visual stimulus that moved in the same direction as the auditory standards. These audiovisual deviants did not evoke an MMN, indicating that visual motion reduced the perceptual difference between the sound motion of standards and deviants. The inhibition of the MMN by visual motion provides evidence that auditory and visual motion signals are integrated at early sensory processing stages.
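
    The MMN referred to above is conventionally computed as a difference wave: the deviant ERP minus the standard ERP, measured in a window around the negative peak. A minimal sketch with synthetic waveforms (the shapes and amplitudes are illustrative, not data from the study):

    ```python
    import numpy as np

    fs = 500                               # sampling rate (Hz)
    t = np.arange(-0.1, 0.4, 1 / fs)       # epoch from -100 to 400 ms

    def erp(mmn_amp):
        """Synthetic auditory ERP: an N1-like peak plus an optional MMN-like
        negativity around 200 ms."""
        wave = -2.0 * np.exp(-((t - 0.1) ** 2) / (2 * 0.02 ** 2))
        wave += mmn_amp * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))
        return wave

    standard = erp(mmn_amp=0.0)
    deviant = erp(mmn_amp=-3.0)            # deviant carries an extra negativity

    # MMN = deviant minus standard, mean amplitude in a 150-250 ms window.
    diff = deviant - standard
    win = (t >= 0.15) & (t <= 0.25)
    mmn = diff[win].mean()
    print(f"MMN mean amplitude 150-250 ms: {mmn:.2f} µV")
    ```

    In the audiovisual condition described above, the finding is that this difference wave is abolished: the deviant-minus-standard subtraction yields no reliable negativity.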

    Comment on “Differential Effects of the Temporal and Spatial Distribution of Audiovisual Stimuli on Cross‐Modal Spatial Recalibration”

    Bruns et al. (2020) report new findings suggesting that the ventriloquism after-effect (VAE: an enduring shift of the perceived location of a sound toward a previously seen visual stimulus) and multisensory enhancement (ME: an improvement in the precision of sound localization) may dissociate depending on the rate at which exposure stimuli are presented. They reported that the VAE, but not the ME, was diminished when exposure stimuli were presented at 10 Hz rather than at 2 Hz. To the authors, this suggested that different neural structures underlie the VAE and ME. In our view, however, this needs to be tested more extensively because alternative and simpler explanations have not yet been ruled out.