
    Effects of stimulus duration on audio-visual synchrony perception

    The integration of visual and auditory inputs in the human brain occurs only if the components are perceived in temporal proximity, that is, when the intermodal time difference falls within the so-called subjective synchrony range. We used the midpoint of this range to estimate the point of subjective simultaneity (PSS). We measured the PSS for audio-visual (AV) stimuli in a synchrony judgment task, in which subjects judged a given AV stimulus using three response categories (audio first, synchronous, video first). The relevant stimulus manipulation was the duration of the auditory and visual components. Results for unimodal auditory and visual stimuli have shown that the perceived onset shifts to relatively later positions with increasing stimulus duration. These unimodal shifts should be reflected in changing PSS values when AV stimuli with different durations of the auditory and visual components are used. The results for 17 subjects indeed showed a significant shift of the PSS for different duration combinations of the stimulus components. Because the shifts were approximately equal for duration changes in either of the components, no net shift of the PSS was observed as long as the durations of the two components were equal. This result indicates the need to appropriately account for unimodal timing effects when quantifying intermodal synchrony perception.
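    As a concrete illustration of deriving a PSS from the midpoint of the subjective synchrony range, the Python sketch below assumes (purely hypothetically) that the range boundaries are taken as the 50% crossing points of the proportion of "synchronous" responses plotted against SOA; the SOA grid, response proportions, and linear-interpolation step are illustrative assumptions, not values or procedures taken from the paper.

        import numpy as np

        # Hypothetical synchrony-judgment data: SOA in ms (negative = audio first)
        # and the proportion of "synchronous" responses at each SOA.
        soa = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
        p_sync = np.array([0.05, 0.30, 0.80, 0.95, 0.85, 0.45, 0.10])

        def crossing(x, y, level=0.5):
            """Linearly interpolate the x value where y crosses `level`."""
            for i in range(len(y) - 1):
                y0, y1 = y[i], y[i + 1]
                if (y0 - level) * (y1 - level) <= 0 and y0 != y1:
                    return x[i] + (level - y0) * (x[i + 1] - x[i]) / (y1 - y0)
            return None

        peak = int(np.argmax(p_sync))
        lower = crossing(soa[: peak + 1], p_sync[: peak + 1])  # rising flank
        upper = crossing(soa[peak:], p_sync[peak:])            # falling flank

        # PSS estimated as the midpoint of the subjective synchrony range.
        pss = (lower + upper) / 2.0
        print(f"synchrony range: [{lower:.1f}, {upper:.1f}] ms, PSS: {pss:.1f} ms")

    With these illustrative numbers the synchrony range runs from roughly -160 ms to +188 ms, giving a PSS near +14 ms; any shift of either flank (for example, due to a changed component duration) moves the estimated PSS accordingly.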

    Audiovisual Segregation in Cochlear Implant Users

    It has traditionally been assumed that cochlear implant users necessarily perform atypically in audiovisual tasks. However, a recent study that combined an auditory task with visual distractors suggests that only those cochlear implant users who are not proficient at recognizing speech sounds might show abnormal audiovisual interactions. The present study aims to reinforce this notion by investigating the audiovisual segregation abilities of cochlear implant users in a visual task with auditory distractors. Speechreading was assessed in two groups of cochlear implant users (proficient and non-proficient at sound recognition), as well as in normal controls. A visual speech recognition task (i.e. speechreading) was administered either in silence or in combination with three types of auditory distractors: (i) noise, (ii) reversed speech sounds, and (iii) non-altered speech sounds. Cochlear implant users proficient at speech recognition performed like normal controls in all conditions, whereas non-proficient users showed significantly different audiovisual segregation patterns in both speech conditions. These results confirm that normal-like audiovisual segregation is possible in highly skilled cochlear implant users and, consequently, that proficient and non-proficient CI users cannot be lumped into a single group. This important feature must be taken into account in further studies of audiovisual interactions in cochlear implant users.

    Multisensory Integration and Attention in Autism Spectrum Disorder: Evidence from Event-Related Potentials

    Successful integration of various simultaneously perceived perceptual signals is crucial for social behavior. Recent findings indicate that this multisensory integration (MSI) can be modulated by attention. Theories of Autism Spectrum Disorders (ASDs) suggest that MSI is affected in this population, but it remains unclear to what extent this is related to impairments in attentional capacity. In the present study, event-related potentials (ERPs) following emotionally congruent and incongruent face-voice pairs were measured in 23 high-functioning adult individuals with ASD and 24 age- and IQ-matched controls. MSI was studied while the attention of the participants was manipulated. ERPs were measured at typical auditory and visual processing peaks, namely the P2 and N170. While controls showed MSI during divided-attention and easy selective-attention tasks, individuals with ASD showed MSI during easy selective-attention tasks only. It was concluded that individuals with ASD are able to process multisensory emotional stimuli, but that this processing is modulated differently by attention mechanisms in these participants, especially those associated with divided attention. This atypical interaction between attention and MSI is also relevant to treatment strategies, with training of multisensory attentional control possibly being more beneficial than conventional sensory integration therapy.

    Sensory information in perceptual-motor sequence learning: visual and/or tactile stimuli

    Sequence learning in serial reaction time (SRT) tasks has been investigated mostly with unimodal stimulus presentation. This approach disregards the possibility that sequence acquisition may be guided by multiple sources of sensory information simultaneously. In the current study we trained participants in an SRT task with visual-only, tactile-only, or bimodal (visual and tactile) stimulus presentation. Sequence performance for the bimodal and visual-only training groups was similar, while both performed better than the tactile-only training group. In a subsequent transfer phase, participants from all three training groups were tested in conditions with visual, tactile, and bimodal stimulus presentation. Sequence performance of the visual-only and bimodal training groups was again highly similar across these identical stimulus conditions, indicating that the addition of tactile stimuli did not benefit the bimodal training group. Additionally, comparing across identical stimulus conditions in the transfer phase showed that the poorer sequence performance of the tactile-only group during training probably did not reflect a difference in sequence learning but rather a difference in the expression of sequence knowledge.

    Intermodal attention affects the processing of the temporal alignment of audiovisual stimuli

    The temporal asynchrony between inputs to different sensory modalities has been shown to be a critical factor influencing the interaction between such inputs. We used scalp-recorded event-related potentials (ERPs) to investigate the effects of attention on the processing of audiovisual multisensory stimuli as the temporal asynchrony between the auditory and visual inputs varied across the audiovisual integration window (i.e., up to 125 ms). Randomized streams of unisensory auditory stimuli, unisensory visual stimuli, and audiovisual stimuli (consisting of the temporally proximal presentation of the visual and auditory stimulus components) were presented centrally while participants attended to either the auditory or the visual modality to detect occasional target stimuli in that modality. ERPs elicited by each of the contributing sensory modalities were extracted by signal processing techniques from the combined ERP waveforms elicited by the multisensory stimuli. This was done for each of the five different 50-ms subranges of stimulus onset asynchrony (SOA: e.g., V precedes A by 125–75 ms, by 75–25 ms, etc.). The extracted ERPs for the visual inputs of the multisensory stimuli were compared with each other and with the ERPs to the unisensory visual control stimuli, separately for when attention was directed to the visual or to the auditory modality. The results showed that the attention effect on the right-hemisphere visual P1 was largest when auditory and visual stimuli were temporally aligned. In contrast, the N1 attention effect was smallest at this latency, suggesting that attention may play a role in the processing of the relative temporal alignment of the constituent parts of multisensory stimuli. At longer latencies an occipital selection negativity for the attended versus unattended visual stimuli was also observed, but this effect did not vary as a function of SOA, suggesting that by that latency a stable representation of the auditory and visual stimulus components had been established.
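    To make the extraction step concrete: one common (but here merely assumed) approach is an additive-model subtraction, in which the unisensory auditory ERP is subtracted from the audiovisual ERP to estimate the visual contribution, which can then be compared across attention conditions within a latency window such as the visual P1. The sketch below uses synthetic placeholder waveforms and a hypothetical 80–120 ms window; it is not the specific signal-processing technique applied in the study.

        import numpy as np

        # Time axis: -100 to 500 ms around visual onset, 500 Hz sampling (assumed values).
        fs = 500
        t = np.arange(-0.1, 0.5, 1.0 / fs)

        def gaussian_bump(t, latency, width, amplitude):
            """Synthetic ERP component, used only as placeholder data."""
            return amplitude * np.exp(-0.5 * ((t - latency) / width) ** 2)

        # Placeholder averaged waveforms (microvolts) for one SOA subrange.
        erp_a           = gaussian_bump(t, 0.10, 0.02, 1.0)          # unisensory auditory
        erp_av_attended = erp_a + gaussian_bump(t, 0.10, 0.02, 2.0)  # AV, visual modality attended
        erp_av_unattend = erp_a + gaussian_bump(t, 0.10, 0.02, 1.2)  # AV, auditory modality attended

        def extract_visual_erp(erp_av, erp_a):
            """Additive-model estimate of the visual contribution: AV minus auditory (assumed simplification)."""
            return erp_av - erp_a

        def mean_amplitude(erp, t, window=(0.08, 0.12)):
            """Mean amplitude in a latency window, roughly the visual P1 here."""
            mask = (t >= window[0]) & (t <= window[1])
            return erp[mask].mean()

        p1_attended   = mean_amplitude(extract_visual_erp(erp_av_attended, erp_a), t)
        p1_unattended = mean_amplitude(extract_visual_erp(erp_av_unattend, erp_a), t)
        print(f"P1 attention effect (attended - unattended): {p1_attended - p1_unattended:.2f} uV")

    Repeating such a comparison for each SOA subrange would yield an attention effect as a function of audiovisual asynchrony, which is the kind of contrast the study reports for the P1 and N1.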

    The COGs (context, object, and goals) in multisensory processing

    Our understanding of how perception operates in real-world environments has been substantially advanced by studying both multisensory processes and “top-down” control processes that influence sensory processing via activity from higher-order brain areas, such as attention, memory, and expectations. As the two topics have traditionally been studied separately, the mechanisms orchestrating real-world multisensory processing remain unclear. Past work has revealed that the observer’s goals gate the influence of many multisensory processes on brain and behavioural responses, whereas some other multisensory processes might occur independently of these goals. Consequently, other forms of top-down control beyond goal dependence are necessary to explain the full range of multisensory effects currently reported at the brain and the cognitive level. These forms of control include sensitivity to stimulus context as well as the detection of matches (or lack thereof) between a multisensory stimulus and the categorical attributes of naturalistic objects (e.g. tools, animals). In this review we discuss and integrate the existing findings that demonstrate the importance of such goal-, object-, and context-based top-down control over multisensory processing. We then put forward a few principles emerging from this literature review with respect to the mechanisms underlying multisensory processing and discuss their possible broader implications.

    Spatial tuning of tactile attention modulates visual processing within hemifields: an ERP investigation of crossmodal attention

    Recent event-related brain potential (ERP) studies have revealed crossmodal links in spatial attention, but have not yet investigated differences in the spatial tuning of attention between task-relevant and task-irrelevant modalities. We studied the spatial distribution of attention in vision under conditions where participants were instructed to attend to the left or right hand in order to detect infrequent targets, and to entirely ignore visual stimuli presented via LEDs at two eccentricities in the left or right hemifield. Hands were located close to two of these four LEDs in different blocks. Visual N1 amplitudes were enhanced when visual stimuli in the cued hemifield were close to the attended hand, relative to visual stimuli presented at the other location on the same side. These within-hemifield attentional modulations of visual processing demonstrate that crossmodal attention is not distributed diffusely across an entire hemifield. Instead, the spatial tuning of tactile attention transfers crossmodally to affect vision, consistent with spatial selection at a multimodal level of representation.