4 research outputs found

    Audiovisual Integration Varies with Target and Environment Richness in Immersive Virtual Reality: Supplementary Material

    <p>We are continually bombarded by information arriving at each of our senses; however, the brain seems to effortlessly integrate this separate information into a unified percept. Although multisensory integration has been researched extensively using simple computer tasks and stimuli, much less is known about how multisensory integration functions in real-world contexts. Additionally, several recent studies have demonstrated that multisensory integration varies tremendously across naturalistic stimuli. Virtual reality can be used to study multisensory integration in realistic settings because it combines realism with precise control over the environment and stimulus presentation. In the current study, we investigated whether multisensory integration as measured by the redundant signals effect (RSE) is observable in naturalistic environments using virtual reality and whether it differs as a function of target and/or environment cue-richness. Participants detected auditory, visual, and audiovisual targets that varied in cue-richness within three distinct virtual worlds that also varied in cue-richness. We demonstrated integrative effects in each environment-by-target pairing and further showed a modest effect on multisensory integration as a function of target cue-richness, but only in the cue-rich environment. Our study is the first to definitively show that minimal and more naturalistic tasks elicit comparable redundant signals effects. Our results also suggest that multisensory integration may function differently depending on the features of the environment. The results of this study have important implications for the design of virtual multisensory environments that are currently being used for training, educational, and entertainment purposes.</p>
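    The abstract does not spell out how the RSE was quantified, but a standard test for multisensory integration in redundant-target detection is Miller's race model inequality, which compares the audiovisual reaction-time CDF against the sum of the unisensory CDFs. Below is a minimal sketch of that test; the function names, grid resolution, and data layout are illustrative assumptions, not the authors' actual analysis code.

    ```python
    import numpy as np

    def ecdf(rts, t_grid):
        """Empirical CDF of reaction times evaluated on a time grid."""
        rts = np.sort(np.asarray(rts, dtype=float))
        return np.searchsorted(rts, t_grid, side="right") / rts.size

    def race_model_violation(rt_a, rt_v, rt_av, n_points=100):
        """Return the time points where the audiovisual CDF exceeds the
        race model bound F_A(t) + F_V(t) (Miller's inequality).
        Any such points indicate facilitation beyond what two
        independent unisensory 'racers' could produce."""
        all_rts = np.concatenate([rt_a, rt_v, rt_av])
        t_grid = np.linspace(all_rts.min(), all_rts.max(), n_points)
        bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
        f_av = ecdf(rt_av, t_grid)
        return t_grid[f_av > bound]
    ```

    An empty result is consistent with a race model (statistical facilitation only); violations in the fast tail of the distribution are the usual evidence for integration.
    
    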

    Features of the psychometric function.

    <p>Individual participant data were fit with a psychometric function for each perceptual load. The resulting mean PSS (A), nJND (B), and pJND (C) are shown grouped by the modality of the distractor task. Both the nJND and pJND, but not the PSS, increased with increasing load. No significant effects of distractor modality were found. Error bars represent SEM. * indicates significant differences (p < .0125) compared to NL.</p>
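    The caption does not state the exact fitting procedure; a common choice for temporal-order-judgment data is a cumulative Gaussian whose mean gives the PSS and whose 25% and 75% points give the negative and positive JNDs. The sketch below assumes that model; the function names and starting values are my own illustrative choices.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def cum_gauss(soa, mu, sigma):
        """Cumulative Gaussian: modeled probability of a 'flash first'
        report at a given stimulus onset asynchrony (SOA)."""
        return norm.cdf(soa, loc=mu, scale=sigma)

    def fit_psychometric(soas, p_flash_first):
        """Fit a cumulative Gaussian to flash-first proportions and
        return the PSS (the fitted mean) plus the negative and positive
        JNDs, measured as distances from the PSS to the 25% and 75%
        points of the fitted curve."""
        (mu, sigma), _ = curve_fit(cum_gauss, soas, p_flash_first,
                                   p0=[0.0, 50.0])
        njnd = mu - norm.ppf(0.25, loc=mu, scale=sigma)
        pjnd = norm.ppf(0.75, loc=mu, scale=sigma) - mu
        return mu, njnd, pjnd
    ```

    Under this parameterization the two JNDs are symmetric (both equal about 0.674·sigma); fitting an asymmetric function such as a cumulative logistic with separate slopes would be needed for nJND and pJND to differ, as in the figure.
    
    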

    Percent flash first reports across SOA for the CTOJ task separated by visual versus auditory distractor tasks.

    <p>SOA significantly influenced the percentage of flash-first reports, with positive SOAs (visual leading) resulting in more visual-first reports. SOA and perceptual load significantly interacted for both distractor modalities, indicating that perceptual load modulates performance on the CTOJ task. Error bars represent the SEM. * indicates significant differences between NL and HL and/or NL and LL at the Bonferroni-corrected alpha level of p < .0018.</p>

    Performance on the visual and auditory distractor tasks.

    <p>Accuracy was lower for HL compared to LL for both visual and auditory distractors. Additionally, accuracy was higher for the visual distractor task than for the auditory distractor task. Error bars represent SEM. * indicates significant differences between LL and HL.</p>