
    Auditory-visual virtual reality as a diagnostic and therapeutic tool for cynophobia

    Traditionally, virtual reality exposure-based treatment concentrates primarily on the presentation of a high-fidelity visual experience. However, adequately combining the visual and the auditory experience provides a powerful tool to enhance sensory processing and modulate attention. We present the design and usability testing of an auditory-visual interactive environment for investigating virtual reality exposure-based treatment for cynophobia. The specificity of our application is that it involves 3D sound, allowing the presentation and spatial manipulation of a fearful stimulus in the auditory modality, in the visual modality, or both. We conducted an evaluation test with 10 dog-fearful participants in order to assess the capacity of our auditory-visual virtual environment to generate fear reactions. The specific perceptual characteristics of the dog model implemented in the virtual environment were highly arousing, suggesting that virtual reality is a promising tool to treat cynophobia.
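As an aside on the 3D-sound component: the study's audio engine is not described in the abstract, but the basic idea of placing a stimulus at an azimuth can be sketched with a constant-power stereo pan. Everything below (function name, angle convention) is illustrative, not the study's implementation:

```python
import math

def constant_power_pan(sample: float, azimuth_deg: float):
    """Constant-power stereo pan: -90 deg = full left, +90 deg = full right.
    A minimal stand-in for true binaural 3D sound spatialization."""
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2)  # map azimuth to [0, pi/2]
    return (sample * math.cos(theta), sample * math.sin(theta))

# A source at +45 deg (to the listener's right) is louder in the right channel
left, right = constant_power_pan(1.0, 45.0)
```

Constant-power panning keeps left² + right² constant, so perceived loudness does not dip as the virtual source is moved.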

    The Spatial Release of Cognitive Load in Cocktail Party Is Determined by the Relative Levels of the Talkers

    In a multi-talker situation, spatial separation between talkers reduces cognitive processing load: this is the “spatial release of cognitive load”. The present study investigated the role played by the relative levels of the talkers in this spatial release of cognitive load. During the experiment, participants had to report the speech emitted by a target talker in the presence of a concurrent masker talker. The spatial separation (0° and 120° angular distance in azimuth) and the relative levels of the talkers (adverse, intermediate, and favorable target-to-masker ratios) were manipulated. Cognitive load was assessed with prefrontal functional near-infrared spectroscopy. Data from 14 young normal-hearing listeners revealed that the target-to-masker ratio had a direct impact on the spatial release of cognitive load. Spatial separation significantly reduced prefrontal activity only for the intermediate target-to-masker ratio and had no effect on prefrontal activity for the favorable and adverse target-to-masker ratios. Therefore, the relative levels of the talkers might be a key factor determining the spatial release of cognitive load and, more specifically, the prefrontal activity induced by spatial cues in multi-talker environments.
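The three level conditions can be made concrete with the standard definition of target-to-masker ratio in dB (the RMS values below are hypothetical, not taken from the study):

```python
import math

def target_to_masker_ratio_db(target_rms: float, masker_rms: float) -> float:
    """Target-to-masker ratio (TMR) in dB; positive values favor the target."""
    return 20 * math.log10(target_rms / masker_rms)

# Hypothetical levels illustrating the three conditions
adverse = target_to_masker_ratio_db(1.0, 2.0)       # negative TMR, masker louder
intermediate = target_to_masker_ratio_db(1.0, 1.0)  # 0 dB, equal levels
favorable = target_to_masker_ratio_db(2.0, 1.0)     # positive TMR, target louder
```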

    Bimodal perception of audio-visual material properties for virtual environments

    High-quality rendering of both audio and visual material properties is very important in interactive virtual environments, since convincingly rendered materials increase realism and the sense of immersion. We studied how the level of detail of auditory and visual stimuli interact in the perception of audio-visual material rendering quality. Our study is based on perception of material discrimination, when varying the levels of detail of modal synthesis for sound, and of Bidirectional Reflectance Distribution Functions for graphics. We performed an experiment for two different models (a Dragon and a Bunny model) and two material types (Plastic and Gold). The results show a significant interaction between auditory and visual level of detail in the perception of material similarity, when comparing approximate levels of detail to a high-quality audio-visual reference rendering. We show how this result can contribute to significant savings in computation time in an interactive audio-visual rendering system. To our knowledge this is the first study which shows interaction of audio and graphics representation in a material perception task.

    Bimodal perception of audio-visual material properties for virtual environments

    INRIA Research Report 6687. High-quality rendering of both audio and visual material properties is very important in interactive virtual environments, since convincingly rendered materials increase realism and the sense of immersion. We studied how the level of detail of auditory and visual stimuli interact in the perception of audio-visual material rendering quality. Our study is based on perception of material discrimination, when varying the levels of detail of modal synthesis for sound, and Bidirectional Reflectance Distribution Functions for graphics. We performed an experiment for two different models (a Dragon and a Bunny model) and two material types (Plastic and Gold). The results show a significant interaction between auditory and visual level of detail in the perception of material similarity, when comparing approximate levels of detail to a high-quality audio-visual reference rendering. We show how this result can contribute to significant savings in computation time in an interactive audio-visual rendering system. To our knowledge this is the first study which shows interaction of audio and graphics representation in a material perception task.
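Modal synthesis, whose level of detail the study varies, models a struck object as a sum of exponentially damped sinusoids; dropping modes lowers the audio level of detail. A minimal sketch (the mode parameters are made up for illustration, not taken from the paper):

```python
import math

def modal_synthesis(modes, duration, sample_rate=44100):
    """Sum of damped sinusoids; each mode is (frequency_hz, damping_per_s, amplitude).
    Truncating the mode list is the audio level-of-detail knob."""
    n = int(duration * sample_rate)
    out = [0.0] * n
    for freq, damp, amp in modes:
        for i in range(n):
            t = i / sample_rate
            out[i] += amp * math.exp(-damp * t) * math.sin(2 * math.pi * freq * t)
    return out

# Hypothetical modes for a small struck object; keep fewer modes for lower LOD
modes = [(440.0, 8.0, 1.0), (1240.0, 15.0, 0.5), (2780.0, 30.0, 0.25)]
signal = modal_synthesis(modes, 0.05)
```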


    Auditory-visual virtual environments to treat dog phobia

    Session III - Virtual Reality Methodologies I. In this paper we present the design, development, and usability testing of an auditory-visual interactive environment for investigating virtual reality exposure-based treatment for cynophobia. The application is developed upon a framework that integrates different algorithms from the CROSSMOD project (www.crossmod.org). We discuss the ongoing work and preliminary observations, so as to further the development of auditory-visual environments for virtual reality. Traditionally, virtual reality concentrates primarily on the presentation of a high-fidelity visual experience. We aim at demonstrating that adequately combining the visual and the auditory experience provides a powerful tool to enhance sensory processing and modulate attention.

    The role of object categories in auditory-visual object recognition

    IRCAM internal reference: Suied08f. The influence of semantic congruence on auditory-visual object recognition was studied in a go/no-go task. We compared the effect of different object categories (animals and man-made objects) on reaction times. Experiments were run in a realistic virtual environment including 3D images and free-field audio. Participants were asked to react as fast as possible to a target object presented in the visual and/or the auditory modality, and to inhibit their response to a distractor object. Reaction times were significantly shorter for semantically congruent bimodal stimuli than would be predicted by independent processing of information about the auditory and the visual targets presented unimodally. Moreover, reaction times were significantly shorter for semantically congruent bimodal stimuli (i.e., visual and auditory targets) than for semantically incongruent bimodal stimuli (i.e., the target presented in only one sensory modality and a distractor presented in the other modality). A comparison of the interference effect across the different object categories is then detailed. These experiments provide new insight into the influence of object categories on the rules of auditory-visual integration.
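The "predicted by independent processing" comparison is usually formalized as a race-model bound on reaction-time distributions (Miller's inequality): if the audio and visual cues are processed independently, the bimodal CDF cannot exceed the sum of the two unimodal CDFs. A sketch with hypothetical reaction times, not the study's data:

```python
def ecdf(rts, t):
    """Empirical cumulative distribution of reaction times at time t."""
    return sum(rt <= t for rt in rts) / len(rts)

def race_model_bound(audio_rts, visual_rts, t):
    """Upper bound on the bimodal CDF under independent (race) processing."""
    return min(1.0, ecdf(audio_rts, t) + ecdf(visual_rts, t))

# Hypothetical reaction times in ms
audio = [320, 350, 380, 410]
visual = [300, 340, 370, 400]
bimodal = [250, 270, 290, 310]

# Bimodal CDF above the bound at some t indicates multisensory coactivation
violated = ecdf(bimodal, 300) > race_model_bound(audio, visual, 300)
```

With these numbers, ecdf(bimodal, 300) = 0.75 exceeds the race bound of 0.25, the kind of violation the abstract reports for semantically congruent stimuli.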

    Evaluating warning sound urgency with reaction times.
