
    Multimodality in VR: A survey

    Virtual reality (VR) is rapidly growing, with the potential to change the way we create and consume content. In VR, users integrate the multimodal sensory information they receive to create a unified perception of the virtual world. In this survey, we review the body of work addressing multimodality in VR, its role and benefits in user experience, together with the different applications that leverage multimodality across many disciplines. These works encompass several fields of research and demonstrate that multimodality plays a fundamental role in VR: enhancing the experience, improving overall performance, and yielding unprecedented abilities in skill and knowledge transfer.

    Perceptual Strategies and Neuronal Underpinnings underlying Pattern Recognition through Visual and Tactile Sensory Modalities in Rats

    The aim of my PhD project was to investigate multisensory perception and multimodal recognition abilities in the rat, to better understand the underlying perceptual strategies and neuronal mechanisms. I chose to carry out this project on the laboratory rat for two reasons. First, the rat is a flexible and highly accessible experimental model, in which it is possible to combine state-of-the-art neurophysiological approaches (such as multi-electrode neuronal recordings) with behavioral investigation of perception and, more generally, cognition. Second, extensive research concerning multimodal integration has already been conducted in this species, at both the neurophysiological and behavioral levels. My thesis work was organized into two projects: a psychophysical assessment of object categorization abilities in rats, and a neurophysiological study of neuronal tuning in the primary visual cortex of anaesthetized rats. In both experiments, unisensory (visual and tactile) and multisensory (visuo-tactile) stimulation was used for training and testing, depending on the task. The first project required the development of a new experimental rig for the study of object categorization in rats, using solid objects, so as to assess their recognition abilities under different modalities: vision, touch, and both together. The second project involved an electrophysiological study of rat primary visual cortex during visual, tactile, and visuo-tactile stimulation, with the aim of understanding whether any interaction between these modalities exists in an area that is mainly devoted to only one of them. The results of both studies are still preliminary, but they already offer some interesting insights into the defining features of these abilities.

    Multimodality in VR: A Survey

    Virtual reality has the potential to change the way we create and consume content in our everyday life. Entertainment, training, design and manufacturing, communication, and advertising are all applications that already benefit from this new medium reaching the consumer level. VR is inherently different from traditional media: it offers a more immersive experience and has the ability to elicit a sense of presence through the place and plausibility illusions. It also gives users unprecedented capabilities to explore their environment, in contrast with traditional media. In VR, as in the real world, users integrate the multimodal sensory information they receive to create a unified perception of the virtual world. Therefore, the sensory cues that are available in a virtual environment can be leveraged to enhance the final experience. This may include increasing realism or the sense of presence; predicting or guiding the attention of the user through the experience; or increasing their performance if the experience involves the completion of certain tasks. In this state-of-the-art report, we survey the body of work addressing multimodality in virtual reality, and its role and benefits in the final user experience. The works reviewed here thus encompass several fields of research, including computer graphics, human-computer interaction, and psychology and perception. Additionally, we give an overview of different applications that leverage multimodal input in areas such as medicine, training and education, and entertainment; we include works in which the integration of multiple sources of sensory information yields significant improvements, demonstrating how multimodality can play a fundamental role in the way VR systems are designed and VR experiences are created and consumed.

    Haptic and Audio-visual Stimuli: Enhancing Experiences and Interaction


    The role of visual experience in the emergence of cross-modal correspondences

    Cross-modal correspondences describe the widespread tendency for attributes in one sensory modality to be consistently matched to those in another modality. For example, high-pitched sounds tend to be matched to spiky shapes, small sizes, and high elevations. However, the extent to which these correspondences depend on sensory experience (e.g. regularities in the perceived environment) remains controversial. Two recent studies involving blind participants have argued that visual experience is necessary for the emergence of correspondences: such correspondences were present (although attenuated) in late blind individuals but absent in the early blind. Here, using a similar approach and a large sample of early and late blind participants (N=59) and sighted controls (N=63), we challenge this view. Examining five auditory-tactile correspondences, we show that only one requires visual experience to emerge (pitch-shape), two are independent of visual experience (pitch-size, pitch-weight), and two appear to emerge in response to blindness (pitch-texture, pitch-softness). These effects tended to be more pronounced in the early blind than in the late blind group, and the duration of vision loss among the late blind did not mediate the strength of these correspondences. Our results suggest that altered sensory input can affect cross-modal correspondences in a more complex manner than previously thought, and that these effects cannot be explained solely by a reduction in visually mediated environmental correlations. We propose roles for visual calibration, neuroplasticity, and structurally innate associations in accounting for our findings.