4 research outputs found

    Exploring virtual reality object perception following sensory-motor interactions with different visuo-haptic collider properties.

    Interacting with the environment often requires the integration of visual and haptic information. Notably, perceiving external objects depends on how our brain binds sensory inputs into a unitary experience. The feedback provided by objects when we interact with them (through our movements) may therefore influence our perception. In VR, the interaction with an object can be dissociated from the size of the object itself by means of 'colliders' (interactive spaces surrounding the objects). The present study investigates possible after-effects in size discrimination for virtual objects after prolonged exposure to an interaction characterized by visual and haptic incongruencies. Ninety-six participants took part in this virtual reality study. They were distributed into four groups and performed a size discrimination task between two cubes before and after 15 min of a visuomotor task involving interaction with the same virtual cubes. Each group interacted with a different cube, in which the visual (normal vs. small collider) and haptic (vibration vs. no vibration) features were manipulated. The quality of interaction (number of touches and trials performed) served as the dependent variable for performance in the visuomotor task. To measure bias in size perception, we compared changes in the point of subjective equality (PSE) before and after the task across the four groups. The results showed that the small visual collider decreased manipulation performance, regardless of the presence or absence of the haptic signal. However, a change in PSE was found only in the group exposed to the small visual collider with haptic feedback, resulting in an increase in the perceived size of the cube. This after-effect was absent in the visual-only incongruency condition, suggesting that haptic information and multisensory integration played a crucial role in inducing the perceptual change. The results are discussed in light of recent findings on visual-haptic integration during multisensory information processing in real and virtual environments.
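    For illustration only (not the authors' analysis code): in a size discrimination task of this kind, the PSE is typically estimated by fitting a psychometric function to the proportion of "comparison judged larger" responses across comparison sizes, and a shift of the 50% point from pre- to post-task indicates a bias in perceived size. The sketch below assumes a logistic function; the sizes and response proportions are hypothetical.

```python
# Illustrative sketch (not the authors' analysis code): estimating the PSE
# by fitting a logistic psychometric function to size-discrimination data.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    """Probability of judging the comparison cube as larger than the standard."""
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

# Hypothetical data: comparison cube sizes (cm) and proportion of "larger" responses.
sizes = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0])
p_larger_pre  = np.array([0.05, 0.15, 0.40, 0.55, 0.80, 0.90, 0.97])
p_larger_post = np.array([0.02, 0.10, 0.25, 0.45, 0.70, 0.85, 0.95])

(pse_pre, _), _  = curve_fit(logistic, sizes, p_larger_pre,  p0=[5.5, 0.5])
(pse_post, _), _ = curve_fit(logistic, sizes, p_larger_post, p0=[5.5, 0.5])

# A pre/post shift of the 50% point is read as a change in perceived size;
# its sign depends on which cube serves as the standard in the comparison.
print(f"PSE pre = {pse_pre:.2f} cm, post = {pse_post:.2f} cm, "
      f"shift = {pse_post - pse_pre:.2f} cm")
```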

    Spatial tactile localization depends on sensorimotor binding: preliminary evidence from virtual reality.

    Our brain continuously maps our body in space. It has been suggested that at least two main frames of reference are used to process somatosensory stimuli presented on our own body: the anatomical frame of reference (based on the somatotopic representation of the body in the somatosensory cortex) and the spatial frame of reference (in which body parts are mapped in external space). Interestingly, a mismatch between somatotopic and spatial information significantly affects the processing of bodily information, as demonstrated by the "crossed hands" effect. However, it is not clear whether this impairment occurs only when the conflict between these frames of reference is produced by a static change in body position (e.g., by crossing the hands) or also when new associations between motor and sensory responses are artificially created (e.g., by presenting feedback stimuli on a side of the body that is not involved in the movement). In the present study, 16 participants performed a temporal order judgment task before and after a congruent or incongruent visual-tactile-motor task in virtual reality. During the VR task, participants had to move a cube using a virtual stick. In the congruent condition, the haptic feedback during the interaction with the cube was delivered to the right hand (the one used to control the stick). In the incongruent condition, the haptic feedback was delivered to the contralateral hand, simulating a sort of 'active' crossed feedback during the interaction. Using a psychophysical approach, the point of subjective equality (PSE, i.e., the stimulus onset asynchrony at which participants are equally likely to report either side as stimulated first) and the just noticeable difference (JND, an index of temporal precision) were calculated for both conditions, before and after the VR task. After the VR task, compared to the baseline condition, the PSE shifted toward the hand that received the haptic feedback during the interaction (toward the right hand in the congruent condition and toward the left hand in the incongruent condition). This study demonstrates the possibility of inducing spatial biases in the processing of bodily information by modulating the sensory-motor association between stimuli in virtual environments, while keeping the actual position of the body in space constant.
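    For illustration only (an assumed analysis, not taken from the paper): in a temporal order judgment task, PSE and JND are commonly obtained by fitting a cumulative Gaussian to the proportion of "right hand first" responses as a function of stimulus onset asynchrony (SOA); the 50% point gives the PSE and the spread of the curve gives the JND. The SOAs and response proportions below are hypothetical.

```python
# Illustrative sketch (assumed analysis, not the authors' code): PSE and JND
# from a temporal order judgment task, fitting a cumulative Gaussian over SOA.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(soa, pse, sigma):
    """Probability of reporting 'right hand first' as a function of SOA (ms).
    Negative SOA: left stimulus leads; positive SOA: right stimulus leads."""
    return norm.cdf(soa, loc=pse, scale=sigma)

soas = np.array([-120, -60, -30, 0, 30, 60, 120])            # hypothetical SOAs (ms)
p_right_first = np.array([0.05, 0.20, 0.35, 0.55, 0.70, 0.85, 0.95])

(pse, sigma), _ = curve_fit(cum_gauss, soas, p_right_first, p0=[0.0, 50.0])

# PSE: SOA at which both orders are reported equally often (50% point).
# JND: half the distance between the 25% and 75% points = sigma * z(0.75).
jnd = sigma * norm.ppf(0.75)
print(f"PSE = {pse:.1f} ms, JND = {jnd:.1f} ms")
```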

    Speaking in front of cartoon avatars: A behavioral and psychophysiological study on how audience design impacts on public speaking anxiety in virtual environments

    Public speaking anxiety is defined as a strong difficulty in speaking in front of an audience and has been shown to impair work performance and social relationships. Virtual Reality (VR) offers an efficient tool for modulating public speaking anxiety through a wide range of customizations of the environmental settings. However, research still needs to clarify which features of the simulated environment matter most for increasing or reducing participants' perceived discomfort. The present study investigates the role of visual (human vs. cartoon characters) and acoustic (human vs. robotic voice) audience features on perceived anxiety, sense of presence, and perceived realism in an interactive VR public speaking scenario. Forty-two participants (mean age = 24 years; 30 females) performed four public speaking sessions characterized by different levels (high vs. low) of graphic and acoustic audience design. Both explicit measures (questionnaires) and implicit physiological measures (electrodermal activity, EDA) collected during the audience interaction were used to assess the participants' experience. The results showed that the features of the simulated audience played a crucial role in perceived anxiety during virtual public speaking. Specifically, the more realistic graphic and acoustic stimuli resulted in higher self-reported anxiety than the less realistic ones. However, experienced realism and the sense of presence appeared more affected by the graphical than by the acoustic features of the virtual environment. By contrast, the acoustic features affected the perceived realism of the interaction with the virtual audience. Interestingly, the robotic voice (lower acoustic realism) increased the electrodermal response during the interaction with the audience, interpreted as a break in the sense of presence. A positive correlation was found between anxiety, sense of presence, and experienced realism, and perceived anxiety also correlated with electrodermal activity during the performance. Nevertheless, physiological activity was more affected by the first session than by the realism features, suggesting a habituation effect across the repeated sessions. Taken together, our results show that the multisensory (graphical and acoustic) features of the virtual environment play a fundamental role in creating realistic public speaking experiences and might be used within gamification strategies for soft-skill training (e.g., for reducing public speaking anxiety).
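    For illustration only (a generic, assumed analysis, not the authors' pipeline): the reported link between self-reported anxiety and electrodermal activity could be examined by summarizing EDA per session (e.g., mean skin conductance level) and correlating it with questionnaire scores. All names and values below are hypothetical.

```python
# Illustrative sketch (assumed, not the authors' pipeline): relating per-session
# EDA summaries to self-reported anxiety scores with a Pearson correlation.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-participant values for one public-speaking session.
mean_eda = np.array([2.1, 3.4, 1.8, 4.0, 2.9, 3.3])   # mean skin conductance (microsiemens)
anxiety  = np.array([35, 52, 30, 61, 47, 50])          # questionnaire anxiety scores

r, p = pearsonr(anxiety, mean_eda)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")   # positive r: higher anxiety, higher arousal
```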