    Toward an Automatic Prediction of the Sense of Presence in Virtual Reality Environment

    In human-agent interaction, one key challenge is the evaluation of the user's experience. In the virtual reality domain, the sense of presence and co-presence, which reflects the psychological immersion of the user, is generally assessed through well-grounded subjective post-experience questionnaires. In this article, we present a new way to automatically predict a user's sense of presence and co-presence at the end of an interaction, based on automatically computed verbal and non-verbal behavioral cues. A random forest algorithm was applied to a human-agent interaction corpus collected in the specific context of a virtual environment developed to train doctors to break bad news to a virtual patient. The performance of the models demonstrates the capacity to automatically and accurately predict the level of presence and co-presence, and also shows the relevance of the verbal and non-verbal behavioral cues as objective measures of presence.
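
    A minimal sketch of the kind of pipeline this abstract describes: a random forest trained on per-interaction behavioral cues to predict a presence label. The features and labels below are synthetic stand-ins, not the authors' corpus.

    ```python
    # Hypothetical presence-prediction sketch; data are synthetic placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # columns stand in for cues such as speech rate, pause ratio, gaze shifts, head motion
    X = rng.normal(size=(120, 4))
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # toy low/high presence label

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)      # cross-validated accuracy
    print(f"mean CV accuracy: {scores.mean():.2f}")
    ```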

    Multimodal behavioral cues analysis of the sense of presence and co-presence during a social interaction with a virtual patient

    A key challenge when studying human-agent interaction is the evaluation of the user's experience. In virtual reality, this question is addressed through the study of the sense of presence and co-presence, generally assessed through well-grounded subjective post-experience questionnaires. In this article, we aim to correlate objective multimodal cues produced by users with their subjective sense of presence and co-presence. Our study is based on a human-agent interaction corpus collected in a task-oriented context: a virtual environment for training doctors to break bad news to a patient played by a virtual agent. Based on a corpus study, we used machine learning approaches to explore the possibility of automatically predicting the user's sense of presence and co-presence from specific multimodal behavioral cues. The performance of the random forest models demonstrates the capacity to automatically and accurately predict the level of presence. It also shows the relevance of a multimodal model, based on verbal and non-verbal behavioral cues, as an objective measure of presence.
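
    Since this abstract emphasizes the relevance of individual multimodal cues, a natural follow-up is to inspect a fitted forest's feature importances. This is a sketch under the same synthetic-data assumption; the cue names are illustrative, not the paper's feature set.

    ```python
    # Illustrative cue-relevance inspection via random forest feature importances.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    cue_names = ["speech_rate", "pause_ratio", "gaze_shifts", "head_motion"]
    X = rng.normal(size=(120, 4))                  # synthetic cue features
    y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # toy presence label

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    for name, imp in sorted(zip(cue_names, clf.feature_importances_),
                            key=lambda t: -t[1]):
        print(f"{name}: {imp:.3f}")                # larger = more relevant cue
    ```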

    Impact of the Virtual Audience's Nonverbal Behavior on Users' Perception of Social Attitudes

    In a virtual reality public speaking training system, it is essential to control the audience's nonverbal behavior in order to simulate different attitudes. The virtual audience's social attitude is generally represented by a two-dimensional valence-arousal model describing the opinion and engagement of the virtual characters. In this article, we argue that the valence-arousal representation is not sufficient to describe the user's perception of a virtual character's social attitude. We propose a three-dimensional model that divides the valence axis into two dimensions representing the epistemic and affective stance of the virtual character, reflecting the character's agreement and emotional reaction. To assess how the virtual characters' nonverbal behavior is perceived on these two new dimensions, we conducted a perceptual study in virtual reality with 44 participants, who evaluated 50 animations combining multimodal nonverbal behavioral signals such as head movements, facial expressions, gaze direction, and body posture. The results of our experiment show that the valence axis should indeed be divided into two axes to account for the perception of the virtual character's epistemic and affective stance. Furthermore, the results show that one behavioral signal is predominant for the evaluation of each dimension: head movements for the epistemic dimension and facial expressions for the affective dimension. These results provide useful guidelines for designing the nonverbal behavior of a virtual audience to simulate social attitudes.
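
    A toy encoding of the proposed three-dimensional attitude model, with the valence axis split into epistemic and affective stance and arousal kept as the third dimension. The field names and value ranges are our own illustration, not the paper's specification.

    ```python
    # Hypothetical representation of the three-dimensional social-attitude model.
    from dataclasses import dataclass

    @dataclass
    class AudienceAttitude:
        epistemic: float  # -1 (disagreement) .. +1 (agreement); cued mainly by head movements
        affective: float  # -1 (negative) .. +1 (positive emotion); cued mainly by facial expressions
        arousal: float    #  0 (disengaged) .. 1 (highly engaged)

    # e.g. a skeptical but calm listener
    skeptic = AudienceAttitude(epistemic=-0.8, affective=-0.2, arousal=0.4)
    print(skeptic)
    ```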

    Cross-linguistic gender congruency effects during lexical access in novice L2 learners: evidence from ERPs

    Herein we present electrophysiological evidence of extremely rapid learning of new labels in a second language (L2) for existing concepts, via computerized games. However, the effect was largely constrained by cross-linguistic grammatical gender congruency. We recorded ERPs both prior to exposure to the L2 and following a 4-day training session. Prior to exposure, no modulation of the N400 component was found as a function of the Match vs. Mismatch between the audio presentation of words and their associated images. Post-training, a large N400 effect emerged for Mismatch compared to Match trials, but only for trials on which the L2 words shared grammatical gender with the learners' L1. Behavioral results showed that all L2 words were learned equally well, independent of gender congruency across the L1 and the L2. The results demonstrate that cross-linguistic grammatical gender congruency influences lexical activation during the initial stages of establishing a new L2 lexicon.
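
    A hedged sketch of the core N400 measurement this abstract reports: comparing mean ERP amplitude in a canonical 300-500 ms window between Match and Mismatch trials. The epochs, sampling rate, and effect size below are synthetic.

    ```python
    # Synthetic single-channel ERP epochs; the N400 window choice is conventional.
    import numpy as np

    sfreq = 250                                    # Hz, assumed sampling rate
    times = np.arange(-0.2, 0.8, 1 / sfreq)        # epoch from -200 to 800 ms
    rng = np.random.default_rng(0)
    match = rng.normal(size=(40, times.size))      # trials x samples
    mismatch = rng.normal(size=(40, times.size)) - 2.0 * ((times > 0.3) & (times < 0.5))

    win = (times >= 0.3) & (times <= 0.5)          # classic N400 window
    effect = mismatch[:, win].mean() - match[:, win].mean()
    print(f"mean amplitude difference (mismatch - match): {effect:.2f} µV")
    ```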

    Hand Trajectory Analysis in the Study of Premotor Activity: An Exploration

    Here we present a first step in exploring the use of movement trajectory data in single-trial EEG analysis, based on the hypothesis that certain features of the actual movement to be executed may be reflected in premotor activity. If such features can be effectively characterized, they may assist the study of premotor activity, present in MEEG signals, on a single-trial basis.
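
    The abstract leaves the trajectory features unspecified; as a purely illustrative assumption, kinematic summaries such as path length and peak speed could serve as candidate features to relate to premotor activity.

    ```python
    # Toy hand trajectory and two candidate kinematic features (our assumption).
    import numpy as np

    dt = 0.01                                       # assumed 100 Hz motion capture
    t = np.arange(0, 1, dt)
    traj = np.stack([t, np.sin(np.pi * t), np.zeros_like(t)], axis=1)  # x, y, z in m

    steps = np.diff(traj, axis=0)
    speeds = np.linalg.norm(steps, axis=1) / dt
    print(f"path length: {np.linalg.norm(steps, axis=1).sum():.2f} m, "
          f"peak speed: {speeds.max():.2f} m/s")
    ```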

    It sounds real when you see it. Realistic sound source simulation in multimodal virtual environments

    Designing multimodal virtual environments promises revolutionary advances in interacting with computers in the near future. In this paper, we report the results of an experimental investigation of the possible use of surround-sound systems to support visualization, taking advantage of increased knowledge about how spatial perception and attention work in the human brain. We designed two auditory-visual cross-modal experiments in which noise bursts and light blobs were presented synchronously but with spatial offsets. We presented sounds in two ways: using free-field sounds and using a stereo speaker set. Participants were asked to localize the direction of the sound sources. In the first experiment, visual stimuli were displaced vertically relative to the sounds; in the second, we used horizontal offsets. We found that, in both experiments, sounds were mislocalized in the direction of the visual stimuli in each condition (the ventriloquism effect), but the effect was stronger when visual stimuli were displaced vertically rather than horizontally. Moreover, we found that the ventriloquism effect is strongest for centrally presented sounds. The analyses also revealed a variation between the different sound presentation modes. We interpret our results from the viewpoint of multimodal interface design. These findings draw attention to the importance of cognitive features of multimodal perception in the design of virtual environment setups and may help open the way to more realistic surround-based multimodal virtual reality simulations.
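
    One simple way to quantify the ventriloquism effect described here is the fraction of the audio-visual offset by which reported sound directions shift toward the visual stimulus. The numbers below are made up for illustration.

    ```python
    # Toy ventriloquism-bias computation; responses are synthetic.
    import numpy as np

    true_sound_az = 0.0                              # sound straight ahead (deg)
    visual_offset = 10.0                             # light blob displaced 10 deg
    reported_az = np.random.default_rng(0).normal(4.0, 2.0, 30)  # fake responses

    bias = (reported_az - true_sound_az).mean() / visual_offset
    print(f"ventriloquism bias: {bias:.0%} of the visual offset")
    ```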

    Interaction between reference frames during subjective vertical estimates in a tilted immersive virtual environment

    Numerous studies have highlighted the influence of a tilted visual frame on the perception of the visual vertical (the 'rod-and-frame effect', or RFE). Here, we investigated whether this influence can be modified in an immersive virtual environment (CAVE-like) by the structure of the visual scene and by the adjustment mode, allowing visual or visuo-kinaesthetic control (V and VK modes, respectively). We also investigated how this influence might dynamically evolve throughout the adjustment, in two groups of subjects with the head unrestrained or restrained upright. The RFE observed in the immersive environment was qualitatively comparable to that obtained with a real display (portable rod-and-frame test; Oltman 1968, Perceptual and Motor Skills 26 503-506). Moreover, the RFE in the immersive environment was significantly influenced by the structure of the visual scene and by the adjustment mode: the more geometrical and meaningful 3-D features the visual scene contained, the greater the RFE. The RFE was also greater when the subjective vertical was assessed under visual control only, as compared to visuo-kinaesthetic control. Furthermore, the results showed a significant RFE increase throughout the adjustment, indicating that the influence of the visual scene on the subjective vertical might dynamically evolve over time. The latter effect was more pronounced for structured visual scenes and under visuo-kinaesthetic control. On the other hand, no difference was observed between the two groups of subjects with the head restrained or unrestrained. These results are discussed in terms of a dynamic combination between coexisting reference frames for spatial orientation.
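
    As a minimal illustration of the RFE measure, the effect can be expressed as the shift in subjective-vertical settings under a tilted frame relative to an upright frame. All values are synthetic placeholders, not the study's data.

    ```python
    # Toy rod-and-frame effect: mean setting error, tilted vs upright frame.
    import numpy as np

    rng = np.random.default_rng(0)
    upright_err = rng.normal(0.0, 1.0, 20)   # setting error in degrees
    tilted_err = rng.normal(3.5, 1.5, 20)    # frame tilted, e.g. by 18 deg

    rfe = tilted_err.mean() - upright_err.mean()
    print(f"RFE: subjective vertical deviates {rfe:.1f} deg toward the frame tilt")
    ```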