
    GiAnt: stereoscopic-compliant multi-scale navigation in VEs

    Navigation in multi-scale virtual environments (MSVEs) requires adjusting the navigation parameters to ensure an optimal navigation experience at each level of scale. In particular, in immersive stereoscopic systems, e.g. when performing zoom-in and zoom-out operations, the navigation speed and the stereoscopic rendering parameters have to be adjusted accordingly. Although this adjustment can be done manually by the user, it can be complex and tedious, and it strongly depends on the virtual environment. In this work we propose a new multi-scale navigation technique named GiAnt (GIant/ANT) which automatically and seamlessly adjusts the navigation speed and the scale factor of the virtual environment based on the user's perceived navigation speed. The adjustment ensures an almost-constant perceived navigation speed while avoiding diplopia effects or diminished depth perception due to improper stereoscopic rendering configurations. The results of the conducted user evaluation show that GiAnt is an efficient multi-scale navigation technique which minimizes changes to the scale factor of the virtual environment compared to state-of-the-art multi-scale navigation techniques.
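    A minimal sketch of the kind of adjustment described above, under the assumption of a per-frame update in which a world scale factor and a travel speed are co-adjusted so that the perceived, scale-relative speed stays roughly constant; the function, parameter names and constants are illustrative, not the paper's implementation:

```python
# Illustrative sketch (not the paper's implementation): co-adjust the world
# scale factor and the travel speed so the perceived speed, i.e. the travel
# speed expressed relative to the current scale, stays roughly constant.

TARGET_PERCEIVED_SPEED = 1.0   # assumed target, in scale-relative units per second
ADAPTATION_RATE = 0.5          # assumed smoothing factor (1/s)

def update_scale_and_speed(scale_factor, commanded_speed, dt):
    """One update step: returns (new_scale_factor, rendered_speed).

    commanded_speed: raw navigation speed requested by the user (m/s).
    """
    perceived_speed = commanded_speed / scale_factor
    error = perceived_speed - TARGET_PERCEIVED_SPEED
    # Smoothly rescale the environment so the perceived speed drifts back
    # toward the target instead of jumping abruptly (abrupt changes would
    # also force sudden stereo-parameter changes).
    scale_factor *= 1.0 + ADAPTATION_RATE * dt * error / TARGET_PERCEIVED_SPEED
    # The speed actually applied to the viewpoint follows the new scale,
    # keeping the perceived speed near the target.
    rendered_speed = TARGET_PERCEIVED_SPEED * scale_factor
    return scale_factor, rendered_speed
```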

    VIRTUAL REALITY & SPORT

    This applied session deals with the design of immersive environments for human motion performance analysis. In the first part of the session, a theoretical presentation describes the aims and scope of such experiments. In the second part, a review of the available immersive systems will be presented. Finally, a practical framework will be designed in real time with the attendees: a low-cost immersive environment based on a Microsoft Kinect, a Razer Hydra and an Oculus Rift head-mounted display. We will develop an experiment to analyse perception-action coupling in soccer with simulated virtual opponents, enabling analysis of the decision-making of a real goalkeeper.

    Move or Push? Studying Pseudo-Haptic Perceptions Obtained with Motion or Force Input

    Pseudo-haptic techniques are an interesting alternative for generating haptic perceptions: they manipulate haptic perception through the appropriate alteration of primarily visual feedback in response to body movements. However, the use of pseudo-haptic techniques with a motion-input system can sometimes be limited. This paper investigates a novel approach for extending the potential of pseudo-haptic techniques in virtual reality (VR). The proposed approach uses the reaction force from a force input as a substitute haptic cue for pseudo-haptic perception. The paper introduces a manipulation method in which the vertical acceleration of the virtual hand is controlled by the extent to which a force sensor is pushed in. Such force-input manipulation of a virtual body can not only present pseudo-haptics within less physical space and be usable by a wider range of users, including physically handicapped people, but can also present a reaction force proportional to the user's input. We hypothesized that such a haptic force cue would contribute to pseudo-haptic perception. The paper therefore investigates force-input pseudo-haptic perception in comparison with motion-input pseudo-haptics, comparing force-input and motion-input manipulation in terms of the achievable range and resolution of pseudo-haptic weight. The experimental results suggest that force-input manipulation extends the range of perceptible pseudo-weight by 80% compared to motion-input manipulation. On the other hand, motion-input manipulation yields one more distinguishable weight level and is easier to operate than force-input manipulation. Comment: This paper is now under review for IEEE Transactions on Visualization and Computer Graphics.
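    A minimal sketch of the force-input mapping described above, assuming a normalized force-sensor reading and illustrative gain constants; the names and values are assumptions, not the paper's parameters:

```python
# Illustrative sketch (not the authors' implementation): drive the virtual
# hand's vertical motion from how far a force sensor is pushed in.

FORCE_TO_ACCEL_GAIN = 2.0   # assumed gain from normalized push-in to m/s^2

def step_virtual_hand(height, velocity, push_in, dt, gain=FORCE_TO_ACCEL_GAIN):
    """Advance the virtual hand by one time step.

    push_in: normalized force-sensor reading in [0, 1]; the harder the user
    presses, the larger the upward acceleration of the virtual hand.
    """
    # Upward acceleration grows with the extent of push-in; a "heavier"
    # virtual object would use a smaller gain, which is how pseudo-weight
    # can be rendered with this kind of mapping.
    acceleration = gain * push_in
    velocity += acceleration * dt
    height += velocity * dt
    return height, velocity
```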

    Leveraging Tendon Vibration to Enhance Pseudo-Haptic Perceptions in VR

    Pseudo-haptic techniques are used to modify haptic perception by appropriately changing visual feedback in response to body movements. Based on the knowledge that tendon vibration can affect our somatosensory perception, this paper proposes a method for leveraging tendon vibration to enhance pseudo-haptics during free arm motion. Three experiments were performed to examine the impact of tendon vibration on the range and resolution of pseudo-haptics. The first experiment investigated the effect of tendon vibration on the detection threshold of the discrepancy between visual and physical motion. The results indicated that vibrations applied to the inner tendons of the wrist and elbow increased the threshold, suggesting that tendon vibration can augment the applicable visual motion gain by approximately 13% without users detecting the visual/physical discrepancy. Furthermore, the results demonstrate that tendon vibration acts as noise on haptic motion cues. The second experiment assessed the impact of tendon vibration on the resolution of pseudo-haptics by determining the just-noticeable difference in pseudo-weight perception. The results suggested that tendon vibration does not substantially compromise the resolution of pseudo-haptics. The third experiment evaluated the equivalence between the weight perception triggered by tendon vibration and that triggered by visual motion gain, that is, the point of subjective equality. The results revealed that vibration amplifies weight perception and that its effect is equivalent to that obtained using a gain of 0.64 without vibration, implying that tendon vibration also functions as an additional haptic cue. Our results provide design guidelines and directions for future work on enhancing pseudo-haptics with tendon vibration. Comment: This paper has been accepted by IEEE TVCG.
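    A minimal sketch of the visual motion gain manipulation that the detection-threshold and point-of-subjective-equality experiments build on; the function and names are illustrative, but gain values such as the 0.64 mentioned above refer to this kind of mapping:

```python
# Illustrative sketch: apply a visual gain to the tracked hand motion so the
# virtual hand moves more slowly (gain < 1) or faster (gain > 1) than the real
# hand. A gain below 1 typically makes a lifted virtual object feel heavier.

def virtual_hand_position(prev_virtual_pos, prev_real_pos, real_pos, gain):
    """Map a real hand displacement to a virtual hand displacement."""
    real_displacement = [r - p for r, p in zip(real_pos, prev_real_pos)]
    return [v + gain * d for v, d in zip(prev_virtual_pos, real_displacement)]
```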

    Electrotactile feedback applications for hand and arm interactions: A systematic review, meta-analysis, and future directions

    Haptic feedback is critical in a broad range of human-machine/computer-interaction applications. However, the high cost and low portability/wearability of haptic devices remain unresolved issues, severely limiting the adoption of this otherwise promising technology. Electrotactile interfaces have the advantage of being more portable and wearable due to the reduced size of their actuators, as well as their lower power consumption and manufacturing cost. Applications of electrotactile feedback have been explored in human-computer interaction and human-machine interaction for facilitating hand-based interactions in areas such as prosthetics, virtual reality, robotic teleoperation, surface haptics, portable devices, and rehabilitation. This paper presents a technological overview of electrotactile feedback, as well as a systematic review and meta-analysis of its applications for hand-based interactions. We discuss the different electrotactile systems according to the type of application. We also discuss a quantitative aggregation of the findings to offer a high-level overview of the state of the art and to suggest future directions. Electrotactile feedback systems showed increased portability/wearability, and they were successful in rendering and/or augmenting most tactile sensations, eliciting perceptual processes, and improving performance in many scenarios. However, knowledge gaps (e.g., embodiment) as well as technical (e.g., recurrent calibration, electrode durability) and methodological (e.g., sample size) drawbacks were detected, which should be addressed in future studies. Comment: 18 pages, 1 table, 8 figures, under review in Transactions on Haptics. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. Upon acceptance of the article by IEEE, the preprint article will be replaced with the accepted version.
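    Since the paper reports a meta-analysis, one standard way to aggregate effect sizes across studies is a random-effects model. The sketch below shows generic DerSimonian-Laird pooling on placeholder numbers; it is not the authors' actual analysis or data:

```python
# Illustrative sketch of random-effects meta-analysis pooling
# (DerSimonian-Laird); the input numbers are placeholders, not study data.
import math

def pool_effects(effects, variances):
    """Pool per-study effect sizes with a random-effects model."""
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fixed_mean = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed_mean) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                          # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se

# Placeholder usage with made-up effect sizes and variances:
pooled, se = pool_effects([0.4, 0.6, 0.3], [0.02, 0.03, 0.05])
```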

    Cybersickness in Virtual Reality Questionnaire (CSQ-VR): A validation and comparison against SSQ and VRSQ

    Cybersickness is a drawback of virtual reality (VR) which also affects the cognitive and motor skills of users. The Simulator Sickness Questionnaire (SSQ) and its variant, the Virtual Reality Sickness Questionnaire (VRSQ), are two tools that measure cybersickness. However, both tools suffer from important limitations, which raises concerns about their suitability. Two versions of the Cybersickness in VR Questionnaire (CSQ-VR), a paper-and-pencil version and a 3D VR version, were developed. Validation and comparison of the CSQ-VR against the SSQ and VRSQ were performed. Thirty-nine participants were exposed to three rides with linear and angular accelerations in VR. Assessments of cognitive and psychomotor skills were performed at baseline and after each ride. The validity of both versions of the CSQ-VR was confirmed. Notably, the CSQ-VR demonstrated substantially better internal consistency than both the SSQ and VRSQ. Also, CSQ-VR scores had significantly better psychometric properties in detecting a temporary decline in performance due to cybersickness. Pupil size was a significant predictor of cybersickness intensity. In conclusion, the CSQ-VR is a valid assessment of cybersickness, with psychometric properties superior to those of the SSQ and VRSQ. The CSQ-VR enables the assessment of cybersickness during VR exposure, and it benefits from examining pupil size, a biomarker of cybersickness.
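    Internal consistency of questionnaire scores, as compared above across the CSQ-VR, SSQ and VRSQ, is typically reported as Cronbach's alpha. The sketch below is a generic computation on placeholder data, not the study's analysis:

```python
# Illustrative sketch: Cronbach's alpha for a participants x items score matrix.
import numpy as np

def cronbach_alpha(scores):
    """scores: 2D array, rows = participants, columns = questionnaire items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items
    item_vars = scores.var(axis=0, ddof=1).sum()     # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Placeholder usage with arbitrary example responses:
alpha = cronbach_alpha([[1, 2, 2], [3, 3, 4], [2, 2, 3], [4, 5, 4]])
```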

    Avatar and Sense of Embodiment: A Study of the Relative Preference Between Appearance, Control, and Point of View

    In virtual reality, a number of studies have been conducted to evaluate the influence of avatar appearance, avatar control, and the user's point of view on the sense of embodiment toward a virtual avatar. However, these studies tend to explore each factor in isolation. This article aims to better understand the interrelations between these three factors by conducting a subjective matching experiment. In the experiment presented (n=40), participants had to recover the high sense of embodiment felt in an optimal avatar configuration (realistic avatar, full-body motion capture, first-person point of view), starting from a minimal configuration (minimal avatar, no control, third-person point of view) and iteratively increasing the level of each factor. The participants' choices provide insight into their preferences and their perception of the three factors considered. In addition, the subjective matching procedure was carried out in four different interaction tasks in order to cover a broad range of actions a user may perform through an avatar in a virtual environment. The results of the subjective matching experiment show that users consistently increased the point-of-view and control levels before the appearance levels when improving embodiment. Several configurations were then identified that elicited a sense of embodiment equivalent to the one felt in the optimal configuration but that varied between tasks. Taken together, our results provide valuable guidance on which factors to prioritize in order to improve the sense of embodiment toward an avatar in different tasks, and on the configurations that provide sufficient embodiment in the virtual environment.
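    A minimal sketch of an iterative subjective matching loop of the kind described above, with assumed factor names, level counts and callback interfaces; this is an illustration of the general procedure, not the authors' exact protocol:

```python
# Illustrative sketch of a subjective matching procedure: starting from the
# minimal configuration, the participant repeatedly chooses which factor to
# upgrade until their sense of embodiment matches the optimal reference.

FACTOR_LEVELS = {          # assumed number of levels per factor
    "appearance": 3,       # minimal -> intermediate -> realistic avatar
    "control": 3,          # none -> partial -> full-body motion capture
    "point_of_view": 2,    # third-person -> first-person
}

def run_matching(choose_factor, embodiment_matches):
    """choose_factor(config, upgradable): returns the factor to raise next.
    embodiment_matches(config): returns True once the participant reports an
    embodiment equivalent to the optimal configuration."""
    config = {factor: 0 for factor in FACTOR_LEVELS}   # minimal configuration
    history = [dict(config)]
    while not embodiment_matches(config):
        upgradable = [f for f, lvl in config.items()
                      if lvl < FACTOR_LEVELS[f] - 1]
        if not upgradable:          # reached the optimal configuration
            break
        config[choose_factor(config, upgradable)] += 1
        history.append(dict(config))
    return history
```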

    Audio-Visual Attractors for Capturing Attention to the Screens When Walking in CAVE Systems

    In four-sided CAVE-like VR systems, the absence of the rear wall has been shown to decrease the level of immersion and can introduce breaks in presence. This paper investigates to what extent users' attention can be driven by visual and auditory stimuli in a four-sided CAVE-like system. An experiment was conducted to analyze how user attention is diverted while physically walking in a virtual environment when audio and/or visual attractors are present. The four-sided CAVE used in the experiment allowed users to walk up to 9 m in a straight line. An additional key feature of the experiment is that auditory feedback was delivered through binaural audio rendering via non-personalized head-related transfer functions (HRTFs). The audio rendering depended on the user's head position and orientation, enabling localized sound rendering. The experiment analyzed how different "attractors" (audio and/or visual, static or dynamic) modify the user's attention. The results show that audio-visual attractors are the most efficient at keeping the user's attention toward the inside of the CAVE. The knowledge gathered in the experiment can provide guidelines for the design of virtual attractors that keep the user's attention and avoid the "missing wall".
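    A minimal sketch of the head-tracked binaural step described above: converting a sound source position into a head-relative azimuth/elevation so that a matching HRTF pair can be selected. The names and frame conventions are illustrative, and the HRTF lookup itself is left abstract:

```python
# Illustrative sketch: compute the head-relative direction of a sound source
# from the tracked head position/orientation, as needed to pick an HRTF filter.
import numpy as np

def source_direction_head_frame(source_pos, head_pos, head_rotation):
    """head_rotation: 3x3 matrix mapping head-frame vectors to world frame.
    Returns (azimuth_deg, elevation_deg) of the source relative to the head,
    assuming head-frame axes x = front, y = left, z = up."""
    v_world = np.asarray(source_pos, float) - np.asarray(head_pos, float)
    v_head = np.asarray(head_rotation, float).T @ v_world   # world -> head frame
    x, y, z = v_head
    azimuth = np.degrees(np.arctan2(y, x))
    elevation = np.degrees(np.arctan2(z, np.hypot(x, y)))
    return azimuth, elevation

# The (azimuth, elevation) pair would then index into the non-personalized
# HRTF set to filter the source signal for the left and right ears.
```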
