13 research outputs found

    Multimodal metaphors for generic interaction tasks in virtual environments

    Full text link
    Virtual reality (VR) systems provide additional input and output channels for human-computer interaction in virtual environments. Such VR technologies give users better insight into highly complex data sets, but they also place high demands on the user's ability to interact with virtual objects. This work presents and discusses the development and evaluation of new multimodal interaction metaphors for generic interaction tasks in virtual environments. Using a VR system, the application of these concepts is demonstrated in two case studies from the domains of 3D city visualization and seismic volume rendering.

    In The Blink Of An Eye - Leveraging Blink-Induced Suppression For Imperceptible Position And Orientation Redirection In Virtual Reality

    No full text
    Immersive computer-generated environments (also known as virtual reality, VR) are limited by the physical space around them; for example, natural walking in VR is only possible with perceptually inspired locomotion techniques such as redirected walking (RDW). We introduce a completely new approach to imperceptible position and orientation redirection that takes advantage of the fact that even healthy humans are functionally blind for circa ten percent of the time under normal circumstances, due to motor processes preventing light from reaching the retina (such as eye blinks) or perceptual processes suppressing degraded visual information (such as blink-induced suppression). During such periods of missing visual input, change blindness occurs, which denotes the inability to perceive a visual change such as the motion of an object or self-motion of the observer. We show that this phenomenon can be exploited in VR by synchronizing the computer graphics rendering system with the human visual processes for imperceptible camera movements, in particular to implement position and orientation redirection. We analyzed human sensitivity to such visual changes with detection thresholds, which revealed that commercial off-the-shelf eye trackers and head-mounted displays suffice to translate a user by circa 4-9 cm and rotate the user by circa 2-5 degrees in any direction, which can be accumulated each time the user blinks. Moreover, we show the potential for RDW, whose performance could be improved by approximately 50% when using our technique.
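
    To make the redirection mechanism above concrete, here is a minimal Python sketch of blink-triggered camera redirection: when the eye tracker reports a blink, the virtual camera is offset by a sub-threshold translation and rotation toward a redirection goal. The per-blink limits follow the thresholds reported in the abstract (circa 4-9 cm and 2-5 degrees); the eye tracker and camera interfaces are hypothetical placeholders, not the authors' implementation.

        import math

        # Conservative ends of the reported per-blink detection thresholds.
        MAX_TRANSLATION_M = 0.04   # circa 4-9 cm translation per blink
        MAX_ROTATION_DEG = 2.0     # circa 2-5 degrees rotation per blink

        def redirect_on_blink(eye_tracker, camera, remaining_offset_m, remaining_yaw_deg):
            """Apply one sub-threshold camera correction per detected blink."""
            if not eye_tracker.blink_detected():        # hypothetical tracker API
                return
            # Clamp the remaining redirection to what a single blink can hide.
            step_t = max(-MAX_TRANSLATION_M, min(MAX_TRANSLATION_M, remaining_offset_m))
            step_r = max(-MAX_ROTATION_DEG, min(MAX_ROTATION_DEG, remaining_yaw_deg))
            camera.translate_forward(step_t)            # hypothetical camera API
            camera.rotate_yaw(math.radians(step_r))     # hypothetical camera API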

    Shrinking Circles: Adaptation to Increased Curvature Gain in Redirected Walking

    Full text link
    Real walking is the most natural way to locomote in virtual reality (VR), but a confined physical walking space limits its applicability. Redirected walking (RDW) is a collection of techniques to solve this problem. One of these techniques aims to imperceptibly rotate the user's view of the virtual scene in order to steer her along a confined path whilst giving the impression of walking in a straight line in a large virtual space. Measurements of perceptual thresholds for the detection of such a modified curvature gain have indicated a radius that is still larger than most room sizes. Since the brain is an adaptive system and thresholds usually depend on previous stimulation, we tested whether prolonged exposure to an immersive virtual environment (IVE) with increased curvature gain produces adaptation to that gain and modifies thresholds such that, over time, larger curvature gains can be applied for RDW. Participants first completed a measurement of their perceptual threshold for curvature gain. In a second session, the same participants were exposed to an IVE with a constant curvature gain in which they walked between two targets for about 20 minutes. Afterwards, their perceptual thresholds were measured again. The results show that the psychometric curves shifted after the exposure session and that perceptual thresholds for increased curvature gain increased further. This increase in the detection threshold suggests that participants adapt to the manipulation and that stronger curvature gains can be applied in RDW, which improves its applicability in confined spaces.
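
    As a worked illustration of the curvature-gain idea described above, the following Python sketch shows the basic geometry: to bend a physically straight walk onto a circle of radius r, a yaw rotation of 1/r radians per meter walked is injected into the view, so higher detection thresholds (smaller tolerable radii) allow stronger steering. The 22 m example radius is a commonly cited threshold value from the RDW literature, used here only for illustration.

        def curvature_yaw_delta(distance_walked_m, radius_m):
            """Yaw injection (radians) that bends the user's real path onto a
            circle of the given radius: angle = arc length / radius."""
            return distance_walked_m / radius_m

        # Example: with a threshold radius of 22 m, a 0.7 m stride allows an
        # imperceptible injection of 0.7 / 22 ≈ 0.032 rad ≈ 1.8 degrees.
        print(curvature_yaw_delta(0.7, 22.0))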

    Real-Time Rendering of Weather-Related Phenomena in Digital 3D Urban Models

    No full text
    General interest in visualizations of digital 3D city models is growing rapidly, and several applications are already available that display such models very realistically. Many authors have emphasized the importance of realistic illumination for computer-generated images, and this applies especially to the context of 3D city visualization. However, current 3D city visualization applications rarely implement techniques for achieving realistic illumination, in particular the effects caused by current weather-related phenomena. At most, some geospatial visualization systems render artificial skies, sometimes with a georeferenced determination of the sun position, to give the user the impression of a real sky. However, such artificial renderings are not sufficient for real simulation purposes. In this paper we present techniques to augment visualizations of digital 3D city models with real-time display of georeferenced meteorological phenomena. For this purpose we retrieve weather information from different sources, i.e., real-time images from cameras and radar data from web-based weather services, and we use this information in the rendering process for realistic visualization of weather-related phenomena such as clouds, rain, and fog. Our approach is not limited to a specific setup, and we have evaluated the results in a user study presented in this paper.
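
    As a hedged sketch of the data flow the abstract describes, the Python snippet below fetches a current weather observation from a web service and maps it to simple rendering parameters (cloud coverage, fog density, rain intensity). The URL, field names, and parameter ranges are assumptions for illustration; the paper's actual sources include camera images and radar data.

        import json
        import urllib.request

        def fetch_weather(url):
            """Retrieve a JSON weather observation from a web service."""
            with urllib.request.urlopen(url) as response:
                return json.load(response)

        def weather_to_render_params(obs):
            """Map an observation to normalized rendering parameters (0..1)."""
            visibility_km = obs.get("visibility_km", 10.0)
            return {
                "cloud_coverage": obs.get("cloud_percent", 0.0) / 100.0,
                "fog_density": min(1.0 / max(visibility_km, 0.1), 1.0),
                "rain_intensity": min(obs.get("precip_mm_h", 0.0) / 10.0, 1.0),
            }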

    A Comparison of Eye Tracking Latencies Among Several Commercial Head-Mounted Displays

    Get PDF
    A number of virtual reality head-mounted displays (HMDs) with integrated eye trackers have recently become commercially available. If their eye tracking latency is low and reliable enough for gaze-contingent rendering, this may open up many interesting opportunities for researchers. We measured eye tracking latencies for the Fove-0, the Varjo VR-1, and the High Tech Computer Corporation (HTC) Vive Pro Eye using simultaneous electrooculography measurements. We determined the time from the occurrence of an eye position change to its availability as a data sample from the eye tracker (delay) and the time from an eye position change to the earliest possible change of the display content (latency). For each test and each device, participants performed 60 saccades between two targets 20 degrees of visual angle apart. The targets were continuously visible in the HMD, and the saccades were instructed by an auditory cue. Data collection and eye tracking calibration were done using the recommended scripts for each device in Unity3D. The Vive Pro Eye was recorded twice, once using the SteamVR SDK and once using the Tobii XR SDK. Our results show clear differences between the HMDs. Delays ranged from 15 ms to 52 ms, and latencies from 45 ms to 81 ms. The Fove-0 appears to be the fastest device and best suited for gaze-contingent rendering.
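
    The delay measurement described above can be summarized in a short Python sketch: given the saccade onset time from the reference electrooculography signal and timestamped gaze samples from the HMD's eye tracker, the delay is the time until the tracker first reports the shifted gaze position. The data layout and movement threshold are illustrative assumptions, not the authors' analysis code.

        def tracker_delay_ms(eog_onset_ms, samples, threshold_deg=1.0):
            """samples: list of (timestamp_ms, gaze_deg) tuples from the tracker.
            Returns the time from EOG saccade onset to the first sample in which
            the reported gaze has moved beyond threshold_deg, or None."""
            baseline_deg = samples[0][1]
            for t_ms, gaze_deg in samples:
                if t_ms >= eog_onset_ms and abs(gaze_deg - baseline_deg) > threshold_deg:
                    return t_ms - eog_onset_ms
            return None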