
    A Comparison of Head-Mounted Displays vs. Large-Screen Displays for an Interactive Pedestrian Simulator

    This investigation compared how people performed a complex perception-action task, crossing traffic-filled roadways, in a CAVE vs. an HMD virtual environment. Participants physically crossed a virtual roadway with continuous cross traffic in either a CAVE-like or an HTC Vive pedestrian simulator. The 3D model and traffic scenario were identical in both simulators, allowing for a direct comparison between the two display systems. We found that participants in the Vive group accepted smaller gaps for crossing than participants in the CAVE group. They also timed their entry into the gap more precisely and tended to cross somewhat more quickly. As a result, participants in the Vive group had a somewhat larger margin of safety when they exited the roadway than those in the CAVE group. Participants in the CAVE group focused their gaze further down the road and had more variability in their gaze distances. The results provide a foundation for future studies of pedestrian behavior and other tasks involving full-body motion using HMD-based VR.

    Interaction and presentation techniques for shake menus in tangible augmented reality

    Menus play an important role in both information presentation and system control. We explore the design space of shake menus, which are intended for use in tangible augmented reality. Shake menus are radial menus displayed centered on a physical object and activated by shaking that object. One important aspect of their design space is the coordinate system used to present menu options. We conducted a within-subjects user study to compare the speed and efficacy of several alternative methods for presenting shake menus in augmented reality (world-referenced, display-referenced, and object-referenced), along with a baseline technique (a linear menu on a clipboard). Our findings suggest trade-offs amongst speed, efficacy, and flexibility of interaction, and point towards the possible advantages of hybrid approaches that compose together transformations in different coordinate systems. We close by describing qualitative feedback from use and present several illustrative applications of the technique.

    Barehand Mode Switching in Touch and Mid-Air Interfaces

    Raskin defines a mode as a distinct setting within an interface where the same user input produces results different from those it would produce in other settings. Most interfaces have multiple modes in which input is mapped to different actions, and mode-switching is simply the transition from one mode to another. In touch interfaces, the current mode can change how a single touch is interpreted: for example, it could draw a line, pan the canvas, select a shape, or enter a command. In Virtual Reality (VR), a hand gesture-based 3D modelling application may have different modes for object creation, selection, and transformation. Depending on the mode, the movement of the hand is interpreted differently. However, one of the crucial factors determining the effectiveness of an interface is user productivity. Mode-switching time of different input techniques, whether in a touch interface or in a mid-air interface, affects user productivity. Moreover, when touch and mid-air interfaces like VR are combined, making informed decisions about mode assignment becomes even more complicated. This thesis provides an empirical investigation to characterize the mode-switching phenomenon in barehand touch-based and mid-air interfaces. It explores the potential of using these input spaces together for a productivity application in VR. It concludes with a step towards defining and evaluating the multi-faceted mode concept, its characteristics, and its utility when designing user interfaces more generally.

    Intelligent tutoring in virtual reality for highly dynamic pedestrian safety training

    This thesis presents the design, implementation, and evaluation of an Intelligent Tutoring System (ITS) with a Virtual Reality (VR) interface for child pedestrian safety training. This system enables children to train practical skills in a safe and realistic virtual environment without the time and space dependencies of traditional roadside training. The system also employs Domain and Student Modeling techniques to analyze user data during training automatically and to provide appropriate instructions and feedback. Thus, the traditional requirement of constant monitoring by teaching personnel is greatly reduced. Compared to previous work, the second aspect in particular is a principal novelty for this domain. To achieve this, a novel Domain and Student Modeling method was developed, in addition to a modular and extensible virtual environment for the target domain. While the Domain and Student Modeling framework is designed to handle the highly dynamic nature of training in traffic and the ill-defined characteristics of pedestrian tasks, the modular virtual environment supports different interaction methods and a simple and efficient way to create and adapt exercises. The thesis is complemented by two user studies with elementary school children. These studies attest to high overall user acceptance and demonstrate the system's potential for improving key pedestrian skills through autonomous learning. Last but not least, the thesis presents experiments with different forms of VR input and provides directions for future work.

    Text Entry Performance and Situation Awareness of a Joint Optical See-Through Head-Mounted Display and Smartphone System

    Optical see-through head-mounted displays (OST HMDs) are a popular output medium for mobile Augmented Reality (AR) applications. To date, they lack efficient text entry techniques. Smartphones are a major text entry medium in mobile contexts, but attentional demands can contribute to accidents while typing on the go. Mobile multi-display ecologies, such as combined OST HMD-smartphone systems, promise performance and situation awareness benefits over single-device use. We study the joint performance of text entry on mobile phones with text output on optical see-through head-mounted displays. A series of five experiments with a total of 86 participants indicate that, as of today, the challenges in such a joint interactive system outweigh the potential benefits. (To appear in IEEE Transactions on Visualization and Computer Graphics, pp. 1-17.)

    Brief report: A pilot study of the use of a virtual reality headset in autism populations

    The application of virtual reality technologies (VRTs) for users with autism spectrum disorder (ASD) has been studied for decades. However, a gap remains in our understanding of VRT head-mounted displays (HMDs). As newly designed HMDs have become commercially available (in this study, the Oculus Rift™), the need to investigate newer devices is immediate. This study explored ASD participants' willingness, acceptance, sense of presence, and immersion. Results revealed that all 29 participants (mean age = 32; 33% with IQ < 70) were willing to wear the HMD. The majority of the participants reported an enjoyable experience and high levels of 'presence', and were likely to use HMDs again. IQ was found to be independent of the willingness to use HMDs and the related VRT immersion experience.