
    Beyond Halo and Wedge: Visualizing out-of-view objects on head-mounted virtual and augmented reality devices

    Head-mounted devices (HMDs) for Virtual and Augmented Reality (VR/AR) enable us to alter our visual perception of the world. However, current devices suffer from a limited field of view (FOV), which becomes problematic when users need to locate out-of-view objects (e.g., points of interest during sightseeing). To address this, we developed HaloVR, WedgeVR, HaloAR, and WedgeAR, inspired by the usable 2D off-screen object visualization techniques Halo and Wedge, and evaluated them in two studies. While our techniques resulted in overall high usability, we found that the choice of AR or VR affects mean search time (VR: 2.25 s, AR: 3.92 s) and mean direction estimation error (VR: 21.85°, AR: 32.91°). Moreover, while adding more out-of-view objects significantly affects search time in both VR and AR, direction estimation performance remains unaffected. We provide design implications and discuss the challenges of designing for VR and AR HMDs.
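
    To make the idea behind these techniques concrete, here is a minimal Python sketch (an illustrative model under stated assumptions, not the paper's implementation) that places an off-screen cue on a circular FOV border, with the cue's intrusion depth encoding the object's angular distance in the spirit of Halo; the fov_radius_deg and intrusion_gain parameters are assumptions.

    import math

    def halo_cue(yaw_deg, pitch_deg, fov_radius_deg=45.0, intrusion_gain=0.15):
        """Place a Halo-style cue for an out-of-view object (illustrative model).

        yaw_deg/pitch_deg: angular offset of the object from the view center.
        Returns the direction along the FOV border where the cue appears and
        how far the arc intrudes into view (deeper intrusion = farther object).
        """
        dist = math.hypot(yaw_deg, pitch_deg)      # angular distance to object
        if dist <= fov_radius_deg:
            return None                            # object is in view; no cue needed
        theta = math.atan2(pitch_deg, yaw_deg)     # cue direction on the border
        intrusion = intrusion_gain * (dist - fov_radius_deg)
        return {"border_angle_rad": theta, "intrusion_deg": intrusion}

    print(halo_cue(80.0, 20.0))   # object far to the right and slightly up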

    Multimedia Vocabularies on the Semantic Web

    This document gives an overview of the state of the art in multimedia metadata formats. First, vocabularies of practical relevance to developers of Semantic Web applications are listed according to their modality scope. The second part of the document focuses on the integration of these multimedia vocabularies into the Semantic Web, that is, on formal representations of the vocabularies.
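
    As a small illustration of what a formal Semantic Web representation of multimedia metadata looks like, the following Python sketch uses rdflib to describe a video resource with Dublin Core terms; the example namespace, resource, and property choices are hypothetical and not taken from the document.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import DCTERMS

    EX = Namespace("http://example.org/media/")   # hypothetical namespace

    g = Graph()
    video = EX["clip42"]                          # a media resource to describe
    g.add((video, DCTERMS.title, Literal("Harbor at dawn")))
    g.add((video, DCTERMS["format"], Literal("video/mp4")))
    g.add((video, DCTERMS.extent, Literal("PT2M30S")))   # duration, ISO 8601

    print(g.serialize(format="turtle"))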

    RadialLight: Exploring radial peripheral LEDs for directional cues in head-mounted displays

    Current head-mounted displays (HMDs) for Virtual Reality (VR) and Augmented Reality (AR) have a limited field of view (FOV). This limited FOV further decreases the already restricted human visual range and amplifies the problem of objects going out of view. Therefore, we explore the utility of augmenting HMDs with RadialLight, a peripheral light display implemented as 18 radially positioned LEDs around each eye to cue direction towards out-of-view objects. We first investigated direction estimation accuracy for multi-colored cues presented to one eye versus both eyes. We then evaluated direction estimation accuracy and search time for locating out-of-view objects in two representative 360° video VR scenarios. Key findings show that participants could not distinguish between LED cues presented to one or both eyes simultaneously, that participants estimated LED cue direction with an average deviation of at most 11.8°, and that out-of-view objects in less distracting scenarios were selected faster. Furthermore, we provide implications for building peripheral light displays into HMDs.
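
    As an illustration of the cueing principle (a sketch under assumptions, not the authors' implementation), the following Python snippet quantizes the direction toward an out-of-view object into one of 18 radial LED positions; the yaw/pitch parameterization of object direction is an assumption.

    import math

    def led_index(yaw_deg, pitch_deg, n_leds=18):
        """Quantize the direction toward an out-of-view object into one of
        n_leds radially arranged LEDs (RadialLight uses 18 per eye)."""
        theta = math.degrees(math.atan2(pitch_deg, yaw_deg)) % 360.0
        return round(theta / (360.0 / n_leds)) % n_leds

    print(led_index(60.0, -30.0))   # lights an LED toward the lower right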