
    Spherical tangible user interfaces in mixed reality

    The popularity of virtual reality (VR) and augmented reality (AR) has grown rapidly in recent years, both in academia and in commercial applications. This is rooted in technological advances and affordable head-mounted displays (HMDs). Whether in games or professional applications, HMDs allow for immersive audio-visual experiences that transport users to compelling digital worlds or convincingly augment the real world. However, as true to life as these experiences have become in a visual and auditory sense, the question remains how we can model interaction with these virtual environments in an equally natural way. Solutions providing intuitive tangible interaction would have the potential to make the mixed reality (MR) spectrum fundamentally more accessible, especially for novice users. Research on tangible user interfaces (TUIs) has pursued this goal by coupling virtual to real-world objects. Tangible interaction has been shown to provide significant advantages for numerous use cases. Spherical tangible user interfaces (STUIs) present a special case of these devices, mainly due to their ability to fully embody any spherical virtual content. In general, spherical devices increasingly transition from mere technology demonstrators to usable multi-modal interfaces. In this dissertation, we explore the application of STUIs in MR environments primarily by comparing them to state-of-the-art input techniques in four different contexts. We thus investigate questions of embodiment, overall user performance, and the ability of STUIs relying on their shape alone to support complex interaction techniques. First, we examine how spherical devices can embody immersive visualizations. In an initial study, we test the practicality of a tracked sphere embodying three kinds of visualizations. We examine simulated multi-touch interaction on a spherical surface and compare two different sphere sizes to VR controllers. Results confirmed our prototype's viability and indicated improved pattern recognition as well as advantages for the smaller sphere. Second, to further substantiate VR as a prototyping technology, we demonstrate how a large tangible spherical display can be simulated in VR. We show how VR can fundamentally extend the capabilities of real spherical displays by adding physical rotation to a simulated multi-touch surface. After a first study evaluating the general viability of simulating such a display in VR, our second study revealed the superiority of a rotating spherical display. Third, we present a concept for a spherical input device for tangible AR (TAR). We show how such a device can provide basic object manipulation capabilities using two different modes, and we compare it to controller techniques of increasing hardware complexity. Our results show that our button-less sphere-based technique is only outperformed by a mode-less controller variant that uses physical buttons and a touchpad. Fourth, to study the intrinsic problem of VR locomotion, we explore two opposing approaches: a continuous and a discrete technique. For the first, we demonstrate a spherical locomotion device supporting two different locomotion paradigms that propel a user's first-person avatar accordingly. We found that a position-control paradigm applied to a sphere mostly outperformed button-supported controller interaction. For discrete locomotion, we evaluate the concept of a spherical world in miniature (SWIM) used for avatar teleportation in a large virtual environment.
Results showed that users subjectively preferred the sphere-based technique over regular controllers and, on average, achieved lower task times and higher accuracy. To conclude the thesis, we discuss our findings, insights, and resulting contributions to our central research questions in order to derive recommendations for designing techniques based on spherical input devices, as well as an outlook on the future development of spherical devices in the mixed reality spectrum.
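    As a concrete illustration of the continuous, position-control locomotion paradigm mentioned above, the following Python sketch maps incremental sphere rotation to avatar displacement on the ground plane, as if the user rolled the avatar along by rolling the sphere. The tracker interface, sphere radius, and axis conventions are illustrative assumptions and are not taken from the thesis.

```python
import numpy as np

# Hypothetical sketch of a "rolling ball" position-control mapping:
# the arc length rolled on the sphere surface becomes avatar displacement.
# Tracker API, radius, and axis conventions are assumptions.

SPHERE_RADIUS_M = 0.15  # assumed radius of the tracked sphere

def locomotion_step(avatar_pos, avatar_yaw, delta_rot_axis, delta_rot_angle):
    """Advance the avatar by the distance the sphere surface rolled.

    avatar_pos      -- np.array([x, z]) position on the ground plane
    avatar_yaw      -- heading in radians
    delta_rot_axis  -- unit rotation axis of the sphere since the last frame
    delta_rot_angle -- rotation angle in radians since the last frame
    """
    # Rolling about the lateral (x) axis maps to forward motion, rolling
    # about the forward (z) axis to strafing; vertical spin is ignored.
    forward = delta_rot_angle * delta_rot_axis[0] * SPHERE_RADIUS_M
    strafe = delta_rot_angle * delta_rot_axis[2] * SPHERE_RADIUS_M
    # Rotate the local (strafe, forward) offset into world space so that
    # "forward" follows the avatar's current heading.
    c, s = np.cos(avatar_yaw), np.sin(avatar_yaw)
    world_offset = np.array([c * strafe + s * forward,
                             c * forward - s * strafe])
    return avatar_pos + world_offset
```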

    A Virtual Testbed for Fish-Tank Virtual Reality: Improving Calibration with a Virtual-in-Virtual Display

    With the development of novel calibration techniques for multimedia projectors and curved projection surfaces, volumetric 3D displays are becoming easier and more affordable to build. The basic requirements include a display shape that defines the volume (e.g. a sphere, cylinder, or cuboid) and a tracking system that provides each user's location for perspective-corrected rendering. When coupled with modern graphics cards, these displays are capable of high-resolution, low-latency, high-frame-rate, and even stereoscopic rendering; however, as many previous studies have shown, every component must be precisely calibrated to achieve a compelling 3D effect. While human perceptual requirements have been studied extensively for head-tracked displays, most studies featured seated users in front of a flat display. It remains unclear whether results from these flat-display studies apply to newer, walk-around displays with enclosed or curved shapes. To investigate these issues, we developed a virtual testbed for volumetric head-tracked displays that can measure the calibration accuracy of the entire system in real time. We used this testbed to investigate visual distortions of prototype curved displays, improve existing calibration techniques, study the importance of stereo to performance and perception, and validate perceptual calibration with novice users. Our experiments show that stereo is important for task performance but requires more accurate calibration, and that novice users can make effective use of perceptual calibration tools. We also propose a novel real-time calibration method that can be used to fine-tune an existing calibration using perceptual feedback. The findings from this work can be used to build better head-tracked volumetric displays with a high degree of 3D realism and intuitive calibration tools for novice users.
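    The perspective-corrected rendering mentioned above amounts to building an off-axis view frustum from the tracked eye position and the physical geometry of the display surface. The sketch below shows the widely used generalized perspective projection for a single planar screen facet; corner positions, clip distances, and names are illustrative assumptions, and a curved display would typically be approximated by several such facets or an additional per-pixel warp. This is a minimal sketch, not the testbed's actual code.

```python
import numpy as np

# Illustrative sketch of perspective-corrected ("fish-tank") rendering for a
# planar screen: an off-axis projection built from the tracked eye position
# and the screen's corner points. Corner coordinates, clip distances, and
# names are assumptions.

def off_axis_projection(eye, pa, pb, pc, near=0.05, far=100.0):
    """eye, pa, pb, pc are 3-vectors in tracking coordinates; pa, pb, pc are
    the screen's lower-left, lower-right, and upper-left corners."""
    vr = pb - pa; vr /= np.linalg.norm(vr)           # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)           # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)  # screen normal

    va, vb, vc = pa - eye, pb - eye, pc - eye        # corners relative to eye
    dist = -np.dot(va, vn)                           # eye-to-screen distance

    left = np.dot(vr, va) * near / dist              # frustum extents at the
    right = np.dot(vr, vb) * near / dist             # near plane
    bottom = np.dot(vu, va) * near / dist
    top = np.dot(vu, vc) * near / dist

    proj = np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0]])
    rot = np.eye(4)                  # rotate world into the screen's basis
    rot[0, :3], rot[1, :3], rot[2, :3] = vr, vu, vn
    trans = np.eye(4)                # translate the eye to the origin
    trans[:3, 3] = -eye
    return proj @ rot @ trans
```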

    Future Directions in Astronomy Visualisation

    Despite the large budgets spent annually on astronomical research equipment such as telescopes, instruments, and supercomputers, the general trend is to analyse and view the resulting datasets using small, two-dimensional displays. We report here on alternative advanced image displays, with an emphasis on displays that we have constructed, including stereoscopic projection, multiple-projector tiled displays, and a digital dome. These displays can provide astronomers with new ways of exploring the terabyte and petabyte datasets that are now regularly being produced from all-sky surveys, high-resolution computer simulations, and Virtual Observatory projects. We also present a summary of the Advanced Image Displays for Astronomy (AIDA) survey, which we conducted from March to May 2005, in order to raise some issues pertinent to the current and future use of advanced image displays.

    Friction surfaces: scaled ray-casting manipulation for interacting with 2D GUIs

    Integrating conventional 2D GUIs with Virtual Environments (VEs) can greatly enhance the possibilities of many VE applications. In this paper we present a variation of the well-known ray-casting technique for fast and accurate selection of 2D widgets on a virtual window immersed in a 3D world. The main idea is to provide a new interaction mode in which hand rotations are scaled down so that the ray is constrained to intersect the active virtual window. This is accomplished by changing the control-display ratio between the orientation of the user's hand and the ray used for selection. Our technique uses a curved representation of the ray, providing visual feedback on the orientation of both the input device and the selection ray. The resulting sensation is that of controlling a flexible ray that bends as it moves over a virtual friction surface defined by the 2D window. We have implemented this technique and evaluated its effectiveness in terms of accuracy and performance. Our experiments in a four-sided CAVE indicate that the proposed technique can increase the speed and accuracy of component selection in 2D GUIs immersed in 3D worlds.
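    To make the control-display scaling concrete, the following minimal sketch damps the hand's rotation relative to the orientation captured when the friction-surface mode is engaged, then intersects the resulting ray with the plane of the virtual window. The CD ratio value, forward axis, and function names are assumptions for illustration; the paper's technique additionally renders the ray as a curve for visual feedback.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Minimal sketch of scaled ray-casting: once the friction-surface mode is
# engaged, hand rotation is damped by a control-display ratio before the
# selection ray is re-cast and intersected with the 2D window's plane.
# The CD ratio, forward axis, and names are assumptions.

CD_RATIO = 0.3  # assumed: 1.0 = plain ray-casting, < 1 damps hand rotation

def scaled_ray(hand_rot, ref_rot, hand_pos):
    """hand_rot, ref_rot are scipy Rotations; ref_rot is the hand orientation
    captured at the moment the friction-surface mode was engaged."""
    delta = hand_rot * ref_rot.inv()             # rotation since engagement
    scaled = R.from_rotvec(delta.as_rotvec() * CD_RATIO) * ref_rot
    direction = scaled.apply([0.0, 0.0, -1.0])   # assumed forward axis
    return np.asarray(hand_pos), direction

def intersect_window(origin, direction, win_point, win_normal):
    """Intersect the scaled ray with the plane of the virtual 2D window."""
    denom = np.dot(direction, win_normal)
    if abs(denom) < 1e-6:
        return None                              # ray parallel to the window
    t = np.dot(np.asarray(win_point) - origin, win_normal) / denom
    return origin + t * direction if t > 0 else None
```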

    A Sketch-based Rapid Modeling Method for Crime Scene Presentation

    The reconstruction of crime scenes plays an important role in digital forensic applications. This article integrates computer graphics, sketch-based retrieval, and virtual reality (VR) techniques to develop a low-cost and rapid 3D crime scene presentation approach, which can be used by investigators to analyze and simulate the criminal process. First, we constructed a collection of 3D models for indoor crime scenes using various popular techniques, including laser scanning, image-based modeling, and geometric modeling. Second, to quickly obtain an object of interest from the 3D model database, a sketch-based retrieval method was proposed. Finally, a rapid modeling system that integrates our database and retrieval algorithm was developed to quickly build a digital crime scene. For practical use, an interactive real-time virtual roaming application was developed in Unity 3D for a low-cost VR head-mounted display (HMD). Practical cases have been implemented to demonstrate the feasibility and usability of our method.
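    As a rough illustration of how such a retrieval step can be organized (the paper's actual descriptor and matching method are not reproduced here), the snippet below ranks database models by cosine similarity between a descriptor of the query sketch and descriptors precomputed from each model; all names and the plain-vector descriptor are hypothetical.

```python
import numpy as np

# Hypothetical sketch of the retrieval step: rank 3D models by comparing a
# descriptor of the query sketch against descriptors precomputed from line
# renderings of each model. The plain-vector descriptor and database layout
# are assumptions, not the paper's method.

def rank_models(sketch_descriptor, model_descriptors, top_k=5):
    """model_descriptors maps model id -> np.array descriptor."""
    q = sketch_descriptor / (np.linalg.norm(sketch_descriptor) + 1e-9)
    scores = {}
    for model_id, descriptor in model_descriptors.items():
        d = descriptor / (np.linalg.norm(descriptor) + 1e-9)
        scores[model_id] = float(np.dot(q, d))   # cosine similarity
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```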

    Display Blocks: a Set of Cubic Displays for Tangible, Multi-Perspective Data Exploration

    This paper details the design and implementation of a new type of display technology. Display Blocks are a response to two major limitations of current displays: dimensional compression and the physical-digital disconnect. Each Display Block consists of six organic light-emitting diode (OLED) screens arranged in a cubic form factor. We explore the possibilities that this type of display holds for data visualization, manipulation, and exploration. To this end, we accompany our design with a set of initial applications that leverage the form factor of the displays. We hope that this work shows the promise of display technologies that use their form factor as a cue to understanding their content.
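    One simple way to drive such a cubic display, shown purely as a hypothetical sketch rather than the paper's renderer, is to render the enclosed data once per face with a virtual camera looking in along that face's outward normal; camera placement and the downstream render call are assumptions.

```python
import numpy as np

# Hypothetical sketch of driving a cubic display: one render pass per face,
# with a virtual camera looking in along that face's outward normal so the
# cube behaves like a window onto the enclosed data volume. Camera distance
# and the downstream render call are placeholders.

FACE_NORMALS = {
    "front": np.array([0, 0, 1]), "back": np.array([0, 0, -1]),
    "left": np.array([-1, 0, 0]), "right": np.array([1, 0, 0]),
    "top": np.array([0, 1, 0]), "bottom": np.array([0, -1, 0]),
}

def face_cameras(cube_center, cube_size, camera_distance=2.0):
    """Yield (face name, camera position, view direction) per OLED face."""
    for name, normal in FACE_NORMALS.items():
        cam_pos = cube_center + normal * (cube_size / 2 + camera_distance)
        yield name, cam_pos, -normal  # look back toward the data volume
```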