
    Spherical tangible user interfaces in mixed reality

    The popularity of virtual reality (VR) and augmented reality (AR) has grown rapidly in recent years, both in academia and in commercial applications. This is rooted in technological advances and affordable head-mounted displays (HMDs). Whether in games or professional applications, HMDs allow for immersive audio-visual experiences that transport users to compelling digital worlds or convincingly augment the real world. However, as true to life as these experiences have become in a visual and auditory sense, the question remains how we can model interaction with these virtual environments in an equally natural way. Solutions providing intuitive tangible interaction have the potential to make the mixed reality (MR) spectrum fundamentally more accessible, especially for novice users. Research on tangible user interfaces (TUIs) has pursued this goal by coupling virtual objects to real-world objects, and tangible interaction has been shown to provide significant advantages for numerous use cases. Spherical tangible user interfaces (STUIs) present a special case of these devices, mainly due to their ability to fully embody any spherical virtual content. In general, spherical devices are increasingly transitioning from mere technology demonstrators to usable multi-modal interfaces. In this dissertation, we explore the application of STUIs in MR environments primarily by comparing them to state-of-the-art input techniques in four different contexts. We thus investigate the questions of embodiment, overall user performance, and the ability of STUIs to support complex interaction techniques while relying on their shape alone. First, we examine how spherical devices can embody immersive visualizations. In an initial study, we test the practicality of a tracked sphere embodying three kinds of visualizations. We examine simulated multi-touch interaction on a spherical surface and compare two different sphere sizes to VR controllers.
Results confirmed our prototype's viability and indicate improved pattern recognition as well as advantages for the smaller sphere. Second, to further substantiate VR as a prototyping technology, we demonstrate how a large tangible spherical display can be simulated in VR. We show how VR can fundamentally extend the capabilities of real spherical displays by adding physical rotation to a simulated multi-touch surface. After a first study evaluating the general viability of simulating such a display in VR, our second study revealed the superiority of the rotating spherical display. Third, we present a concept for a spherical input device for tangible AR (TAR). We show how such a device can provide basic object manipulation capabilities using two different modes, and compare it to controller techniques of increasing hardware complexity. Our results show that our button-less sphere-based technique is outperformed only by a mode-less controller variant that uses physical buttons and a touchpad. Fourth, to study the intrinsic problem of VR locomotion, we explore two opposing approaches: a continuous and a discrete technique. For the first, we demonstrate a spherical locomotion device supporting two different locomotion paradigms that propel a user's first-person avatar accordingly. We found that a position control paradigm applied to a sphere mostly outperformed button-supported controller interaction. For discrete locomotion, we evaluate the concept of a spherical world in miniature (SWIM) used for avatar teleportation in a large virtual environment. Results showed that users subjectively preferred the sphere-based technique over regular controllers and, on average, achieved lower task times and higher accuracy.
To conclude the thesis, we discuss our findings, insights, and their contribution to our central research questions, derive recommendations for designing techniques based on spherical input devices, and give an outlook on the future development of spherical devices in the mixed reality spectrum.
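The position control paradigm mentioned above can be illustrated with a minimal sketch: incremental sphere rotation is mapped directly to avatar displacement on the ground plane. All names, the gain, and the radius are illustrative assumptions; the dissertation does not specify an implementation.

```python
# Hypothetical sketch of a position-control locomotion mapping: the
# sphere's rotation about its two horizontal axes is translated directly
# into avatar displacement (arc length = angle * radius).

SPHERE_RADIUS_M = 0.1   # assumed sphere radius in metres
GAIN = 1.0              # assumed displacement gain

def position_control_step(avatar_pos, delta_pitch_rad, delta_roll_rad,
                          radius=SPHERE_RADIUS_M, gain=GAIN):
    """Map one frame of incremental sphere rotation to avatar translation.

    delta_pitch_rad -> forward/backward motion
    delta_roll_rad  -> left/right motion
    """
    x, z = avatar_pos
    dz = gain * delta_pitch_rad * radius   # forward axis
    dx = gain * delta_roll_rad * radius    # lateral axis
    return (x + dx, z + dz)

# Rolling the sphere forward by 0.5 rad moves the avatar 0.05 m forward.
pos = position_control_step((0.0, 0.0), 0.5, 0.0)
```

Under position control the avatar's displacement tracks the sphere's rotation one-to-one, which is what distinguishes it from rate-control schemes where rotation sets a velocity instead.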

    Virtual reality obstacle crossing: adaptation, retention, and transfer to the physical world

    Virtual reality (VR) paradigms are increasingly being used in the movement and exercise sciences with the aim of enhancing motor function and stimulating motor adaptation in healthy and pathological conditions. VR-based locomotor training may be promising for motor skill learning, with transfer of VR skills to the physical world in turn required to benefit functional activities of daily life. This PhD project aims to examine locomotor adaptations to repeated VR obstacle crossing in healthy young adults, transfer to the untrained limb and to the physical world, and retention of the learned skills. To this end, the current thesis comprises three studies using controlled VR obstacle crossing interventions during treadmill walking. In the first and second studies, we investigated adaptation to crossing unexpectedly appearing virtual obstacles, with and without feedback about crossing performance, and its transfer to the untrained leg. In the third study, we investigated transfer of virtual obstacle crossing to physical obstacles of similar size that appeared at the same time point within the gait cycle. We also investigated whether the learned skills were retained in each environment over one week. In all studies, participants walked on a treadmill while wearing a VR headset that represented their body as an avatar via real-time synchronised optical motion capture. Participants had to cross virtual and/or physical obstacles with and without feedback about their crossing performance. Where applicable, feedback was provided based on motion capture immediately after virtual obstacle crossing. Toe clearance, margin of stability, and lower extremity joint angles in the sagittal plane were calculated for the crossing legs to analyse adaptation, transfer, and retention of obstacle crossing performance.
The main outcomes of the first and second studies were that crossing multiple virtual obstacles increased participants’ dynamic stability and led to a nonlinear adaptation of toe clearance that was enhanced by visual feedback about crossing performance. However, independent of the use of feedback, no transfer to the untrained leg was detected. Moreover, despite significant and rapid adaptive changes in locomotor kinematics with repeated VR obstacle crossing, results of the third study revealed limited transfer of the learned skills from virtual to physical obstacles. Lastly, despite full retention over one week in the virtual environment, we found only partial retention when crossing a physical obstacle while walking on the treadmill. In summary, the findings of this PhD project confirm that repeated VR obstacle perturbations can effectively stimulate locomotor skill adaptations. However, these adaptations did not transfer to the untrained limb, irrespective of enhanced awareness and feedback. Moreover, the current data provide evidence that, despite significant adaptive changes in locomotion kinematics with repeated practice of obstacle crossing under VR conditions, transfer to and retention in the physical environment are limited. It may be that perception-action coupling in the virtual environment, and thus sensorimotor coordination, differs from the physical world, potentially inhibiting retained transfer between the two conditions. Accordingly, VR-based locomotor skill training paradigms need to be considered carefully if they are to replace training in the physical world.
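The toe clearance outcome measure used above can be sketched as follows: the minimum vertical distance between the toe marker and the obstacle's top edge while the toe is over the obstacle. The function name, coordinate convention, and values are illustrative assumptions, not the thesis's actual analysis code.

```python
# Illustrative sketch of a toe-clearance computation from motion-capture
# data: minimum vertical gap between the toe marker and the obstacle top
# while the toe is horizontally over the obstacle.

def toe_clearance(toe_trajectory, obstacle_x_range, obstacle_height):
    """toe_trajectory: list of (x, y) toe-marker positions in metres,
    x = direction of travel, y = vertical. Returns the minimum vertical
    clearance over the obstacle, or None if the toe never passes over it.
    """
    x_min, x_max = obstacle_x_range
    clearances = [y - obstacle_height
                  for x, y in toe_trajectory
                  if x_min <= x <= x_max]
    return min(clearances) if clearances else None

# The toe's lowest point over a 10 cm obstacle is at 22 cm height,
# giving roughly 12 cm of clearance.
trajectory = [(0.0, 0.05), (0.45, 0.25), (0.5, 0.22), (0.55, 0.30), (1.0, 0.05)]
clearance = toe_clearance(trajectory, (0.4, 0.6), 0.10)
```

Higher clearance is safer but more effortful, which is why its adaptation over repeated crossings is a natural index of locomotor learning.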

    An Introduction to 3D User Interface Design

    3D user interface design is a critical component of any virtual environment (VE) application. In this paper, we present a broad overview of three-dimensional (3D) interaction and user interfaces. We discuss the effect of common VE hardware devices on user interaction, as well as interaction techniques for generic 3D tasks and the use of traditional two-dimensional interaction styles in 3D environments. We divide most user interaction tasks into three categories: navigation, selection/manipulation, and system control. Throughout the paper, our focus is on presenting not only the available techniques but also practical guidelines for 3D interaction design, along with widely held myths. Finally, we briefly discuss two approaches to 3D interaction design and some example applications with complex 3D interaction requirements. We also present an annotated online bibliography as a reference companion to this article.

    Harmonize: a shared environment for extended immersive entertainment

    Virtual reality (VR) and augmented reality (AR) applications are now widespread. Moreover, recent technological innovations have led to the diffusion of commercial head-mounted displays (HMDs) for immersive VR: users can enjoy entertainment activities that fill their visual field, experiencing the sensation of physical presence in these virtual immersive environments (IEs). Even though AR and VR are mostly used separately, they can be effectively combined to provide a multi-user shared environment (SE), where two or more users perform specific tasks cooperatively or competitively, enabling a wider set of interactions and use cases than immersive VR alone. However, due to the differences between the two technologies, it is difficult to develop SEs offering a similar experience to both AR and VR users. This paper presents Harmonize, a novel framework for deploying SE-based applications with a comparable experience for both AR and VR users. Moreover, the framework is hardware-independent and has been designed to be as extensible to novel hardware as possible. An immersive game was designed to test and evaluate the validity of the proposed framework. The assessment of the system through the System Usability Scale (SUS) questionnaire and the Game Experience Questionnaire (GEQ) shows a positive evaluation.

    Multimodal teaching, learning and training in virtual reality: a review and case study

    Digital learning research increasingly encompasses an array of different meanings, spaces, processes, and teaching strategies for discerning a global perspective on constructing the student learning experience. Multimodality is an emergent phenomenon that may influence how digital learning is designed, especially when employed in highly interactive and immersive learning environments such as Virtual Reality (VR). VR environments may aid students' efforts to be active learners by consciously attending to, and reflecting on, critique, leveraging reflexivity and the novel meaning-making most likely to lead to conceptual change. This paper employs eleven industrial case studies to highlight the application of multimodal VR-based teaching and training as a pedagogically rich strategy that may be designed, mapped, and visualized through distinct VR design elements and features. The outcomes of the use cases help establish in-VR multimodal teaching as an emerging discourse that couples system design-based paradigms with embodied, situated, and reflective praxis in spatial, emotional, and temporal VR learning environments.

    Modulating the performance of VR navigation tasks using different methods of presenting visual information

    Spatial navigation is an essential ability in our daily lives that we use to move through different locations. In Virtual Reality (VR), the environments that users navigate may be large and similar to real-world places. It is usually desirable to guide users in order to prevent them from getting lost and to make it easier for them to reach a goal or discover important spots in the environment. However, doing so in a way that is neither intrusive, breaking immersion and the sense of presence, nor too subtle to notice, and therefore useless, can be a challenge. In this work, we conducted an experiment in which we adapted a probabilistic learning paradigm, the Weather Prediction task, to spatial navigation in VR. Subjects navigated one of two versions of procedurally generated T-junction mazes in Virtual Reality. In one version, the environment contained visual cues in the form of street signs whose presence predicted the correct turning direction. In the other version, the cues were present but not predictive. Results showed that when subjects navigated the mazes with predictive cues they made fewer mistakes, indicating that the cues helped them navigate the environments. A comparison with previous Neuroscience literature revealed that the strategies subjects used to solve the task differed from those in the original 2D experiment. This work is intended as a basis for further improving spatial navigation in VR with more immersive and implicit methods, and as another example of how the Cognitive Neuroscience and Virtual Reality research fields can greatly benefit each other.
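The predictive vs. non-predictive cue manipulation described above can be sketched as trial generation for the T-junction mazes. This is an illustrative reconstruction, not the authors' code; the function name, cue labels, and the 0.8 cue validity are assumptions.

```python
# Hypothetical sketch of generating T-junction trials for the two maze
# versions: in the predictive condition a street sign agrees with the
# correct turn with fixed probability; in the control condition the
# sign carries no information about the correct turn.

import random

def make_trials(n, predictive, cue_validity=0.8, seed=0):
    """Return a list of (cue, correct_turn) pairs for n T-junctions."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n):
        correct = rng.choice(["left", "right"])
        if predictive and rng.random() < cue_validity:
            cue = f"sign_{correct}"                        # informative cue
        else:
            cue = f"sign_{rng.choice(['left', 'right'])}"  # uninformative cue
        trials.append((cue, correct))
    return trials

trials = make_trials(100, predictive=True)
# Fraction of junctions where the sign agrees with the correct turn;
# well above chance in the predictive condition.
agreement = sum(cue.endswith(turn) for cue, turn in trials) / len(trials)
```

Probabilistic (rather than perfect) cue validity is what makes this a Weather Prediction-style task: subjects must learn the cue-outcome statistics gradually rather than memorising a rule.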

    Multimodality in VR: A survey

    Virtual reality (VR) is rapidly growing, with the potential to change the way we create and consume content. In VR, users integrate the multimodal sensory information they receive to create a unified perception of the virtual world. In this survey, we review the body of work addressing multimodality in VR and its role and benefits in user experience, together with different applications that leverage multimodality in many disciplines. These works encompass several fields of research and demonstrate that multimodality plays a fundamental role in VR: enhancing the experience, improving overall performance, and yielding unprecedented abilities in skill and knowledge transfer.

    Expanding the usable workspace of a haptic device by placing it on a moving base

    The goal of this research is to expand the reachable workspace of a haptic device when used in a projection-screen virtual environment. The proposed method supplements the haptic device with a redundant degree of freedom that provides motion of the base. The key research challenge is to develop controls for the mobile base that keep the haptic end-effector in the usable haptic workspace at all times. An experimental setup consisting of an Omni haptic device and an XY motorized table was used in the development of the control algorithms. Tests demonstrated that the force felt by the user when touching a virtual wall remains constant even while the mobile base is moving to re-center the haptic device in the usable haptic workspace.
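The re-centering idea above can be sketched with a simple proportional controller: each control step moves the XY base a fraction of the way toward the end-effector's world position, so the device's local workspace stays centred on the user's hand. The function name and gain are assumptions; the paper's actual control law is not specified here.

```python
# Minimal sketch (illustrative, not the paper's controller) of driving a
# mobile XY base toward the haptic end-effector so the end-effector stays
# near the centre of the device's local workspace.

def recenter_base(base_pos, effector_world_pos, gain=0.5):
    """One proportional control step: move the base by `gain` times the
    offset between the end-effector (world frame) and the base centre."""
    bx, by = base_pos
    ex, ey = effector_world_pos
    return (bx + gain * (ex - bx), by + gain * (ey - by))

# Repeated steps converge the base onto the end-effector's position.
base = (0.0, 0.0)
for _ in range(10):
    base = recenter_base(base, (0.2, -0.1))
```

In a real system the rendered wall force would be computed in world coordinates, so the user feels a constant force even while the base moves underneath the device, matching the behaviour reported above.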

    Improving spatial orientation in virtual reality with leaning-based interfaces

    Advancements in technology have made Virtual Reality (VR) increasingly portable, affordable, and accessible to a broad audience. However, large-scale VR locomotion still faces major challenges in the form of spatial disorientation and motion sickness. While spatial updating is automatic and even obligatory in real-world walking, using VR controllers to travel can cause disorientation. This dissertation presents two experiments that explore ways of improving spatial updating and spatial orientation in VR locomotion while minimizing cybersickness. In the first study, we compared a hand-held controller with HeadJoystick, a leaning-based interface, in a 3D navigational search task. The results showed that the leaning-based interface helped participants spatially update more effectively than the controller. In the second study, we designed a "HyperJump" locomotion paradigm that allows users to travel faster while limiting optical flow. Having no optical flow at all (as in traditional teleport paradigms) has been shown to reduce cybersickness, but can also cause disorientation. By interlacing continuous locomotion with teleportation, we showed that users can travel faster without compromising spatial updating.
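The interlacing described above can be sketched as a per-frame update that combines slow continuous motion (which produces the optical flow the user sees) with periodic instantaneous jumps (which add distance without optical flow). All names, speeds, and intervals are illustrative assumptions, not the dissertation's parameters.

```python
# Hypothetical sketch of a HyperJump-style update: continuous steering
# interlaced with short forward teleports, so average travel speed rises
# without a matching increase in optical flow.

def hyperjump_step(pos, heading_dir, dt, time_since_jump,
                   walk_speed=1.4, jump_distance=5.0, jump_interval=1.0):
    """Advance position by continuous motion plus a periodic teleport.

    pos, heading_dir: 2D tuples; heading_dir is a unit vector.
    Returns (new_pos, new_time_since_jump)."""
    hx, hy = heading_dir
    x, y = pos
    # Continuous component: slow motion the user actually sees as flow.
    x += hx * walk_speed * dt
    y += hy * walk_speed * dt
    time_since_jump += dt
    # Discrete component: instantaneous jump contributing no optical flow.
    if time_since_jump >= jump_interval:
        x += hx * jump_distance
        y += hy * jump_distance
        time_since_jump = 0.0
    return (x, y), time_since_jump

# Simulate 2 s of travel along +x at 4 Hz: two jumps plus walking.
pos, t = (0.0, 0.0), 0.0
for _ in range(8):
    pos, t = hyperjump_step(pos, (1.0, 0.0), 0.25, t)
```

Because the continuous component keeps running between jumps, the user retains some self-motion cues for spatial updating, while the jumps carry most of the distance, which is the trade-off the second study evaluates.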