149 research outputs found

    Software techniques for improving head mounted displays to create comfortable user experiences in virtual reality

    Head Mounted Displays (HMDs) allow users to experience Virtual Reality (VR) with a great level of immersion. Advancements in hardware technologies have led to a reduction in the cost of producing good-quality VR HMDs, bringing them out of research labs and into consumer markets. However, the current generation of HMDs suffers from a few fundamental problems that can deter their widespread adoption. For this thesis, we explored two techniques to overcome some of the challenges of experiencing VR when using HMDs. When experiencing VR with an HMD strapped to the head, even simple physical tasks like drinking a beverage can be difficult and awkward. We explored mixed reality renderings that selectively incorporate the physical world into the virtual world for interactions with physical objects. We conducted a user study comparing four rendering techniques that balance immersion in the virtual world with ease of interaction with the physical world. Users of VR systems often experience vection, the perception of self-motion in the absence of any physical movement. While vection helps to improve presence in VR, it often leads to a form of motion sickness called cybersickness. Prior work has found that changing vection (a change in perceived speed or direction of motion) causes more severe cybersickness than steady vection (walking at a constant speed or in a constant direction). Based on this idea, we tried to reduce cybersickness caused by character movements in a First Person Shooter (FPS) game in VR. We propose Rotation Blurring (RB), uniformly blurring the screen during rotational movements to reduce cybersickness. We performed a user study to evaluate the impact of RB on cybersickness and found that RB led to an overall reduction in participants' sickness levels and delayed its onset. Participants who experienced acute levels of cybersickness benefited significantly from this technique.
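The abstract does not specify how Rotation Blurring is parameterized; a minimal sketch of the core idea, assuming a uniform blur whose strength ramps linearly with the camera's angular speed (the threshold and saturation values below are illustrative, not taken from the thesis):

```python
def rotation_blur_strength(angular_speed_deg_s,
                           threshold_deg_s=30.0,
                           max_speed_deg_s=180.0,
                           max_blur=1.0):
    """Map the camera's angular speed to a uniform screen-blur
    strength in [0, max_blur].

    Below `threshold_deg_s` no blur is applied; above it the blur
    ramps linearly and saturates at `max_speed_deg_s`. All parameter
    names and values are illustrative assumptions.
    """
    if angular_speed_deg_s <= threshold_deg_s:
        return 0.0
    t = (angular_speed_deg_s - threshold_deg_s) / (max_speed_deg_s - threshold_deg_s)
    return max_blur * min(t, 1.0)
```

In an engine, the returned strength would drive a full-screen blur pass each frame while the viewpoint rotates, leaving the image sharp during steady vection.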

    Spherical tangible user interfaces in mixed reality

    The popularity of virtual reality (VR) and augmented reality (AR) has grown rapidly in recent years, both in academia and in commercial applications. This is rooted in technological advances and affordable head-mounted displays (HMDs). Whether in games or professional applications, HMDs allow for immersive audio-visual experiences that transport users to compelling digital worlds or convincingly augment the real world. However, as true to life as these experiences have become in a visual and auditory sense, the question remains how we can model interaction with these virtual environments in an equally natural way. Solutions providing intuitive tangible interaction would have the potential to make the mixed reality (MR) spectrum fundamentally more accessible, especially for novice users. Research on tangible user interfaces (TUIs) has pursued this goal by coupling virtual to real-world objects, and tangible interaction has been shown to provide significant advantages for numerous use cases. Spherical tangible user interfaces (STUIs) present a special case of these devices, mainly due to their ability to fully embody any spherical virtual content. In general, spherical devices are increasingly transitioning from mere technology demonstrators to usable multi-modal interfaces. For this dissertation, we explore the application of STUIs in MR environments, primarily by comparing them to state-of-the-art input techniques in four different contexts, investigating the questions of embodiment, overall user performance, and the ability of STUIs relying on their shape alone to support complex interaction techniques. First, we examine how spherical devices can embody immersive visualizations. In an initial study, we test the practicality of a tracked sphere embodying three kinds of visualizations. We examine simulated multi-touch interaction on a spherical surface and compare two different sphere sizes to VR controllers.
Results confirmed our prototype's viability and indicate improved pattern recognition and advantages for the smaller sphere. Second, to further substantiate VR as a prototyping technology, we demonstrate how a large tangible spherical display can be simulated in VR. We show how VR can fundamentally extend the capabilities of real spherical displays by adding physical rotation to a simulated multi-touch surface. After a first study evaluating the general viability of simulating such a display in VR, our second study revealed the superiority of a rotating spherical display. Third, we present a concept for a spherical input device for tangible AR (TAR). We show how such a device can provide basic object manipulation capabilities utilizing two different modes and compare it to controller techniques of increasing hardware complexity. Our results show that our button-less sphere-based technique is only outperformed by a mode-less controller variant that uses physical buttons and a touchpad. Fourth, to study the intrinsic problem of VR locomotion, we explore two opposing approaches: a continuous and a discrete technique. For the first, we demonstrate a spherical locomotion device supporting two different locomotion paradigms that propel a user's first-person avatar accordingly. We found that a position control paradigm applied to a sphere mostly outperformed button-supported controller interaction. For discrete locomotion, we evaluate the concept of a spherical world in miniature (SWIM) used for avatar teleportation in a large virtual environment. Results showed that users subjectively preferred the sphere-based technique over regular controllers and, on average, achieved lower task times and higher accuracy.
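The abstract leaves open how a point selected on the miniature sphere maps to a teleport target; one plausible sketch, assuming a unit sphere and an equirectangular projection onto a rectangular environment (both are illustrative assumptions, not the thesis's actual SWIM mapping):

```python
import math

def sphere_point_to_world(contact, world_width, world_height):
    """Map a contact point on a unit miniature sphere (SWIM-style) to
    a teleport target in a rectangular virtual environment.

    `contact` is an (x, y, z) point on the sphere surface; longitude
    selects the world's horizontal axis and latitude the vertical one.
    The equirectangular mapping is an assumption for illustration.
    """
    x, y, z = contact
    r = math.sqrt(x * x + y * y + z * z)
    lon = math.atan2(z, x)                 # [-pi, pi]
    lat = math.asin(y / r)                 # [-pi/2, pi/2]
    u = (lon + math.pi) / (2 * math.pi)    # normalized to [0, 1]
    v = (lat + math.pi / 2) / math.pi      # normalized to [0, 1]
    return (u * world_width, v * world_height)
```

Rotating the physical sphere before selecting would simply rotate `contact` first, which is what makes a graspable sphere a natural carrier for this kind of world-in-miniature.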
To conclude the thesis, we discuss our findings, insights, and subsequent contributions to our central research questions to derive recommendations for designing techniques based on spherical input devices and an outlook on the future development of spherical devices in the mixed reality spectrum.

    Clinical Decision Support Systems with Game-based Environments: Monitoring Symptoms of Parkinson’s Disease with Exergames

    Parkinson’s Disease (PD) is a disorder caused by progressive neuronal degeneration, resulting in several physical and cognitive symptoms that worsen with time. Like many other chronic diseases, it requires constant monitoring to make medication and therapeutic adjustments. This is due to the significant variability in PD symptomatology and progression between patients. At the moment, this monitoring requires substantial participation from caregivers and numerous clinic visits. Personal diaries and questionnaires are used as data sources for medication and therapeutic adjustments. The subjectivity of these data sources leads to suboptimal clinical decisions; therefore, more objective data sources are required to better monitor the progress of individual PD patients. Clinical decision support systems are a potential contribution towards more objective monitoring of PD. These systems employ sensors and classification techniques to provide caregivers with objective information for their decision-making. This leads to more objective assessments of patient improvement or deterioration, resulting in better-adjusted medication and therapeutic plans. However, the need to encourage patients to actively and regularly provide data for remote monitoring remains a significant challenge. To address this challenge, the goal of this thesis is to combine clinical decision support systems with game-based environments. More specifically, serious games in the form of exergames, active video games that involve physical exercise, shall be used to deliver objective data for PD monitoring and therapy. Exergames increase engagement while combining physical and cognitive tasks. This combination, known as dual-tasking, has been proven to improve rehabilitation outcomes in PD: recent randomized clinical trials on exergame-based rehabilitation in PD show improvements in clinical outcomes that are equal or superior to those of traditional rehabilitation.
In this thesis, we present an exergame-based clinical decision support system model to monitor symptoms of PD. This model provides both objective information on PD symptoms and an engaging environment for the patients. The model is elaborated, prototypically implemented, and validated in the context of two of the most prominent symptoms of PD: (1) balance and gait, as well as (2) hand tremor and slowness of movement (bradykinesia). While balance and gait impairments increase the risk of falling, hand tremors and bradykinesia affect hand dexterity. We employ Wii Balance Boards and Leap Motion sensors, and digitalize aspects of current clinical standards used to assess PD symptoms. In addition, we present two dual-tasking exergames: PDDanceCity for balance and gait, and PDPuzzleTable for tremor and bradykinesia. We evaluate the capability of our system to assess the risk of falling and the severity of tremor in comparison with clinical standards. We also explore the statistical significance and effect size of the data we collect from PD patients and healthy controls. We demonstrate that the presented approach can predict an increased risk of falling and estimate tremor severity. Also, the target population shows a good acceptance of PDDanceCity and PDPuzzleTable. In summary, our results indicate a clear feasibility of implementing this system for PD. Nevertheless, long-term randomized clinical trials are required to evaluate the potential of PDDanceCity and PDPuzzleTable for physical and cognitive rehabilitation effects.
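Balance assessments built on the Wii Balance Board typically derive sway measures from the center of pressure (COP), computed as a weighted combination of the board's four load cells; a minimal sketch of that standard computation (the board dimensions below are commonly cited sensor spacings, and the exact metrics used in the thesis are not specified, so treat both as assumptions):

```python
def center_of_pressure(tl, tr, bl, br, width=433.0, depth=238.0):
    """Estimate the center of pressure (COP) from the four load
    sensors of a Wii Balance Board (top-left, top-right, bottom-left,
    bottom-right, each in kg).

    Returns (x, y) in mm relative to the board's center: positive x
    means weight shifted right, positive y means shifted forward.
    """
    total = tl + tr + bl + br
    if total == 0:
        return (0.0, 0.0)
    x = (width / 2.0) * ((tr + br) - (tl + bl)) / total
    y = (depth / 2.0) * ((tl + tr) - (bl + br)) / total
    return (x, y)
```

Sampling this COP over time yields the usual sway descriptors (path length, sway area, velocity) from which fall-risk indicators are commonly derived.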

    Eyes on teleporting: comparing locomotion techniques in Virtual Reality with respect to presence, sickness and spatial orientation

    This work compares three locomotion techniques for an immersive VR environment: two different types of teleporting (with and without animation) and a manual (joystick-based) technique. We tested the effect of these techniques on visual motion sickness, spatial awareness, presence, subjective pleasantness, and perceived difficulty of operating the navigation. We collected eye tracking and head and body orientation data to investigate the relationships between motion, vection, and sickness. Our study confirms some results already discussed in the literature regarding the reduced invasiveness and the high usability of instant teleport, while increasing the evidence against the hypothesis that this technique reduces spatial awareness. We reinforce the evidence about the issues of extending teleporting with animation. Furthermore, we offer some new evidence that the manual technique benefits the user experience, and that the sickness felt in this condition correlates with head movements. The findings of this study contribute to the ongoing debate on the development of guidelines on navigation interfaces in specific VR environments.
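The two teleport variants compared above differ only in whether the viewpoint jumps to the target or is moved there continuously; a minimal sketch of that distinction (the linear interpolation and parameter names are illustrative, not the study's implementation):

```python
def teleport_positions(start, end, animated=False, steps=10):
    """Return the sequence of viewpoint positions for a teleport from
    `start` to `end` (3-tuples).

    Instant teleport jumps directly to the target in one frame; the
    animated variant interpolates linearly over `steps` frames,
    producing the optic flow (and vection) that instant teleport avoids.
    """
    if not animated:
        return [end]
    return [tuple(s + (e - s) * i / steps for s, e in zip(start, end))
            for i in range(1, steps + 1)]
```

The single-frame jump is precisely why instant teleport is less sickening: with no intermediate positions, there is no visual self-motion for the vestibular system to contradict.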

    The Effects of Primary and Secondary Task Workloads on Cybersickness in Immersive Virtual Active Exploration Experiences

    Virtual reality (VR) technology promises to transform humanity. The technology enables users to explore and interact with computer-generated environments that can be simulated to approximate or deviate from reality. This creates an endless number of ways to propitiously apply the technology in our lives. It follows that large technological conglomerates are pushing for the widespread adoption of VR, financing the creation of the Metaverse, a hypothetical representation of the next iteration of the internet. Even with VR technology's continuous growth, its widespread adoption has yet to materialize. This can largely be attributed to an affliction called cybersickness, an analog of motion sickness, which often manifests in users as an undesirable side effect of VR experiences, inhibiting sustained usage. This makes it highly important to study factors related to the malady. The tasks performed in a simulated environment provide context, purpose, and meaning to the experience. Active exploration experiences afford users control over their motion, primarily allowing them to navigate through an environment. While navigating, users may also have to engage in secondary tasks that can be distracting. These navigation and distraction tasks differ in terms of the source and magnitude of the attentional demands involved, potentially influencing how strongly a simulation induces cybersickness. Given the sparse literature in this area, this dissertation sets out to investigate how the interplay between these factors impacts the onset and severity of sickness, thereby contributing to the knowledge base on how the attentional demands associated with the tasks performed during navigation affect cybersickness in virtual reality.

    The Perception/Action loop: A Study on the Bandwidth of Human Perception and on Natural Human Computer Interaction for Immersive Virtual Reality Applications

    Virtual Reality (VR) is an innovative technology which, in the last decade, has seen widespread success, mainly thanks to the release of low-cost devices, which have contributed to the diversification of its domains of application. In particular, the current work mainly focuses on the general mechanisms underlying the perception/action loop in VR, in order to improve the design and implementation of applications for training and simulation in immersive VR, especially in the context of Industry 4.0 and the medical field. On the one hand, we want to understand how humans gather and process all the information presented in a virtual environment, through the evaluation of the visual system's bandwidth. On the other hand, since the interface has to be a sort of transparent layer allowing trainees to accomplish a task without directing any cognitive effort to the interaction itself, we compare two state-of-the-art solutions for selection and manipulation tasks: a touch-based one, the HTC Vive controllers, and a touchless, vision-based one, the Leap Motion. To this aim, we have developed ad hoc frameworks and methodologies. The software frameworks consist of VR scenarios where the experimenter can choose the modality of interaction and the headset to be used and set experimental parameters, guaranteeing repeatable experiments and controlled conditions. The methodology includes the evaluation of performance, user experience, and preferences, considering both quantitative and qualitative metrics derived from the collection and analysis of heterogeneous data, such as physiological and inertial sensor measurements, timings, and self-assessment questionnaires. In general, VR has been found to be a powerful tool able to simulate specific situations in a realistic and involving way, eliciting the user's sense of presence without causing severe cybersickness, at least when interaction is limited to the peripersonal and near-action space.
Moreover, when designing a VR application, it is possible to manipulate its features in order to trigger, or avoid triggering, specific emotions and voluntarily create potentially stressful or relaxing situations. Considering the ability of trainees to perceive and process information presented in an immersive virtual environment, results show that, when people are given enough time to build a gist of the scene, they are able to recognize a change with 0.75 accuracy when up to 8 elements are in the scene. For interaction, instead, when selection and manipulation tasks do not require fine movements, the controllers and the Leap Motion ensure comparable performance; whereas, when tasks are complex, the first solution turns out to be more stable and efficient, also because the visual and audio feedback, provided as a substitute for haptic feedback, does not substantially improve performance in the touchless case.

    Identifying strategies to mitigate cybersickness in virtual reality induced by flying with an interactive travel interface

    Virtual Reality (VR) is a versatile and evolving technology for simulating different experiences. As this technology has improved in hardware, accessibility of development, and availability of applications, interest in it has surged. However, despite these improvements, the problem of Cybersickness (CS) remains, causing a variety of uncomfortable symptoms in users. Hence, there is a need for guidelines that developers can use to create experiences that mitigate these effects. With an incomplete understanding of CS and techniques yet to be tried, this thesis seeks to identify new strategies that mitigate CS. In the literature, the predominant theories attribute CS, or closely related sicknesses, to the body rejecting inconsistencies between senses and to the body failing to adapt to conflicts or new dynamics in an experience. A variety of user, hardware, and software factors have also been reported to affect it. To measure the extent of CS, the Simulator Sickness Questionnaire (SSQ) is the most commonly used tool. Some physiological responses that can be measured in real time have also been associated with CS. Three hypotheses for mitigation strategies were devised and tested in an experiment. This involved, as a control condition, a physical travel interface for flying through a Virtual Environment (VE) populated with models. On top of this, three manipulation conditions, referred to as Gaze-Tracking Vignette (GV), Personal Embodiment (PE), and Fans and Vibration (FV), could be individually applied. The experiment used a between-subjects design, with participants randomly allocated to four groups. Overall, 37 participants did the experiment, with Heart Rate (HR), eye-tracking data, and flight data recorded. Post-exposure, they also filled out a survey that included the SSQ. To analyse the data, statistical tests and regression models were used.
These found significant evidence that a vignette that changes intensity with speed and whose scope follows the eye-gaze direction made CS worse. The same result was found for adding personal embodiment with hand tracking. Evidence from the SSQ also indicated that directional fans with floor vibration made no difference; however, an overall lowering of HR for this condition suggested that it might help, though this could be due to other factors. Additionally, comments from participants identified that many experienced symptoms consistent with CS, with dizziness the most common, along with some issues with the usability of the travel interface.
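The GV condition couples vignette strength to flight speed and vignette position to gaze; a minimal sketch of that coupling, assuming a linear ramp and normalized screen coordinates (all parameter names and values below are illustrative, not taken from the thesis):

```python
def vignette_scope(speed, gaze_x, gaze_y,
                   min_scope=1.0, max_scope=0.4,
                   max_speed=10.0):
    """Compute a gaze-centered vignette: the visible scope shrinks
    as flight speed rises, and the vignette center follows the
    eye-gaze point (gaze_x, gaze_y in normalized [0, 1] coordinates).

    Returns (center_x, center_y, scope) where scope is the fraction
    of the screen left unoccluded, ramping from min_scope at rest
    down to max_scope at max_speed.
    """
    t = min(max(speed / max_speed, 0.0), 1.0)
    scope = min_scope + (max_scope - min_scope) * t
    return (gaze_x, gaze_y, scope)
```

Centering the occlusion on gaze rather than on the screen center is what distinguishes GV from a conventional fixed vignette, and the study's finding that it worsened CS suggests the moving occlusion boundary itself may be the problem.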
