209 research outputs found

    Perceptual Requirements for World-Locked Rendering in AR and VR

    Stereoscopic, head-tracked display systems can show users realistic, world-locked virtual objects and environments. However, discrepancies between the rendering pipeline and the physical viewing conditions can lead to perceived instability in the rendered content, resulting in reduced realism and immersion and, potentially, visually induced motion sickness. The requirements for perceptually stable world-locked rendering are unknown, owing to the challenge of constructing a wide-field-of-view, distortion-free display with highly accurate head and eye tracking. In this work we introduce new hardware and software designed to meet these constraints and present a system capable of rendering virtual objects over real-world references without perceivable drift. The platform is used to study acceptable errors in render-camera position for world-locked rendering in augmented- and virtual-reality scenarios, where we find an order-of-magnitude difference in perceptual sensitivity between the two. We conclude by comparing the study results with an analytic model that examines changes to apparent depth and visual heading in response to camera-displacement errors. We identify visual heading as an important consideration for world-locked rendering, alongside depth errors from incorrect disparity.
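    The heading component of such an analytic model can be sketched with simple pinhole geometry: a lateral render-camera displacement shifts the apparent direction of an object by an angle that grows as viewing depth shrinks. This is an illustrative reconstruction, not the authors' exact model, and the function name is an assumption:

    ```python
    import math

    def heading_error_deg(lateral_offset_m, object_depth_m):
        """Angular visual-heading error (degrees) that a lateral render-camera
        displacement induces for an object at a given viewing depth.
        Simple pinhole geometry: error = atan(offset / depth)."""
        return math.degrees(math.atan2(lateral_offset_m, object_depth_m))

    # A 5 mm render-camera error for an object at arm's length (0.5 m)
    # already produces roughly half a degree of heading error:
    err = heading_error_deg(0.005, 0.5)  # ~0.57 degrees
    ```

    The inverse dependence on depth makes plain why near-field content is the stressing case: the same camera error induces a heading error that doubles each time the object's distance is halved.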

    Towards System Agnostic Calibration of Optical See-Through Head-Mounted Displays for Augmented Reality

    This dissertation examines the development and progress of spatial calibration procedures for Optical See-Through (OST) Head-Mounted Display (HMD) devices for visual Augmented Reality (AR) applications. Rapid developments in commercial AR systems have created an explosion of OST device options not only for research and industrial purposes but also for the consumer market. This expansion in hardware availability is matched by a need for intuitive, standardized calibration procedures that can be easily completed by novice users and are readily applicable across the widest range of hardware options. This demand for robust, uniform calibration schemes is the driving motive behind the original contributions offered within this work. A review of prior surveys and a canonical description of AR and OST display developments are provided before narrowing the contextual scope to the research questions evolving within the calibration domain. Both established and state-of-the-art calibration techniques and their general implementations are explored, along with prior user-study assessments and the prevailing evaluation metrics and practices employed within them. The original contributions begin with a user study comparing and contrasting the accuracy and precision of an established manual calibration method against a state-of-the-art semi-automatic technique. This is the first formal evaluation of any non-manual approach, and it provides insight into the current usability limitations of present techniques and the complexities of next-generation methods yet to be solved. The second study investigates the viability of a user-centric approach to OST HMD calibration through a novel adaptation of manual calibration to consumer-level hardware.
Additional contributions describe the development of a complete demonstration application incorporating user-centric methods, a novel strategy for visualizing both calibration results and registration error from the user's perspective, and a robust, intuitive presentation style for binocular manual calibration. The final study further investigates the accuracy differences observed between user-centric and environment-centric methodologies. The dissertation concludes with a summary of the contributions and their impact on existing AR systems and research endeavors, as well as a short look ahead at future extensions and paths that continued calibration research should explore.

    A comprehensive method to design and assess mixed reality simulations

    The scientific literature highlights how Mixed Reality (MR) simulations yield several benefits in healthcare education. Simulation-based training, boosted by MR, offers an exciting and immersive learning experience that helps health professionals acquire knowledge and skills without exposing patients to unnecessary risks. However, high engagement, informational overload, and unfamiliarity with virtual elements can expose students to cognitive overload and acute stress. Two open challenges remain: implementing effective simulation-design strategies able to preserve the psychological safety of learners, and investigating the impacts and effects of simulations. In this context, the present study proposes a method to design a medical simulation and evaluate its effectiveness, with the final aim of achieving the learning outcomes without compromising the students' psychological safety. The method has been applied to the design and development of an MR application that simulates the rachicentesis (lumbar puncture) procedure for diagnostic purposes in adults. The application was tested with twenty 6th-year students of Medicine and Surgery at Università Politecnica delle Marche. Multiple measurement techniques, such as self-reports, physiological indices, and observer ratings of learners' performance and cognitive and emotional states, were implemented to improve the rigour of the study. A user-experience analysis was also carried out to compare two different devices: Vox Gear Plus® and Microsoft Hololens®. As a reference, students also performed the simulation without the MR application. The use of MR resulted in increased stress as measured by physiological parameters, without a large increase in perceived workload. This satisfies the objective of enhancing the realism of the simulation without generating cognitive overload, which favours productive learning.
The user-experience (UX) analysis found greater benefits in involvement, immersion, and realism; however, it also highlighted the technological limitations of the devices, such as view obstruction and loss of depth (Vox Gear Plus) and a narrow field of view (Microsoft Hololens).

    Spatial Interaction for Immersive Mixed-Reality Visualizations

    Growing amounts of data, both in personal and professional settings, have caused an increased interest in data visualization and visual analytics. Especially for inherently three-dimensional data, immersive technologies such as virtual and augmented reality and advanced, natural interaction techniques have been shown to facilitate data analysis. Furthermore, in such use cases, the physical environment often plays an important role, both by directly influencing the data and by serving as context for the analysis. Therefore, there has been a trend to bring data visualization into new, immersive environments and to make use of the physical surroundings, leading to a surge in mixed-reality visualization research. One of the resulting challenges, however, is the design of user interaction for these often complex systems. In my thesis, I address this challenge by investigating interaction for immersive mixed-reality visualizations regarding three core research questions: 1) What are promising types of immersive mixed-reality visualizations, and how can advanced interaction concepts be applied to them? 2) How does spatial interaction benefit these visualizations and how should such interactions be designed? 3) How can spatial interaction in these immersive environments be analyzed and evaluated? To address the first question, I examine how various visualizations such as 3D node-link diagrams and volume visualizations can be adapted for immersive mixed-reality settings and how they stand to benefit from advanced interaction concepts. For the second question, I study how spatial interaction in particular can help to explore data in mixed reality. There, I look into spatial device interaction in comparison to touch input, the use of additional mobile devices as input controllers, and the potential of transparent interaction panels. 
Finally, to address the third question, I present my research on how user interaction in immersive mixed-reality environments can be analyzed directly in the original, real-world locations, and how this can provide new insights. Overall, with my research, I contribute interaction and visualization concepts, software prototypes, and findings from several user studies on how spatial interaction techniques can support the exploration of immersive mixed-reality visualizations.

    Head-mounted display-based application for cognitive training

    Virtual Reality (VR) has seen significant advances in rehabilitation due to the gamification of cognitive activities, which facilitates treatment. Immersive Virtual Reality (IVR), in particular, produces outstanding results thanks to its interactive features. This work introduces a VR application for memory rehabilitation in which users walk through a maze wearing the Oculus Go head-mounted display (HMD). The mechanics of the game require memorizing geometric shapes while the player progresses in one of two modes, autonomous or manual, with two levels of difficulty depending on the number of elements to remember. The application is developed in the Unity 3D game engine, optimizing computational resources to improve processing performance while maintaining adequate quality for the user; the generated data is stored and sent to a remote server. The maze task was assessed with 29 subjects in a controlled environment. The results show a significant correlation between participants' response accuracy in the maze task and in a face-pair test. Thus, the proposed task is suitable for performing memory assessments.

    Anxiety activating virtual environments for investigating social phobias

    Social phobia has become one of the most common manifestations of fear in society, and it is often accompanied by major depression or social disabilities. With the awareness that fear can be aggravated in social situations, virtual-reality researchers and psychologists have investigated the feasibility of virtual-reality systems as psychotherapeutic interventions to combat social phobia. Virtual-reality technology has rapidly improved over the past few years, enabling better interactions. Nevertheless, the field of virtual-reality exposure therapy for social phobia is still in its infancy, and various issues have yet to be resolved or even uncovered. The key concept of virtual-reality exposure therapy in the treatment of social phobia is its characteristic perceptual illusion, the sense of presence, which acts as an anxiety-activating system in place of conventional imaginal or in-vivo exposure techniques. Therefore, in order to provoke a significant level of anxiety in virtual environments, it is very important to understand the impact of perceptual presence factors in virtual-reality exposure therapy. Hence, this research aims to investigate the correlation between anxiety and the components of the virtual environment in a computer-generated social simulation. In doing so, this thesis aims to provide a framework for constructing effective virtual-reality exposure therapy for social phobia care, in which anxiety stimuli can be controlled gradually, as in a conventional clinical approach. This thesis presents a series of experimental studies conducted around a common theme, the function of 3D inhabitants and visual apparatus in an anxiety-activating virtual social simulation (a job interview), with each study pursuing different research objectives.
The experimental results, obtained using a psycho-physiological approach, reveal variation in the distribution of participants' anxiety states across VR conditions. The overall conclusion of this research is that appropriate realism of VR stimuli is essential to sustaining the state of anxiety over the course of VR exposure. High-fidelity virtual environments generally provoke a greater degree of anxiety, but this research also shows that aspects of VR fidelity are related more to individuals' mental representation of the context of the stressful situation than to any particular technology being used.

    Remote Visual Observation of Real Places Through Virtual Reality Headsets

    Virtual Reality has always represented a fascinating yet powerful opportunity that has attracted studies and technology developments, especially since powerful high-resolution, wide-field-of-view VR headsets reached the market. While the great potential of such VR systems is commonly accepted, open issues remain regarding how to design systems and setups capable of fully exploiting the latest hardware advances. The aim of the proposed research is to study and understand how to increase the perceived level of realism and the sense of presence when remotely observing real places through VR headset displays, and hence to produce a set of guidelines that direct system designers in optimizing the display-camera setup for remote visual observation of real places. The outcome of this investigation represents unique knowledge that should benefit future VR headset designs for improved remote-observation systems. To achieve this goal, the thesis presents a systematic investigation of the existing literature and previous research, carried out to identify the most important factors governing realism, depth perception, comfort, and sense of presence in VR headset observation. Once identified, these factors are discussed and assessed through a series of experiments and usability studies based on a predefined set of research questions. More specifically, the roles of familiarity with the observed place, of the characteristics of the environment shown to the viewer, and of the display used for remote observation of the virtual environment are investigated. To gain more insights, two usability studies are proposed with the aim of defining guidelines and best practices.
The main outcomes of the two studies demonstrate that test users experience more realistic observation when natural features, higher-resolution displays, natural illumination, and high image contrast are used in mobile VR. In terms of comfort, simple scene layouts and relaxing environments are ideal for reducing visual fatigue and eye strain. Furthermore, the sense of presence increases when observed environments induce strong emotions, and depth perception improves in VR when monocular cues such as lights and shadows are combined with binocular depth cues. Based on these results, the investigation presents a focused evaluation of the outcomes and introduces an innovative eye-adapted High Dynamic Range (HDR) approach, which the author believes to be a substantial improvement for remote observation when combined with eye-tracked VR headsets. To this end, a third user study compares static HDR and eye-adapted HDR observation in VR, assessing whether the latter improves realism, depth perception, sense of presence, and, in certain cases, even comfort. The results of this last study confirmed the author's expectations, showing that eye-adapted HDR and eye tracking should be used to achieve the best visual performance for remote observation in modern VR systems.
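    The eye-adapted HDR idea can be illustrated with a minimal sketch: the adaptation luminance is estimated from the region around the tracked gaze point, and the whole frame is then tone-mapped relative to it. This uses a Reinhard-style global operator as an assumption for illustration, not the thesis' actual implementation; the function name and parameters are mine:

    ```python
    import numpy as np

    def eye_adapted_tonemap(hdr, gaze_xy, window=64, key=0.18):
        """Illustrative eye-adapted tone mapping: the adaptation level is the
        log-average luminance of a window around the gaze point, and the whole
        HDR frame is compressed relative to that level."""
        x, y = gaze_xy
        h, w = hdr.shape[:2]
        y0, y1 = max(0, y - window), min(h, y + window)
        x0, x1 = max(0, x - window), min(w, x + window)
        # Log-average luminance of the gazed region as the adaptation level
        patch = hdr[y0:y1, x0:x1]
        adaptation = np.exp(np.mean(np.log(patch + 1e-6)))
        scaled = key * hdr / adaptation  # scale frame to the adaptation level
        return scaled / (1.0 + scaled)   # compress into [0, 1) for display
    ```

    Gazing at a bright region raises the adaptation level and darkens the rendered frame, while gazing at a shadow brightens it, mimicking the eye's own light adaptation during remote observation.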

    Display Formats for Smart Glasses to Support Pilots in General Aviation

    This dissertation develops and evaluates various display formats for smart glasses which could provide information to support pilots in general aviation on flights under visual flight rules. The aim of a new display format is to reduce pilot task load and increase pilot situation awareness. Under visual flight rules, pilots apply the see-and-avoid principle; however, monitoring the airspace conflicts with acquiring information from head-down instrumentation. Conventional displays may draw the pilot's attention head-down at the expense of monitoring the scene outside, which can lead to breakdowns in task management. One of the main causes of accidents is human error (84% in general aviation), which is associated with increased workload and a resulting loss of situation awareness. One way to prevent accidents is to reduce workload to an adequate level and to increase situation awareness; projecting supporting information into the head-up area could be one way to do so. The proposed solution is smart glasses, which project the most important information directly into the field of view. This dissertation is the only research work in the field that scientifically investigates the feasibility and utility of display formats on smart glasses for use in general-aviation cockpits. The EPSON Moverio BT-200 smart glasses are selected based on a set of requirements for integration within the Diamond DA 40-180 research flight simulator at the Institute of Flight Systems and Automatic Control. Four display formats are implemented and tested with regard to subjective workload and usability in a preliminary simulator study with N = 7 participants. The results of this preliminary investigation show that the developed Primary Flight Display format has the highest usability, and it is therefore selected for further development.
The Primary Flight Display format is refined based on the user feedback from the preliminary study. A new flight-guidance symbology for lateral guidance, called the Lateral Guidance Line (LGL), is designed and added to the format: a magenta-colored line in the center of the format supports the pilot in maintaining track. The symbology indicates when to initiate a turn and when the turn should be completed in order to minimize deviations from a desired track (e.g. a traffic pattern). In the final evaluation, the LGL format is tested with N = 20 pilots. In addition to subjective usability and workload, the lateral deviations from a given flight path are recorded; spatial awareness is operationalized through eye tracking and a secondary reaction task using visual signals. Pilots fly twice at two different airfields in a balanced order, once with the LGL format on the smart glasses and once with conventional instruments without smart glasses. The effectiveness of the Lateral Guidance Line format is confirmed: the lateral deviations from the target trajectory are significantly lower with the format than with conventional instruments, while task load remains the same. Increased eyes-out time and fewer missed signals on the secondary task demonstrate the format's potential to increase spatial awareness compared to conventional instruments. The subjective suitability of the Lateral Guidance Line format was rated 73 (on a scale of 0 to 100), which corresponds to good subjective usability and is not significantly different from the ratings of the previously implemented prototypes. Overall, the results show that smart glasses have the potential to support pilots in general aviation and potentially to reduce accident rates; only a few hardware challenges remain in the development of this format.
The work draws on recommendations from the feedback of various general-aviation interest groups and points out future research questions.

    On informing the creation of assistive tools in virtual reality for severely visually disabled individuals

    A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy. Virtual Reality (VR) devices have advanced so dramatically in recent years that they can now fully immerse users in experiences tailored to a multitude of needs. This emerging technology has far-reaching potential, yet is primarily confined to the entertainment and gaming market, with limited consideration of disabilities and accessibility. Given this gap, these newer VR devices need to be evaluated for their suitability as accessibility aids, and clear standards for successful disability-oriented VR design need to be defined and promoted to encourage greater inclusivity going forward. To achieve this, a series of ophthalmology-informed tests was created and conducted with 24 participants with severe visual impairments. These tests served as comparative benchmarks to determine the level of visual perception impaired users had while wearing a VR device, compared with natural vision. Findings suggest that, under certain conditions, VR devices can greatly enhance visual-acuity levels when used in place of natural vision or typical vision aids, even without any enhancement made to account for visual impairments. Following the findings and requirements elicited from participants, a prototype VR accessibility text reader and video player were developed, allowing visually disabled persons to customise and configure specialised accessibility features for individual needs. Qualitative usability testing with 11 impaired participants, alongside interviews, fed into an iterative design process for software refinement and informed the creation of a VR accessibility framework for visual disabilities.
User tests reported an overwhelmingly positive response to the tool as a feasible reading and viewing aid, allowing persons who could not engage (or who, due to the difficulty, refused to engage) in reading and viewing material to do so. The outcomes highlight that a VR device paired with the tested software would be an effective and affordable alternative to specialist head gear, which is often expensive and lacking in functionality and adaptability. These findings promote the use and future design of VR devices as accessibility tools and visual aids, and provide a comparative benchmark, device usability guidelines, a design framework for VR accessibility, and the first VR accessibility software for reading and viewing. (Beacon Centre for the Blind & University of Wolverhampton)

    Videos in Context for Telecommunication and Spatial Browsing

    The research presented in this thesis explores the use of videos embedded in panoramic imagery to transmit spatial and temporal information describing remote environments and their dynamics. Virtual environments (VEs) through which users can explore remote locations are rapidly emerging as a popular medium for presence and remote collaboration. However, capturing visual representations of locations for use in VEs is usually a tedious process that requires either manual modelling of the environments or specific hardware. Capturing environment dynamics is not straightforward either, and is usually performed with dedicated tracking hardware. Similarly, browsing large unstructured video collections with available tools is difficult, as the abundance of spatial and temporal information makes them hard to comprehend. On a spectrum between 3D VEs and 2D images, panoramas lie in between: they offer the accessibility of 2D images while preserving the surrounding-environment representation of 3D VEs. For this reason, panoramas are an attractive basis for videoconferencing and browsing tools, as they can relate several videos temporally and spatially. This research explores methods to acquire, fuse, render, and stream data coming from heterogeneous cameras with the help of panoramic imagery. Three distinct but interrelated questions are addressed. First, the thesis considers how spatially localised video can increase the spatial information transmitted during video-mediated communication, and whether this improves the quality of communication. Second, the research asks whether videos in panoramic context can convey the spatial and temporal information of a remote place and the dynamics within it, and whether this improves users' performance in tasks that require spatio-temporal thinking. Finally, the thesis considers whether display type affects reasoning about events within videos in panoramic context.
These research questions were investigated in three experiments covering scenarios common to computer-supported cooperative work and video browsing. To support the investigation, two distinct video-plus-context systems were developed. The first, telecommunication experiment compared our videos-in-context interface with fully panoramic video and conventional webcam video conferencing in an object-placement scenario. The second experiment investigated the impact of videos in panoramic context on the quality of spatio-temporal thinking during localization tasks; to support it, a novel interface to video collections in panoramic context was developed and compared with common video-browsing tools. The final experimental study investigated the impact of display type on reasoning about events, exploring adaptations of our video-collection interface to three display types. The overall conclusion is that videos in panoramic context offer a valid solution for the spatio-temporal exploration of remote locations. Our approach presents a richer visual representation of space and time than standard tools, showing that providing panoramic context to video collections makes spatio-temporal tasks easier. As such, videos in context are a suitable alternative to more difficult, and often expensive, solutions. These findings benefit many applications, including teleconferencing, virtual tourism, and remote assistance.