421 research outputs found

    Immersive Visualization in Biomedical Computational Fluid Dynamics and Didactic Teaching and Learning

    Get PDF
    Virtual reality (VR) can stimulate active learning, critical thinking, decision making and improved performance. It requires a medium to show virtual content, which is called a virtual environment (VE). The MARquette Visualization Lab (MARVL) is an example of a VE. Robust processes and workflows that allow for the creation of content for use within MARVL further increase the user base for this valuable resource. A workflow was created to display biomedical computational fluid dynamics (CFD) and complementary data in a wide range of VEs. This allows a researcher to study the simulation in its natural three-dimensional (3D) morphology. In addition, it is an exciting way to extract more information from CFD results by taking advantage of improved depth cues, a larger display canvas, custom interactivity, and an immersive approach that surrounds the researcher. The CFD-to-VR workflow was designed to be basic enough for a novice user. It is also used as a tool to foster collaboration between engineers and clinicians. The workflow aimed to support results from common CFD software packages and across clinical research areas. ParaView, Blender and Unity were used in the workflow to take standard CFD files and process them for viewing in VR. Designated scripts were written to automate the steps implemented in each software package. The workflow was successfully completed across multiple biomedical vessels, scales and applications, including the aorta with application to congenital cardiovascular disease, the Circle of Willis with respect to cerebral aneurysms, and the airway for surgical treatment planning. The workflow was completed by novice users in approximately an hour. Bringing VR further into didactic teaching within academia allows students to be fully immersed in their respective subject matter, thereby increasing the students' sense of presence, understanding and enthusiasm.
    MARVL is a space for collaborative learning that also offers an immersive, virtual experience. A workflow was created to view PowerPoint presentations in 3D using MARVL. The resulting Immersive PowerPoint workflow uses PowerPoint, Unity and other open-source software packages to display the presentations in 3D. The Immersive PowerPoint workflow can be completed in under thirty minutes.
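As a rough illustration of the final hand-off in such a pipeline, the sketch below writes a surface mesh to a Wavefront OBJ file, a format Unity imports directly. The triangle data here is a hypothetical stand-in for geometry that a tool such as ParaView would extract from a CFD result; the function name and file name are illustrative, not taken from the workflow's scripts.

```python
# Hypothetical sketch: write a CFD surface mesh (vertices + triangles) to a
# Wavefront OBJ file so it can be imported into Unity for VR viewing.

def write_obj(path, vertices, faces):
    """vertices: list of (x, y, z); faces: list of (i, j, k), 0-based."""
    lines = [f"v {x} {y} {z}" for x, y, z in vertices]
    # OBJ face indices are 1-based, hence the + 1
    lines += [f"f {i + 1} {j + 1} {k + 1}" for i, j, k in faces]
    with open(path, "w") as fh:
        fh.write("\n".join(lines) + "\n")

# Tiny example: a single triangle standing in for an extracted vessel surface
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
tris = [(0, 1, 2)]
write_obj("vessel_surface.obj", verts, tris)
```

In practice the vertex and face arrays would come from ParaView's surface extraction, and Blender/Unity would handle materials and scene placement.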

    Heads Up! Supporting Maritime Navigation using Augmented Reality

    Get PDF
    Augmented Reality (AR) is a technology that shows potential for the improvement of maritime safety. Today, the ship bridge suffers from a lack of standardization and integration. Head-Mounted Displays (HMDs) may alleviate these challenges by showing information when relevant and enhancing operator mobility. Microsoft HoloLens 2 (HL2) is such an HMD. Prior research shows the potential of HMDs in the Maritime AR domain (Rowen et al., 2019). However, limited research has been conducted on the design of AR User Interfaces (UIs) for maritime applications leveraging HMDs. As a result, no framework exists to test new UI designs in the real world, which is necessary due to the many variables that cannot be accurately modelled in a lab setting. This led to the research questions (RQs) 1. What makes an effective head-mounted AR UI for maritime navigation? (RQ1); and 2. How can HL2 be used as a ship bridge system? (RQ2). A Research through Design (RtD) process is detailed in which a UI design and functional prototype were developed in collaboration with end-users. The prototype, named Sjør, implements the aforementioned interface, provides a framework for in-context UI testing and can be viewed as the next step towards standardizing AR UIs for the maritime industry. The design and development process led to three contributions to the Maritime AR domain. Firstly, a framework for the visualization of location-based data about points of interest on predefined canvases co-located in the real world was developed (Technology Readiness Level (TRL) 6), which runs on the HL2. This first contribution is defined in Section 4 and provides an answer to RQ2. Secondly, using this framework, an interface design (including interactions) is developed in collaboration with end-users and proposed as an answer to RQ1. This process is described in Section 5. The third contribution is a research agenda which provides insights into how contemporary and future research can leverage the developed framework.
    Section 7 discloses this research agenda.
    Master's Thesis in Interaction and Media Design (MIX350MASV-MI)

    Providing 3D video services: the challenge from 2D to 3DTV quality of experience

    Get PDF
    Recently, three-dimensional (3D) video has decisively burst onto the entertainment industry scene, and has arrived in households even before the standardization process has been completed. 3D television (3DTV) adoption and deployment can be seen as a major leap in television history, similar to previous transitions from black and white (B&W) to color, from analog to digital television (TV), and from standard definition to high definition. In this paper, we analyze current 3D video technology trends in order to define a taxonomy of the availability and possible introduction of 3D-based services. We also propose an audiovisual network services architecture which provides a smooth transition from two-dimensional (2D) to 3DTV in an Internet Protocol (IP)-based scenario. Based on subjective assessment tests, we also analyze those factors which will influence the quality of experience in those 3D video services, focusing on the effects of both coding and transmission errors. In addition, examples of the application of the architecture and results of assessment tests are provided.

    Development of Virtual Reality Games for Motor Rehabilitation

    Get PDF
    Motor rehabilitation is a long-term, labor-intensive and patient-specific process that requires one-on-one care from skilled clinicians and physiotherapists. Virtual rehabilitation is an alternative rehabilitation technology that can provide intensive motor training with minimal supervision from physiotherapists. However, virtual rehabilitation exercises often lack realism and are weakly connected with Activities of Daily Living. In this paper, we present six Virtual Reality games that we developed for the 5DT data glove, the 1-DOF IntelliStretch robot and the Xbox Kinect to improve the accessibility of motor rehabilitation.

    Stereoscopic high dynamic range imaging

    Get PDF
    Two modern technologies show promise to dramatically increase immersion in virtual environments. Stereoscopic imaging captures two images representing the views of both eyes and allows for better depth perception. High dynamic range (HDR) imaging accurately represents real-world lighting, as opposed to traditional low dynamic range (LDR) imaging, providing better contrast and more natural-looking scenes. The combination of the two technologies in order to gain the advantages of both has been, until now, mostly unexplored due to current limitations in the imaging pipeline. This thesis reviews both fields, proposes a stereoscopic high dynamic range (SHDR) imaging pipeline outlining the challenges that need to be resolved to enable SHDR, and focuses on the capture and compression aspects of that pipeline. The problems of capturing SHDR images, which would potentially require two HDR cameras and introduce ghosting, are mitigated by capturing an HDR and LDR pair and using it to generate SHDR images. A detailed user study compared four different methods of generating SHDR images. Results demonstrated that one of the methods may produce images perceptually indistinguishable from the ground truth. Insights obtained while developing static image operators guided the design of SHDR video techniques. Three methods for generating SHDR video from an HDR-LDR video pair are proposed and compared to ground-truth SHDR videos. Results showed little overall error and identified the method with the least error. Once captured, SHDR content needs to be efficiently compressed. Five backward-compatible SHDR compression methods are presented. The proposed methods can encode SHDR content at little more than the size of a traditional single LDR image (18% larger for one method), and the backward-compatibility property encourages early adoption of the format. The work presented in this thesis has introduced and advanced capture and compression methods for the adoption of SHDR imaging. In general, this research paves the way for the novel field of SHDR imaging, which should lead to improved and more realistic representation of captured scenes.
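As a loose illustration of the capture idea, and not one of the four operators studied in the thesis, the sketch below expands an 8-bit LDR view toward HDR range with a simple gamma-based inverse tone mapping, scaled so its mean luminance matches the captured HDR view. Function name, gamma value and the scalar-luminance simplification are all assumptions made for the example.

```python
# Illustrative sketch: generate a pseudo-HDR second view from an LDR capture.
# Real SHDR generation operators are considerably more sophisticated.

def expand_ldr(ldr, hdr_mean, gamma=2.2):
    """ldr: list of 8-bit luminances (0-255); returns pseudo-HDR values."""
    linear = [(v / 255.0) ** gamma for v in ldr]   # undo display gamma
    m = sum(linear) / len(linear)
    scale = hdr_mean / m if m > 0 else 1.0         # match the HDR view's mean
    return [v * scale for v in linear]

# Four sample pixels; hdr_mean stands in for the mean of the real HDR view
ldr = [0, 64, 128, 255]
pseudo_hdr = expand_ldr(ldr, hdr_mean=0.5)
```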

    Motion parallax for 360° RGBD video

    Get PDF
    We present a method for adding parallax to 360° videos and playing them back in real time in Virtual Reality headsets. In current video players, the playback does not respond to translational head movement, which reduces the feeling of immersion, and causes motion sickness for some viewers. Given a 360° video and its corresponding depth (provided by current stereo 360° stitching algorithms), a naive image-based rendering approach would use the depth to generate a 3D mesh around the viewer, then translate it appropriately as the viewer moves their head. However, this approach breaks at depth discontinuities, showing visible distortions, whereas cutting the mesh at such discontinuities leads to ragged silhouettes and holes at disocclusions. We address these issues by improving the given initial depth map to yield cleaner, more natural silhouettes. We rely on a three-layer scene representation, made up of a foreground layer and two static background layers, to handle disocclusions by propagating information from multiple frames for the first background layer, and then inpainting for the second one. Our system works with input from many of today's most popular 360° stereo capture devices (e.g., Yi Halo or GoPro Odyssey), and works well even if the original video does not provide depth information. Our user studies confirm that our method provides a more compelling viewing experience than without parallax, increasing immersion while reducing discomfort and nausea.
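The naive image-based rendering step described above can be sketched as follows; this is an illustrative reconstruction, not the authors' code. Each equirectangular pixel is lifted onto a sphere scaled by its per-pixel depth, giving mesh vertices that can then be translated against head movement.

```python
# Sketch: lift an equirectangular pixel with known depth to a 3D point
# around the viewer (the vertices of the naive viewing mesh).
import math

def equirect_to_3d(u, v, depth, width, height):
    """(u, v): pixel coordinates in the equirect image; returns (x, y, z)."""
    lon = (u / width) * 2.0 * math.pi - math.pi     # longitude in [-pi, pi]
    lat = math.pi / 2.0 - (v / height) * math.pi    # latitude in [-pi/2, pi/2]
    x = depth * math.cos(lat) * math.sin(lon)
    y = depth * math.sin(lat)
    z = depth * math.cos(lat) * math.cos(lon)
    return (x, y, z)

# Centre pixel of a 1024x512 panorama at 2 m depth lies straight ahead
p = equirect_to_3d(512, 256, 2.0, 1024, 512)
```

It is exactly at depth discontinuities between neighbouring pixels that this simple mesh stretches or tears, which motivates the paper's layered representation.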

    Best of Both Worlds: Merging 360° Image Capture with 3D Reconstructed Environments for Improved Immersion in Virtual Reality

    Get PDF
    With the recent proliferation of high-quality 360° photos and video, consumers of virtual reality (VR) media have come to expect photorealistic immersive content. Most 360° VR content, however, is captured with monoscopic camera rigs and inherently fails to provide users with a sense of 3D depth and 6 degree-of-freedom (DOF) mobility. As a result, the medium is significantly limited in its immersive quality. This thesis aims to demonstrate how content creators can further bridge the gap between 360° content and fully immersive real-world VR simulations. We attempt to design a method that combines monoscopic 360° image capture with 3D reconstruction -- taking advantage of the best qualities of both technologies while only using consumer-grade equipment. By mapping the texture from panoramic 360° images to the 3D geometry of a scene, this system significantly improves the photo-realism of 3D reconstructed spaces at specific points of interest in a virtual environment. The technical hurdles faced during the course of this research work, and areas of further work needed to perfect the system, are discussed in detail. Once perfected, a user of the system should be able to simultaneously appreciate visual detail in 360 degrees while experiencing full mobility, i.e., to move around within the immersed scene.
    Bachelor of Arts
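The texture-mapping step described here rests on the inverse projection from reconstructed geometry back into the panorama. A minimal sketch, assuming the point is expressed relative to the 360° capture position (the coordinate convention and function name are this example's, not the thesis's):

```python
# Sketch: map a 3D point on reconstructed geometry to the (u, v) texture
# coordinate in an equirectangular 360° image captured at the origin.
import math

def point_to_equirect_uv(x, y, z):
    """Returns (u, v) in [0, 1] for a point relative to the capture position."""
    lon = math.atan2(x, z)                               # longitude around camera
    lat = math.asin(y / math.sqrt(x*x + y*y + z*z))      # latitude
    u = (lon + math.pi) / (2.0 * math.pi)
    v = 0.5 - lat / math.pi
    return (u, v)

# A point straight ahead of the camera maps to the image centre
uv = point_to_equirect_uv(0.0, 0.0, 1.0)
```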

    Depth Mapping for Stereoscopic Videos

    Full text link

    A Virtual Testbed for Fish-Tank Virtual Reality: Improving Calibration with a Virtual-in-Virtual Display

    Get PDF
    With the development of novel calibration techniques for multimedia projectors and curved projection surfaces, volumetric 3D displays are becoming easier and more affordable to build. The basic requirements include a display shape that defines the volume (e.g. a sphere, cylinder, or cuboid) and a tracking system to provide each user's location for the perspective-corrected rendering. When coupled with modern graphics cards, these displays are capable of high resolution, low latency, high frame rate, and even stereoscopic rendering; however, as many previous studies have shown, every component must be precisely calibrated for a compelling 3D effect. While human perceptual requirements have been extensively studied for head-tracked displays, most studies featured seated users in front of a flat display. It remains unclear if results from these flat display studies are applicable to newer, walk-around displays with enclosed or curved shapes. To investigate these issues, we developed a virtual testbed for volumetric head-tracked displays that can measure calibration accuracy of the entire system in real-time. We used this testbed to investigate visual distortions of prototype curved displays, improve existing calibration techniques, study the importance of stereo to performance and perception, and validate perceptual calibration with novice users. Our experiments show that stereo is important for task performance, but requires more accurate calibration, and that novice users can make effective use of perceptual calibration tools. We also propose a novel, real-time calibration method that can be used to fine-tune an existing calibration using perceptual feedback. The findings from this work can be used to build better head-tracked volumetric displays with an unprecedented amount of 3D realism and intuitive calibration tools for novice users.
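The perspective-corrected rendering such displays require is commonly derived along the lines of Kooima's generalized perspective projection. The sketch below is a simplified illustration rather than the testbed's actual code: it computes off-axis frustum bounds for an axis-aligned planar screen from a tracked eye position (curved surfaces, as used in the testbed, need the more general treatment).

```python
# Sketch: off-axis frustum bounds for a head-tracked, axis-aligned screen.
# The screen lies in the z = 0 plane; the eye is at z > 0 in front of it.

def off_axis_frustum(eye, screen_lo, screen_hi, near):
    """eye: (x, y, z); screen spans [screen_lo, screen_hi] in x/y at z = 0.
    Returns (left, right, bottom, top) at the near plane, glFrustum-style."""
    ex, ey, ez = eye
    scale = near / ez                      # project screen edges onto near plane
    left = (screen_lo[0] - ex) * scale
    right = (screen_hi[0] - ex) * scale
    bottom = (screen_lo[1] - ey) * scale
    top = (screen_hi[1] - ey) * scale
    return (left, right, bottom, top)

# Eye centred 1 m in front of a 2 m x 1 m screen gives a symmetric frustum
f = off_axis_frustum((0.0, 0.0, 1.0), (-1.0, -0.5), (1.0, 0.5), near=0.1)
```

As the tracker reports the head moving off-centre, the frustum becomes asymmetric, which is what keeps the rendered scene geometrically consistent for the viewer.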