
    RotateEntry: Controller-rolling-style Text Entry for Three Degrees of Freedom Virtual Reality Devices

    In this work, we propose RotateEntry, a controller-rolling-style method for text entry on three-degrees-of-freedom virtual reality devices. To move the key-selecting cursor in two dimensions on a QWERTY-layout virtual keyboard, we developed three variants of RotateEntry: Rotate Column Rotate, Rotate Key, and Rotate Column Point. We conducted a comparative empirical evaluation of four text input methods: the three proposed controller-rolling-style methods and the standard raycasting-style one. Text entry performance, accuracy, workload, usability, and user experience were measured. Due to the COVID-19 situation, the study was conducted remotely, and the impact of the online format on VR research was also assessed. After an evaluation with 5 participants, we found that Rotate Key achieved the highest text entry rate, the best overall user experience, and the best overall workload scores among the three variants of RotateEntry. However, no evidence was found to support the hypothesis that RotateEntry outperforms raycasting in performance and experience.
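    The abstract does not describe the roll-to-cursor mapping in detail; as a hedged illustration of the controller-rolling idea, the sketch below quantizes a roll angle into discrete column steps on a QWERTY grid. The step size, starting column, and function names are assumptions for illustration, not taken from the paper.

```cpp
// Hypothetical sketch: quantizing controller roll into horizontal cursor
// steps on a QWERTY grid. Thresholds and names are invented, not the paper's.
#include <cmath>
#include <cstdio>

// Every ~15 degrees of roll away from the neutral pose moves the cursor one
// column left or right (assumed step size).
int rollToColumnOffset(double rollDeg) {
    const double stepDeg = 15.0;
    return static_cast<int>(std::round(rollDeg / stepDeg));
}

int main() {
    const int columns = 10;  // the QWERTY top row has 10 keys
    int col = 4;             // cursor starts near the middle (assumption)
    double rollDeg = 32.0;   // e.g. controller rolled 32 degrees clockwise
    col += rollToColumnOffset(rollDeg);
    if (col < 0) col = 0;
    if (col > columns - 1) col = columns - 1;
    std::printf("cursor moves to column %d\n", col);
}
```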

    Dynamic Volume Rendering of Functional Medical Data on Dissimilar Hardware Platforms

    In the last 30 years, medical imaging has become one of the most used diagnostic tools in the medical profession. Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) technologies have become widely adopted because of their ability to capture the human body in a non-invasive manner. A volumetric dataset is a series of orthogonal 2D slices captured at a regular interval, typically along the axis of the body from head to feet. Volume rendering is a computer graphics technique that allows volumetric data to be visualized and manipulated as a single 3D object. Iso-surface rendering, image splatting, shear warp, texture slicing, and raycasting are all volume rendering methods, each with associated advantages and disadvantages; raycasting is widely regarded as producing the highest-quality images.

    Originally, CT and MRI hardware was limited to providing a single 3D scan of the human body. The technology has since improved to allow sets of scans capable of capturing anatomical movement, such as a beating heart. The capture of anatomical data over time is referred to as functional imaging. Functional MRI (fMRI) is used to capture changes in the human body over time; while fMRI can capture any anatomical data over time, one of its more common uses is capturing brain activity. The fMRI scanning process is typically broken up into a time-consuming high-resolution anatomical scan and a series of quick low-resolution scans capturing activity. The low-resolution activity data is mapped onto the high-resolution anatomical data to show changes over time.

    Academic research has advanced volume rendering, and fMRI volume rendering specifically. Unfortunately, academic work is typically a one-off solution to a single medical case or dataset, so advances remain problem-specific rather than general capabilities. Additionally, academic volume renderers are often designed to work on a specific device and operating system under controlled conditions. This prevents volume rendering from being used across the ever-expanding range of computing devices, such as desktops, laptops, immersive virtual reality systems, and mobile computers like phones and tablets.

    This research investigates the feasibility of creating a generic software capability to perform real-time 4D volume rendering, via raycasting, on desktop, mobile, and immersive virtual reality platforms. Implementing a GPU-based 4D volume raycasting method for mobile devices harnesses the power of the growing number of mobile computational devices used by medical professionals. Support for immersive virtual reality can enhance medical professionals' interpretation of 3D physiology through the additional depth information provided by stereoscopic 3D. The results of this research will help expand the use of 4D volume rendering beyond the traditional desktop computer in the medical field.

    Developing the same 4D volume rendering capabilities across dissimilar platforms poses many challenges. Each platform relies on its own coding languages, libraries, and hardware support, and there are tradeoffs between using languages and libraries native to each platform and using a generic cross-platform system such as a game engine. Native libraries are generally more efficient at run-time, but they require a separate implementation for each platform.
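    Raycasting produces its image by accumulating samples along each viewing ray. As a minimal illustration of the front-to-back compositing loop at the heart of a raycaster (normally executed per pixel on the GPU), here is a single-ray CPU sketch; the volume contents and the toy transfer function are invented for the example.

```cpp
// Minimal single-ray, CPU-side sketch of front-to-back raycast compositing.
// Real renderers run this per pixel on the GPU with trilinear interpolation.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Volume {
    int nx, ny, nz;
    std::vector<float> data;  // intensities normalized to [0,1]
    // Nearest-neighbour lookup with clamping; real renderers interpolate.
    float sample(float x, float y, float z) const {
        int i = std::clamp(static_cast<int>(x), 0, nx - 1);
        int j = std::clamp(static_cast<int>(y), 0, ny - 1);
        int k = std::clamp(static_cast<int>(z), 0, nz - 1);
        return data[(k * ny + j) * nx + i];
    }
};

int main() {
    Volume vol{16, 16, 16, std::vector<float>(16 * 16 * 16, 0.05f)};
    vol.data[(8 * 16 + 8) * 16 + 8] = 0.9f;  // one bright voxel in the middle

    // March one ray along +z through the volume centre, compositing colour
    // and opacity front to back and stopping early once the ray saturates.
    float color = 0.0f, alpha = 0.0f;
    for (float t = 0.0f; t < 16.0f && alpha < 0.99f; t += 0.5f) {
        float s = vol.sample(8.0f, 8.0f, t);
        float a = s * 0.2f;               // toy transfer function: opacity
        color += (1.0f - alpha) * a * s;  // emission weighted by visibility
        alpha += (1.0f - alpha) * a;
    }
    std::printf("composited intensity %.3f, opacity %.3f\n", color, alpha);
}
```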
    The decision was made to use platform-native languages and libraries in this research whenever practical, in an attempt to achieve the best possible frame rates. 4D volume raycasting presents unique challenges independent of the platform, specifically fMRI data loading, volume animation, and multi-volume rendering. Additionally, real-time raycasting has never been successfully performed on a mobile device; previous research relied on less computationally expensive methods, such as orthogonal texture slicing, to achieve real-time frame rates. These challenges are addressed as the contributions of this research.

    The first contribution was exploring the feasibility of generic functional data input across desktop, mobile, and immersive virtual reality. To visualize 4D fMRI data, it was necessary to build in the capability to read Neuroimaging Informatics Technology Initiative (NIfTI) files. The NIfTI format was designed to overcome limitations of 3D file formats like DICOM and to store functional imagery as a single high-resolution anatomical scan plus a set of low-resolution anatomical scans. Reading the NIfTI binary data required custom C++ routines, as no freely available object-oriented APIs existed. The NIfTI input code was built using C++ and the C++ Standard Library to be both lightweight and cross-platform.

    Multi-volume rendering is another challenge of fMRI data visualization and a contribution of this work. fMRI data is typically broken into a single high-resolution anatomical volume and a series of low-resolution volumes that capture anatomical changes. Visualizing two volumes at the same time is known as multi-volume visualization, so the ability to correctly align and scale the volumes relative to each other was necessary, as was a compositing method to combine data from both volumes into a single cohesive representation.

    Three prototype applications, one each for desktop, mobile, and virtual reality, were built to test the feasibility of 4D volume raycasting. Although the backend implementations differ between the three platforms, the raycasting functionality and features are identical; the same fMRI dataset produces the same 3D visualization independent of the platform. Each platform uses the same NIfTI data loader and provides support for dataset coloring and windowing (tissue density manipulation). The fMRI data can be viewed changing over time either by animating through the time steps, like a movie, or by using an interface slider to "scrub" through them.

    The prototype applications' data load times and frame rates were tested to determine whether they achieved the real-time interaction goal, defined as 10 frames per second (fps) or better based on the work of Miller [1]. The desktop version was evaluated on a 2013 MacBook Pro running OS X 10.12 with a 2.6 GHz Intel Core i7 processor, 16 GB of RAM, and an NVIDIA GeForce GT 750M graphics card. The immersive application was tested in the C6 CAVE™, a 96-node graphics cluster built on NVIDIA Quadro 6000 graphics cards running Red Hat Enterprise Linux. The mobile application was evaluated on a 2016 9.7" iPad Pro running iOS 9.3.4, with a 64-bit Apple A9X dual-core processor and 2 GB of built-in memory. Two fMRI brain activity datasets with different voxel resolutions were used as test datasets.
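    As a sketch of the kind of standard-library-only NIfTI input described above, the snippet below pulls a few NIfTI-1 header fields from their documented byte offsets. The file name is hypothetical, and endianness and error handling are omitted; this is not the dissertation's actual loader.

```cpp
// Reading a few NIfTI-1 header fields with only the C++ standard library.
// Byte offsets follow the public NIfTI-1 specification; endianness handling
// and validation are omitted for brevity.
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <fstream>

int main() {
    std::ifstream file("scan.nii", std::ios::binary);  // hypothetical name
    char hdr[348];                       // the NIfTI-1 header is 348 bytes
    if (!file.read(hdr, sizeof hdr)) return 1;

    int32_t sizeof_hdr;  // must equal 348 for a valid NIfTI-1 file
    int16_t dim[8];      // dim[0] = rank, dim[1..3] = grid, dim[4] = time
    float vox_offset;    // byte offset where the voxel data begins
    std::memcpy(&sizeof_hdr, hdr + 0, sizeof sizeof_hdr);
    std::memcpy(dim, hdr + 40, sizeof dim);
    std::memcpy(&vox_offset, hdr + 108, sizeof vox_offset);

    std::printf("header %d, grid %dx%dx%d, %d time steps, data at byte %.0f\n",
                static_cast<int>(sizeof_hdr), dim[1], dim[2], dim[3], dim[4],
                vox_offset);
}
```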
    Datasets were tested using the 3D structural data, the 4D functional data, and a combination of the two. Frame rates for the desktop implementation were consistently above 10 fps, indicating that real-time 4D volume raycasting is possible on desktop hardware. The mobile and virtual reality platforms were able to perform real-time 3D volume raycasting consistently, a marked improvement over prior mobile 3D volume raycasting, which achieved under one frame per second [2]. Both the VR and mobile platforms raycast the 4D-only data at real-time frame rates, but did not consistently reach 10 fps when rendering the 3D structural and 4D functional data simultaneously. However, 7 frames per second was the lowest frame rate recorded, indicating that hardware advances should allow consistent real-time raycasting of 4D fMRI data in the near future.

    Two-step techniques for accurate selection of small elements in VR environments

    One of the key interactions in 3D environments is target acquisition, which can be challenging when targets are small or in cluttered scenes: incorrect elements may be selected, leading to frustration and wasted time. Accuracy is further hindered by the physical act of selection itself, which typically involves pressing a button; this action reduces stability and increases the likelihood of erroneous target acquisition. We focus on molecular visualization and the challenge of selecting atoms, rendered as small spheres. We present two techniques that improve upon previous progressive selection techniques. They facilitate the acquisition of neighbors after an initial selection, providing a more comfortable experience than classical ray-based selection, particularly with occluded elements. We conducted a pilot study followed by two formal user studies. The results indicate that our approaches were highly appreciated by the participants, and these techniques could be suitable for other crowded environments as well. This paper has been supported by TIN2017-88515-C2-1-R (GEN3DLIVE) from the Spanish Ministerio de Economía y Competitividad, by PID2021-122136OB-C21 from the Ministerio de Ciencia e Innovación, Spain, and by FEDER (EU) funds. Elena Molina has been supported by an FI-SDUR doctoral grant from the Generalitat de Catalunya and an FPU grant from the Ministerio de Ciencia e Innovación, Spain.
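    The abstract does not spell out the two techniques, but the neighbor-refinement step it describes (select once with the ray, then choose among nearby atoms) might look roughly like the sketch below. The data structures, distance cutoff, and candidate ordering are assumptions for illustration.

```cpp
// Generic sketch of progressive selection: after an initial ray pick, offer
// the picked atom's neighbours as refinement candidates instead of forcing
// the user to re-aim the ray. All structures and values are illustrative.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Atom { float x, y, z; int id; };

// Squared distance between two atoms (avoids a sqrt when only ordering).
static float dist2(const Atom& a, const Atom& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Gather atoms within `cutoff` of the picked one, ordered closest-first so
// a joystick flick or button press can step through the candidates.
std::vector<Atom> refinementCandidates(const std::vector<Atom>& atoms,
                                       const Atom& picked, float cutoff) {
    std::vector<Atom> out;
    for (const Atom& a : atoms)
        if (a.id != picked.id && dist2(a, picked) <= cutoff * cutoff)
            out.push_back(a);
    std::sort(out.begin(), out.end(), [&](const Atom& p, const Atom& q) {
        return dist2(p, picked) < dist2(q, picked);
    });
    return out;
}

int main() {
    std::vector<Atom> atoms{{0, 0, 0, 1}, {1, 0, 0, 2}, {0, 2, 0, 3}};
    for (const Atom& a : refinementCandidates(atoms, atoms[0], 1.5f))
        std::printf("candidate atom %d\n", a.id);
}
```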

    Accurate molecular atom selection in VR

    Accurate selection in cluttered scenes is complex because a high degree of precision is required. In virtual reality environments it is even harder, because pointing at a small object with our arms in the air is more difficult: not only do our arms move slightly, but pressing the button or trigger further reduces our already weak stability. In this paper, we present two alternatives to classical ray pointing intended to facilitate the selection of atoms in molecular environments. We implemented and analyzed these techniques through an informal user study and found that they were highly appreciated by the users. This selection method could be interesting in other crowded environments beyond molecular visualization.
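    As a generic stand-in for alternatives to exact ray-sphere picking (not the paper's actual method), a common trick is to score atoms by angular distance to the ray, which tolerates small hand tremors far better than a thin ray:

```cpp
// Angular-scoring ("cone casting") sketch: the atom closest in angle to the
// pointing ray wins, even when the ray does not hit any sphere exactly.
// Purely illustrative; the paper's two techniques are not detailed here.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Angle (radians) between the ray direction and the direction to a target;
// `dir` is assumed to be unit length.
float angleToRay(Vec3 origin, Vec3 dir, Vec3 target) {
    Vec3 v{target.x - origin.x, target.y - origin.y, target.z - origin.z};
    float dot = v.x * dir.x + v.y * dir.y + v.z * dir.z;
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return std::acos(dot / len);
}

int main() {
    Vec3 origin{0, 0, 0}, dir{0, 0, 1};
    Vec3 nearMiss{0.05f, 0, 2}, farMiss{0.4f, 0, 2};
    std::printf("near-miss: %.3f rad, far-miss: %.3f rad\n",
                angleToRay(origin, dir, nearMiss),
                angleToRay(origin, dir, farMiss));
}
```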

    Storytelling in the Metaverse: From Desktop to Immersive Virtual Reality Storyboarding

    Creatives from the animation and film industries have always experimented with innovative tools and methodologies to prototype their visual sequences before bringing them to life. In recent years, as realistic real-time rendering techniques have emerged, the increasing popularity of virtual reality (VR) has opened the door to new approaches and solutions that leverage the immersive and interactive features of 3D experiences. A 3D desktop application and a novel storyboarding pipeline, which can automatically generate a storyboard including camera details and a textual description of the actions performed in three-dimensional environments, were investigated in previous work. The aim was to exploit new technologies to improve existing 3D storytelling approaches, providing a software solution for both expert and novice storyboarders. This research investigates 3D storyboarding in immersive virtual reality (IVR) to move toward a new storyboarding paradigm. IVR systems provide peculiarities such as body-controlled exploration of the 3D scene and a head-dependent camera view that can extend the features of traditional storyboarding tools. The proposed system enables users to set up the virtual stage, adding elements to the scene and exploring the environment as they build it. Users can then select the available characters or the camera, control them in first person, position them in the scene, and perform actions by selecting from a list of options, each paired with a corresponding animation. Relying on the concept of a state machine, the system automatically generates the list of available actions depending on the context. Finally, the descriptions for each storyboard panel are automatically generated based on the history of activities performed. The proposed application maintains all the functionality of the desktop version and can be effectively used to create storyboards in immersive virtual environments.
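    A minimal sketch of the state-machine idea mentioned above, in which the set of actions offered depends on a character's current state; the states, actions, and table contents are invented examples, not taken from the system.

```cpp
// Context-dependent action lists as a pure function of character state.
// States, actions, and transitions here are illustrative inventions.
#include <cstdio>
#include <map>
#include <string>
#include <vector>

enum class State { Standing, Sitting, Holding };

std::vector<std::string> availableActions(State s) {
    static const std::map<State, std::vector<std::string>> table{
        {State::Standing, {"walk", "sit down", "pick up"}},
        {State::Sitting, {"stand up", "talk"}},
        {State::Holding, {"put down", "hand over"}},
    };
    return table.at(s);
}

int main() {
    // A seated character is offered only the actions valid while sitting.
    for (const auto& action : availableActions(State::Sitting))
        std::printf("available: %s\n", action.c_str());
}
```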

    Dense and Dynamic 3D Selection for Game-Based Virtual Environments


    Friction surfaces: scaled ray-casting manipulation for interacting with 2D GUIs

    The accommodation of conventional 2D GUIs within Virtual Environments (VEs) can greatly enhance the possibilities of many VE applications. In this paper we present a variation of the well-known ray-casting technique for fast and accurate selection of 2D widgets on a virtual window immersed in a 3D world. The main idea is to provide a new interaction mode in which hand rotations are scaled down so that the ray is constrained to intersect the active virtual window. This is accomplished by changing the control-display ratio between the orientation of the user's hand and the ray used for selection. Our technique uses a curved representation of the ray, providing visual feedback on the orientation of both the input device and the selection ray. The user's impression is of controlling a flexible ray that curves as it moves over a virtual friction surface defined by the 2D window. We have implemented this technique and evaluated its effectiveness in terms of accuracy and performance. Our experiments in a four-sided CAVE indicate that the proposed technique can increase the speed and accuracy of component selection in 2D GUIs immersed in 3D worlds.
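    The core of the technique is the change in control-display ratio; a minimal sketch of that mapping, assuming an illustrative 0.3 scaling factor while the ray is over the virtual window:

```cpp
// Control-display ratio sketch: hand rotation deltas map 1:1 during free 3D
// pointing but are damped while the ray is over the 2D GUI window, making
// small widgets easier to hit. The 0.3 ratio is an illustrative assumption.
#include <cstdio>

struct Ray { double yawDeg = 0.0; };

void applyHandRotation(Ray& ray, double handDeltaDeg, bool overWindow) {
    const double cdRatio = overWindow ? 0.3 : 1.0;
    ray.yawDeg += handDeltaDeg * cdRatio;
}

int main() {
    Ray ray;
    applyHandRotation(ray, 10.0, false);  // free pointing: full 10 degrees
    applyHandRotation(ray, 10.0, true);   // over the GUI: only 3 degrees
    std::printf("ray yaw after both rotations: %.1f deg\n", ray.yawDeg);
}
```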

    Overlaying Virtual Scale Models on Real Environments Without the Use of Peripherals

    The Architecture, Engineering, and Construction (AEC) industries have become increasingly reliant on detailed 3D modeling software and visualization tools. In the past few years, the arrival of effective and relatively cheap virtual reality has transformed the workflows of people in these professions. Augmented reality (AR) is poised to have a similar, if not greater, effect. For that to occur, the transition to this technology must be seamless and straightforward; in particular, a clear bridge should be made between Computer Aided Design (CAD) models and their corresponding real-world environments. Using Unity and Microsoft's HoloLens, I developed a method of automatically overlaying a scale model of a virtual room on its real-life counterpart, allowing a user to walk around their physical environment and see how virtual features correspond to what is already built.
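    The abstract does not detail the alignment math; one minimal approach consistent with it is to anchor the model at a matched corner and rotate about the vertical axis until a matched wall direction lines up. The sketch below assumes that two-feature setup and is not the thesis' actual implementation.

```cpp
// Illustrative model-to-room alignment on the ground plane: compute a yaw
// from one matched wall direction, then a translation from one matched
// corner. All coordinates are made up; not the thesis' actual method.
#include <cmath>
#include <cstdio>

struct Vec2 { double x, z; };  // ground-plane coordinates (y is up)

int main() {
    Vec2 modelCorner{0.0, 0.0}, modelWall{1.0, 0.0};         // from the CAD model
    Vec2 realCorner{2.5, 1.0}, realWall{0.7071, 0.7071};     // as detected

    // Yaw that rotates the model's wall direction onto the real wall.
    double yaw = std::atan2(realWall.z, realWall.x) -
                 std::atan2(modelWall.z, modelWall.x);

    // Translation that drops the rotated model corner onto the real corner.
    double c = std::cos(yaw), s = std::sin(yaw);
    Vec2 rotated{c * modelCorner.x - s * modelCorner.z,
                 s * modelCorner.x + c * modelCorner.z};
    Vec2 t{realCorner.x - rotated.x, realCorner.z - rotated.z};

    std::printf("yaw %.1f deg, translate (%.2f, %.2f)\n",
                yaw * 180.0 / 3.14159265358979, t.x, t.z);
}
```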