5 research outputs found

    A method for viewing and interacting with medical volumes in virtual reality

    Get PDF
    The medical field has long benefited from advancements in diagnostic imaging technology. Medical images created through methods such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) are used by medical professionals to non-intrusively peer into the body to make decisions about surgeries. Over time, the viewing medium of medical images has evolved from X-ray film negatives to stereoscopic 3D displays, with each new development enhancing the viewer’s ability to discern detail or decreasing the time needed to produce and render a body scan. Though doctors and surgeons are trained to view medical images in 2D, some are choosing to view body scans in 3D through volume rendering. While traditional 2D displays can be used to display 3D data, a viewing method that incorporates depth would convey more information to the viewer. One device that has shown promise in medical image viewing applications is the Virtual Reality Head Mounted Display (VR HMD). VR HMDs have recently increased in popularity, with several commodity devices being released within the last few years. The Oculus Rift, HTC Vive, and Windows Mixed Reality HMDs like the Samsung Odyssey offer higher resolution screens, more accurate motion tracking, and lower prices than earlier HMDs. They also include motion-tracked handheld controllers meant for navigation and interaction in video games. Because of their popularity and low cost, medical volume viewing software that is compatible with these headsets would be accessible to a wide audience. However, the introduction of VR to medical volume rendering presents difficulties in implementing consistent user interactions and ensuring performance. Though all three headsets require unique driver software, they are compatible with OpenVR, a middleware that standardizes communication between the HMD, the HMD’s controllers, and VR software. However, the controllers included with the HMDs each have a slightly different control layout. Furthermore, buttons, triggers, touchpads, and joysticks that share the same hand position between devices do not report values to OpenVR in the same way. Implementing volume rendering functions like clipping and tissue density windowing on VR controllers could improve the user’s experience over mouse-and-keyboard schemes through the use of tracked hand and finger movements. To create a control scheme compatible with multiple HMDs, a way of mapping controls differently depending on the device was developed. Additionally, volume rendering is a computationally intensive process, and even more so when rendering for an HMD. By using techniques like GPU raytracing with modern GPUs, real-time framerates are achievable on desktop computers with traditional displays. However, the importance of achieving high framerates is even greater when viewing with a VR HMD due to its higher level of immersion. Because the 3D scene occupies most of the user’s field of view, low or choppy framerates contribute to feelings of motion sickness. This was mitigated through a decrease in volume rendering quality in situations where the framerate drops below acceptable levels. The volume rendering and VR interaction methods described in this thesis were demonstrated in an application developed for immersive viewing of medical volumes. This application places the user and a medical volume in a 3D VR environment, allowing the user to manually place clipping planes, adjust the tissue density window, and move the volume to achieve different viewing angles with handheld motion-tracked controllers. The result shows that GPU raytraced medical volumes can be viewed and interacted with in VR using commodity hardware, and that a control scheme can be mapped to allow the same functions on different HMD controllers despite differences in layout.
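The adaptive-quality behavior this abstract describes, reducing volume rendering quality when the framerate drops below acceptable levels, amounts to a feedback rule on the raymarching sample count. A minimal sketch follows; the 11.1 ms budget (one frame at the 90 Hz refresh rate common to the Rift and Vive) and the step bounds are illustrative assumptions, not values taken from the thesis.

```python
def adjust_quality(frame_time_ms, step_count, budget_ms=11.1,
                   min_steps=64, max_steps=512):
    """Scale the raymarching step count to keep frame time under budget.

    A missed frame halves the sample count so the next frame can
    finish in time; comfortable headroom restores quality gradually.
    """
    if frame_time_ms > budget_ms:
        # Frame took too long: coarsen sampling to recover framerate.
        step_count = max(min_steps, step_count // 2)
    elif frame_time_ms < 0.75 * budget_ms:
        # Plenty of headroom: step quality back up in small increments.
        step_count = min(max_steps, step_count + 32)
    return step_count
```

Halving on a miss but restoring in small increments avoids oscillating between too-slow and too-coarse settings from frame to frame.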

    An Interactive Visualization and Navigation Tool for Medical Volume Data

    No full text
    Interactive direct volume rendering by hardware-assisted 3D texture mapping has become a powerful visualization method in many different fields. However, to make this technique fully practicable, convenient visualization options and data analysis tools have to be integrated. For example, direct rendering of semi-transparent volume objects with integrated display of lighted iso-surfaces is one important challenge, especially in medical applications. Furthermore, explicit use of multi-dimensional image processing operations often helps to optimize the exploration of the available data sets. On the other hand, such visualization tools will only be accepted in medical planning and surgery simulation systems if interactive frame rates can be guaranteed. In this paper we propose a volume visualization tool for large-scale medical volume data which takes advantage of hardware-assisted 3D texture interpolation and convolution operations. We demonstrate how to use the 3D texture mapping capabilit..
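As an illustration of the technique this abstract refers to: 3D-texture-based volume rendering typically resamples the volume on view-aligned slices and lets the graphics hardware alpha-blend them back to front. The compositing step can be sketched in a few lines; the single-channel representation and sample values below are simplifications for illustration, not the paper's implementation.

```python
def composite_back_to_front(samples):
    """Blend (color, alpha) slice samples ordered back to front,
    as alpha blending does when textured volume slices are drawn.

    Uses the standard "over" operator: each new slice is composited
    over the color accumulated so far.
    """
    color = 0.0  # accumulated intensity; one channel for brevity
    for c, a in samples:
        color = c * a + color * (1.0 - a)
    return color
```

A fully opaque slice (alpha 1.0) hides everything composited before it, which is why semi-transparent rendering of interior structures depends on a well-chosen opacity transfer function.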

    The development of problem solving environments for computational engineering.

    Get PDF
    This thesis presents two Problem Solving Environments that enable engineers in industry to utilise complex computational simulation algorithms during their design processes. The work addresses the issues of allowing the end user to interact with the algorithms in a user-friendly manner through the use of graphical user interface design and advanced computer graphics. Throughout this thesis major emphasis is placed on being able to tackle a wide range of problem sizes, from routine to grand challenge simulations, through the use of parallel computing hardware. The effectiveness of both environments in their domains is demonstrated using a series of examples.

    Neuroinformatics in Functional Neuroimaging

    Get PDF
    This Ph.D. thesis proposes methods for information retrieval in functional neuroimaging through automatic computerized authority identification, and searching and cleaning in a neuroscience database. Authorities are found through cocitation analysis of the citation pattern among scientific articles. Based on data from a single scientific journal it is shown that multivariate analyses are able to determine group structure that is interpretable as particular “known” subgroups in functional neuroimaging. Methods for text analysis are suggested that use a combination of content and links, in the form of the terms in scientific documents and scientific citations, respectively. These include context-sensitive author ranking and automatic labeling of axes and groups in connection with multivariate analyses of link data. Talairach foci from the BrainMap™ database are modeled with conditional probability density models useful for exploratory functional volumes modeling. A further application is shown with conditional outlier detection, where abnormal entries in the BrainMap™ database are spotted using kernel density modeling and the redundancy between anatomical labels and spatial Talairach coordinates. This represents a combination of simple term and spatial modeling. The specific outliers found in the BrainMap™ database included, among others, entry errors, errors in the articles themselves, and unusual terminology.
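The kernel-density outlier detection this abstract describes can be sketched in one dimension: estimate the density at each entry and flag entries where it falls below a cutoff. The Gaussian kernel, bandwidth, and threshold below are illustrative assumptions; the thesis works with multi-dimensional Talairach coordinates conditioned on anatomical labels.

```python
import math

def kde_score(point, data, bandwidth=1.0):
    """Average Gaussian kernel density at `point` given 1-D `data`."""
    norm = 1.0 / (bandwidth * math.sqrt(2.0 * math.pi))
    return sum(
        norm * math.exp(-0.5 * ((point - x) / bandwidth) ** 2)
        for x in data
    ) / len(data)

def flag_outliers(data, threshold, bandwidth=1.0):
    """Return entries whose estimated density falls below `threshold`."""
    return [x for x in data if kde_score(x, data, bandwidth) < threshold]
```

For example, in a cluster near zero with one distant entry, only the distant entry sits in a low-density region and is flagged.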