
    Determination of critical factors for fast and accurate 2D medical image deformation

    The advent of medical imaging technology enabled physicians to study patient anatomy non-invasively and revolutionized the medical community. As medical images have become digitized and their resolution has increased, software has been developed to allow physicians to explore their patients' image studies in an increasing number of ways, including viewing and exploring reconstructed three-dimensional models. Although this has been a boon to radiologists, who specialize in interpreting medical images, few software packages exist that provide fast and intuitive interaction for other physicians. In addition, although the users of these applications can view their patient data as it was at the time the scan was taken, the placement of the tissues during a surgical intervention is often different due to the position of the patient and the methods used to provide a better view of the surgical field. None of the commonly available medical image packages allow users to predict the deformation of the patient's tissues under those surgical conditions. This thesis analyzes the performance and accuracy of a less computationally intensive yet physically based deformation algorithm: the extended ChainMail algorithm. The proposed method allows users to load DICOM images from medical image studies, interactively classify the tissues in those images according to their properties under deformation, deform the tissues in two dimensions, and visualize the result. The method was evaluated using data provided by the Truth Cube experiment, in which a phantom made of material with properties similar to liver under deformation was placed under varying amounts of uniaxial strain. CT scans were taken before and after the deformations. The deformation was performed on a single DICOM image from the study that had been manually classified, as well as on data sets generated from that original image. These generated data sets were ideally segmented versions of the phantom images that had been scaled to varying fidelities in order to evaluate the effect of image size on the algorithm's accuracy and execution time. Two variations of the extended ChainMail algorithm parameters were also implemented for each of the generated data sets in order to examine the effect of the parameters. The resultant deformations were compared with the actual deformations as determined by the Truth Cube experimenters. For both variations of the algorithm parameters, the predicted deformations at 5% uniaxial strain had an RMS error of a similar order of magnitude to the errors in a finite element analysis performed by the Truth Cube experimenters for the deformations at 18.25% strain. The average error was reduced by approximately 10-20% for the lower fidelity data sets through the use of one of the parameter schemes, although the benefit decreased as the image size increased. When the algorithm was evaluated under 18.25% strain, the average errors were more than 8 times the errors in the finite element analysis. Qualitative analysis of the deformed images indicated differing degrees of accuracy across the ideal image set, with the largest displacements estimated closer to the initial point of deformation. This is hypothesized to be a result of the order in which deformation was processed for points in the image. The algorithm execution time was examined for the varying generated image fidelities.
For a generated image that was approximately 18.5% of the size of the tissue in the original image, the execution time was less than 15 seconds. In comparison, the algorithm processing time for the full-scale image was over 3 hours. The analysis of the extended ChainMail algorithm for use in medical image deformation emphasizes the importance of the choice of algorithm parameters for the accuracy of the deformations and of data set size for the processing time.
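    The core idea of the extended ChainMail algorithm is that a user-moved "sponsor" point drags its neighbours along only when the link between them is compressed, stretched, or sheared beyond per-tissue limits, and the adjustment then propagates outward point by point. The sketch below illustrates that propagation on a regular 2D grid; the function name, constraint limits, and breadth-first processing order are illustrative assumptions, not the thesis implementation.

```python
from collections import deque

import numpy as np


def chainmail_2d(points, sponsor, new_pos, min_d=0.7, max_d=1.3, max_shear=0.3):
    """Propagate a single displaced point through a regular 2D grid using
    ChainMail-style distance constraints (breadth-first, each point moved at
    most once). Grid spacing is assumed to be roughly 1.0."""
    pts = points.astype(float).copy()
    rows, cols, _ = pts.shape
    pts[sponsor] = np.asarray(new_pos, dtype=float)
    moved = {sponsor}
    queue = deque([sponsor])

    while queue:
        r, c = queue.popleft()
        # Four-connected neighbours and the grid direction of each link.
        for (nr, nc), direction in (((r, c - 1), np.array([-1.0, 0.0])),
                                    ((r, c + 1), np.array([1.0, 0.0])),
                                    ((r - 1, c), np.array([0.0, -1.0])),
                                    ((r + 1, c), np.array([0.0, 1.0]))):
            if not (0 <= nr < rows and 0 <= nc < cols) or (nr, nc) in moved:
                continue
            perp = np.array([-direction[1], direction[0]])
            offset = pts[nr, nc] - pts[r, c]
            along, across = float(offset @ direction), float(offset @ perp)
            # Clamp stretch/compression along the link and shear across it.
            new_along = min(max(along, min_d), max_d)
            new_across = min(max(across, -max_shear), max_shear)
            if (new_along, new_across) != (along, across):
                pts[nr, nc] = pts[r, c] + new_along * direction + new_across * perp
                moved.add((nr, nc))
                queue.append((nr, nc))
    return pts


# Toy usage: a 10x10 grid with unit spacing, one point pulled 1.5 units in y.
x, y = np.meshgrid(np.arange(10.0), np.arange(10.0))
grid = np.stack([x, y], axis=-1)              # grid[r, c] = (x, y)
deformed = chainmail_2d(grid, sponsor=(5, 5), new_pos=(5.0, 6.5))
```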

    A method for viewing and interacting with medical volumes in virtual reality

    The medical field has long benefited from advancements in diagnostic imaging technology. Medical images created through methods such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) are used by medical professionals to non-intrusively peer into the body to make decisions about surgeries. Over time, the viewing medium of medical images has evolved from X-ray film negatives to stereoscopic 3D displays, with each new development enhancing the viewer's ability to discern detail or decreasing the time needed to produce and render a body scan. Though doctors and surgeons are trained to view medical images in 2D, some are choosing to view body scans in 3D through volume rendering. While traditional 2D displays can be used to display 3D data, a viewing method that incorporates depth would convey more information to the viewer. One device that has shown promise in medical image viewing applications is the Virtual Reality Head Mounted Display (VR HMD). VR HMDs have recently increased in popularity, with several commodity devices released within the last few years. The Oculus Rift, HTC Vive, and Windows Mixed Reality HMDs like the Samsung Odyssey offer higher resolution screens, more accurate motion tracking, and lower prices than earlier HMDs. They also include motion-tracked handheld controllers meant for navigation and interaction in video games. Because of their popularity and low cost, medical volume viewing software that is compatible with these headsets would be accessible to a wide audience. However, the introduction of VR to medical volume rendering presents difficulties in implementing consistent user interactions and ensuring performance. Though all three headsets require unique driver software, they are compatible with OpenVR, a middleware that standardizes communication between the HMD, the HMD's controllers, and VR software. However, the controllers included with the HMDs each have a slightly different control layout. Furthermore, buttons, triggers, touchpads, and joysticks that share the same hand position between devices do not report values to OpenVR in the same way. Implementing volume rendering functions like clipping and tissue density windowing on VR controllers could improve the user's experience over mouse-and-keyboard schemes through the use of tracked hand and finger movements. To create a control scheme that is compatible with multiple HMDs, a way of mapping controls differently depending on the device was developed. Additionally, volume rendering is a computationally intensive process, and even more so when rendering for an HMD. By using techniques like GPU raytracing with modern GPUs, real-time framerates are achievable on desktop computers with traditional displays. However, the importance of achieving high framerates is even greater when viewing with a VR HMD due to its higher level of immersion. Because the 3D scene occupies most of the user's field of view, low or choppy framerates contribute to feelings of motion sickness. This was mitigated through a decrease in volume rendering quality in situations where the framerate drops below acceptable levels. The volume rendering and VR interaction methods described in this thesis were demonstrated in an application developed for immersive viewing of medical volumes.
This application places the user and a medical volume in a 3D VR environment, allowing the user to manually place clipping planes, adjust the tissue density window, and move the volume to achieve different viewing angles with handheld motion-tracked controllers. The result shows that GPU-raytraced medical volumes can be viewed and interacted with in VR using commodity hardware, and that a control scheme can be mapped to allow the same functions on different HMD controllers despite differences in layout.
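    Two of the techniques described above lend themselves to short sketches: mapping logical rendering actions onto per-device controller inputs, and scaling rendering quality to hold the HMD's frame budget. The device names, input names, and thresholds below are illustrative assumptions rather than the thesis's or OpenVR's actual identifiers.

```python
# Per-device table mapping logical volume-rendering actions onto whatever raw
# input a given controller exposes; the rest of the application only ever asks
# for "grab_volume", "clip_plane", or "density_window".
CONTROL_MAPS = {
    "oculus_touch": {"grab_volume": "grip", "clip_plane": "trigger", "density_window": "thumbstick_y"},
    "vive_wand":    {"grab_volume": "grip", "clip_plane": "trigger", "density_window": "touchpad_y"},
    "wmr_odyssey":  {"grab_volume": "grip", "clip_plane": "trigger", "density_window": "thumbstick_y"},
}


def read_action(action, device, raw_inputs):
    """Return the raw input value that backs a logical action on this device."""
    return raw_inputs[CONTROL_MAPS[device][action]]


class AdaptiveQuality:
    """Scale rendering quality down when frames miss the HMD's frame budget and
    back up when there is headroom; the hysteresis band keeps it from oscillating."""

    def __init__(self, target_fps=90.0, min_scale=0.25, max_scale=1.0):
        self.budget = 1.0 / target_fps
        self.scale = max_scale
        self.min_scale, self.max_scale = min_scale, max_scale

    def update(self, frame_time_s):
        if frame_time_s > 1.15 * self.budget:
            self.scale = max(self.min_scale, self.scale * 0.9)
        elif frame_time_s < 0.85 * self.budget:
            self.scale = min(self.max_scale, self.scale * 1.05)
        return self.scale  # e.g. multiply the raymarch sample count by this


# Toy usage
quality = AdaptiveQuality()
print(read_action("density_window", "vive_wand", {"grip": 0.0, "trigger": 0.2, "touchpad_y": 0.6}))
print(quality.update(frame_time_s=0.016))   # over the 90 Hz budget, so the scale drops
```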

    An interactive color pre-processing method to improve tumor segmentation in digital medical images

    In the last few decades the medical imaging field has grown considerably, and new techniques such as computerized axial tomography (CAT) and Magnetic Resonance Imaging (MRI) are able to obtain medical images in noninvasive ways. These new technologies have opened the medical field, offering opportunities to improve patient diagnosis, education and training, treatment monitoring, and surgery planning. One of these opportunities is in the tumor segmentation field. Tumor segmentation is the process of virtually extracting the tumor from the healthy tissues of the body by computer algorithms. This is a complex process since tumors have different shapes, sizes, tissue densities, and locations. The algorithms that have been developed cannot take into account all these variations, and higher accuracy is achieved with specialized methods that generally work with specific types of tissue data. In this thesis a color pre-processing method for segmentation is presented. Most tumor segmentation methods are based on grayscale values of the medical images. The method proposed in this thesis adds color information to the original values of the image. The user selects the region of interest (ROI), usually the tumor, from the grayscale medical image, and from this initial selection the image is mapped into a colored space. Tissue densities that are part of the tumor are assigned an RGB component, and any tissues outside the tumor are set to black. The user can tweak the color ranges in real time to achieve better results in cases where the tumor pixels are non-homogeneous in terms of intensity. The user then places a seed in the center of the tumor and begins segmentation. A pixel in the image is segmented as part of the tumor if its color is within an initial 10% threshold of the average RGB values of the tumor and it lies within the search region. The search region is calculated by growing or shrinking the previous region using the information from previously segmented regions of the set of slices. The method automatically segments all the slices in the set from the inputs of the first slice. Throughout the segmentation process the user can tweak different parameters and visualize the segmentation results in real time. The method was run on ten test cases, with several runs performed for each test case. 10 out of the 20 test runs gave false positives of 25% or less, and 10 out of the 20 test runs gave false negatives of 25% or less. Using only grayscale thresholding methods, the results for the same test cases show false positives of up to 52% on the easy cases and up to 284% on the difficult cases, and false negatives of up to 14% on the easy cases and up to 99% on the difficult cases. While the results of the grayscale and color pre-processing methods on easy cases were similar, the results of color pre-processing were much better on difficult cases, thus supporting the claim that adding color to medical images for segmentation can significantly improve the accuracy of tumor segmentation.
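    The seed-and-threshold step described above is essentially region growing with a colour criterion. The sketch below shows a minimal version of that idea, assuming 0-255 RGB values and a fixed 10% deviation from the region's running mean colour; it omits the slice-to-slice search-region propagation and the interactive parameter tweaking, so it is an illustration rather than the thesis's method.

```python
from collections import deque

import numpy as np


def grow_tumor_region(rgb, seed, threshold=0.10):
    """Seeded region growing on a colour-mapped slice: a 4-connected pixel joins
    the region if each RGB channel is within `threshold` of the full 0-255 range
    of the region's running mean colour."""
    h, w, _ = rgb.shape
    rgb = rgb.astype(float)
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    colour_sum, count = rgb[seed].copy(), 1
    queue = deque([seed])
    max_dev = threshold * 255.0

    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if np.all(np.abs(rgb[nr, nc] - colour_sum / count) <= max_dev):
                    mask[nr, nc] = True
                    colour_sum += rgb[nr, nc]
                    count += 1
                    queue.append((nr, nc))
    return mask


# Toy usage: a black slice with a uniformly coloured square standing in for a tumour.
slice_rgb = np.zeros((64, 64, 3))
slice_rgb[20:40, 20:40] = (200, 40, 40)
tumour_mask = grow_tumor_region(slice_rgb, seed=(30, 30))
print(tumour_mask.sum())   # 400 pixels segmented
```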

    Estimating and abstracting the 3D structure of feline bones using neural networks on X-ray (2D) images

    Computing 3D bone models using traditional Computed Tomography (CT) requires a high radiation dose and considerable cost and time. We present a fully automated, domain-agnostic method for estimating the 3D structure of a bone from a pair of 2D X-ray images. Our triplet loss-trained neural network extracts a 128-dimensional embedding of the 2D X-ray images. A classifier then finds the most closely matching 3D bone shape from a predefined set of shapes. Our predictions have an average root mean square (RMS) distance of 1.08 mm between the predicted and true shapes, making our approach more accurate than the average achieved by eight other examined 3D bone reconstruction approaches. Each embedding extracted from a 2D bone image is optimized to uniquely identify the 3D bone CT from which the 2D image originated and can serve as a kind of fingerprint of each bone; possible applications include faster, image content-based bone database searches for forensic purposes.
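    The pipeline above has two parts: an embedding network trained with a triplet loss so that 2D views of the same bone map close together, and a nearest-neighbour lookup against the embeddings of a predefined shape library. The PyTorch sketch below shows that pattern with a toy network and random stand-in data; the architecture, image size, margin, and library size are assumptions, not the paper's values.

```python
import torch
import torch.nn as nn


class XrayEmbedder(nn.Module):
    """Toy CNN mapping a single-channel 2D X-ray to a 128-D unit-norm embedding."""

    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, dim)

    def forward(self, x):
        z = self.head(self.features(x).flatten(1))
        return nn.functional.normalize(z, dim=1)


model = XrayEmbedder()
loss_fn = nn.TripletMarginLoss(margin=0.2)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on random stand-in batches: anchor and positive are 2D views
# of the same bone CT, the negative comes from a different bone.
anchor, positive, negative = (torch.randn(8, 1, 128, 128) for _ in range(3))
loss = loss_fn(model(anchor), model(positive), model(negative))
loss.backward()
optimiser.step()

# Retrieval: the predicted 3D shape is the library bone whose stored embedding
# lies closest to the query embedding.
library = torch.randn(500, 128)                  # stand-in reference embeddings
with torch.no_grad():
    query = model(torch.randn(1, 1, 128, 128))
best_match_index = torch.cdist(query, library).argmin().item()
```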

    Design and validation of Segment - freely available software for cardiovascular image analysis

    Background: Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Results: Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http://segment.heiberg.se. Conclusions: Segment is a well-validated comprehensive software package for cardiovascular image analysis. It is freely available for research purposes provided that relevant original research publications related to the software are cited.
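    The validation approach mentioned above, a test script that re-runs analyses and checks their output, follows the usual regression-test pattern. The sketch below illustrates that pattern only; the routine, data, and tolerance are stand-ins and do not reflect Segment's actual test script or API.

```python
import numpy as np


def measure_mean_intensity(stack):
    """Stand-in for an analysis routine; a real test script would call the
    corresponding analysis module on a stored DICOM test case."""
    return float(stack.mean())


def test_mean_intensity_regression():
    # Deterministic stand-in data; a real script would load a frozen test study.
    stack = np.arange(27, dtype=float).reshape(3, 3, 3)
    reference = 13.0   # value recorded the last time the output was hand-validated
    assert abs(measure_mean_intensity(stack) - reference) <= 1e-6


test_mean_intensity_regression()
```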

    The virtual human face – superimposing the simultaneously captured 3D photorealistic skin surface of the face on the untextured skin image of the CBCT Scan

    The aim of this study was to evaluate the impact of simultaneous capture of the three-dimensional (3D) surface of the face and cone beam computed tomography (CBCT) scan of the skull on the accuracy of their registration and superimposition. 3D facial images were acquired in 14 patients using the Di3d (Dimensional Imaging, UK) imaging system and i-CAT CBCT scanner. One stereophotogrammetry image was captured at the same time as the CBCT and another one hour later. The two stereophotographs were then individually superimposed over the CBCT using VRmesh. Seven patches were isolated on the final merged surfaces. For the whole face and each individual patch, the maximum and minimum range of deviation between surfaces, the absolute average distance between surfaces, and the standard deviation for the 90th percentile of the distance errors were calculated. The superimposition errors of the whole face for the two captures revealed a statistically significant difference (P = 0.00081). The absolute average distances for the separate and simultaneous captures were 0.47 mm and 0.27 mm, respectively. The level of superimposition accuracy in patches from separate captures ranged between 0.3 and 0.9 mm, while that of simultaneous captures was 0.4 mm. Simultaneous capture of Di3d and CBCT images significantly improved the accuracy of superimposition of these image modalities.
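    The error measures reported above are distance statistics between two registered surfaces. The sketch below shows one common way to compute such statistics, treating the surfaces as point clouds and using nearest-point distances; it is an illustration of the metrics, not the VRmesh workflow used in the study.

```python
import numpy as np
from scipy.spatial import cKDTree


def superimposition_errors(surface_a, surface_b):
    """For two already-registered surfaces given as (N, 3) and (M, 3) point
    clouds (in mm), report nearest-point distance statistics of A against B."""
    distances, _ = cKDTree(surface_b).query(surface_a)
    return {
        "max_mm": float(distances.max()),
        "min_mm": float(distances.min()),
        "absolute_average_mm": float(np.abs(distances).mean()),
        "p90_mm": float(np.percentile(distances, 90)),
    }


# Toy usage: a random patch of points and a slightly perturbed copy of it.
rng = np.random.default_rng(1)
surface_a = rng.random((2000, 3))
surface_b = surface_a + rng.normal(scale=0.01, size=surface_a.shape)
print(superimposition_errors(surface_a, surface_b))
```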