
    Ubiquitous volume rendering in the web platform

    176 p. The main thesis hypothesis is that ubiquitous volume rendering can be achieved using WebGL. The thesis enumerates the challenges that must be met to achieve that goal. The results allow web content developers to integrate interactive volume rendering within standard HTML5 web pages. Content developers only need to declare the X3D nodes that provide the rendering characteristics they desire. In contrast to systems that provide specific GPU programs, the presented architecture automatically creates the GPU code required by the WebGL graphics pipeline. This code is generated directly from the X3D nodes declared in the virtual scene, so content developers do not need any knowledge of the GPU. The thesis extends previous research on web-compatible volume data structures for WebGL, hybrid surface and volumetric ray-casting, progressive volume rendering, and some specific problems related to the visualization of medical datasets. Finally, the thesis contributes to the X3D standard with proposals to extend and improve the volume rendering component. The proposals are at an advanced stage towards acceptance by the Web3D Consortium.
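    The declarative workflow described above can be illustrated with a short sketch. The Python snippet below is a conceptual illustration, not the thesis' implementation: it takes a scene description that uses X3D Volume Rendering component node names (VolumeData, OpacityMapVolumeStyle) and emits a WebGL-compatible ray-casting fragment shader, so the content developer never writes GLSL. The generator itself, the raySteps parameter, and the texture-atlas sampling are simplified assumptions.

# Conceptual sketch: turn a declared volume rendering node into GLSL.
# VolumeData and OpacityMapVolumeStyle are real X3D node names; raySteps
# and the atlas sampling are illustrative assumptions.

def generate_fragment_shader(volume_node):
    """Emit a minimal front-to-back ray-casting shader for one VolumeData node."""
    steps = volume_node.get("raySteps", 128)          # hypothetical step count
    style = volume_node.get("renderStyle", "OpacityMapVolumeStyle")

    # The declared render style decides how a sampled intensity becomes colour.
    if style == "OpacityMapVolumeStyle":
        transfer = "vec4 c = texture2D(uTransferFunction, vec2(s, 0.5));"
    else:
        transfer = "vec4 c = vec4(s, s, s, s);"       # fallback: grey-scale

    return f"""
precision highp float;
uniform sampler2D uVolumeAtlas;       // volume slices packed into a 2D atlas
uniform sampler2D uTransferFunction;  // transfer function as a 1xN texture
varying vec3 vRayOrigin;              // ray start, set up by the vertex shader
varying vec3 vRayDir;                 // normalised ray direction

float sampleVolume(vec3 p) {{
    // simplified single-slice lookup; a real atlas maps p.z to a tile
    return texture2D(uVolumeAtlas, p.xy).r;
}}

void main() {{
    vec4 acc = vec4(0.0);
    for (int i = 0; i < {steps}; i++) {{
        vec3 p = vRayOrigin + vRayDir * (float(i) / {float(steps)});
        float s = sampleVolume(p);
        {transfer}
        acc.rgb += (1.0 - acc.a) * c.a * c.rgb;   // front-to-back compositing
        acc.a   += (1.0 - acc.a) * c.a;
    }}
    gl_FragColor = acc;
}}
"""

# What a content developer declares (mirroring the X3D node), and what they get:
print(generate_fragment_shader({"raySteps": 256,
                                "renderStyle": "OpacityMapVolumeStyle"}))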

    Nonrigid reconstruction of 3D breast surfaces with a low-cost RGBD camera for surgical planning and aesthetic evaluation

    Accounting for 26% of all new cancer cases worldwide, breast cancer remains the most common form of cancer in women. Although early breast cancer has a favourable long-term prognosis, roughly a third of patients suffer from a suboptimal aesthetic outcome despite breast-conserving cancer treatment. Clinical-quality 3D modelling of the breast surface therefore assumes an increasingly important role in advancing treatment planning, prediction and evaluation of breast cosmesis. Yet existing 3D torso scanners are expensive and either infrastructure-heavy or subject to motion artefacts. In this paper we employ a single consumer-grade RGBD camera with an ICP-based registration approach to jointly align all points from a sequence of depth images non-rigidly. Subtle body deformation due to postural sway and respiration is successfully mitigated through regularised locally affine transformations, leading to higher geometric accuracy. We present results from 6 clinical cases in which our method compares well with the gold standard and outperforms a previous approach. We show that our method produces better reconstructions, qualitatively by visual assessment and quantitatively by consistently obtaining lower landmark error scores and yielding more accurate breast volume estimates.
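    To make the registration idea concrete, the sketch below shows the kind of energy that "regularised locally affine transformations" refers to: each source point carries its own 3x4 affine transform, a data term pulls transformed points onto their correspondences, and a stiffness term penalises differences between neighbouring transforms so the deformation stays smooth. This resembles the stiffness-regularised formulation of optimal-step nonrigid ICP; whether the paper uses exactly this energy is not stated in the abstract. The toy points, the stiffness weight alpha, and the plain gradient-descent solver are all assumptions, and in practice correspondences and the neighbour graph come from the depth sequence itself.

# Minimal numpy sketch of regularised locally affine alignment (illustrative).
import numpy as np

def energy_and_grad(A, src_h, targets, edges, alpha):
    """A: (n, 3, 4) per-point affines, src_h: (n, 4) homogeneous source points."""
    pred = np.einsum('nij,nj->ni', A, src_h)           # transformed points
    residual = pred - targets                          # data residuals
    grad = 2.0 * np.einsum('ni,nj->nij', residual, src_h)
    energy = np.sum(residual ** 2)
    for i, j in edges:                                 # stiffness (smoothness)
        diff = A[i] - A[j]
        energy += alpha * np.sum(diff ** 2)
        grad[i] += 2.0 * alpha * diff
        grad[j] -= 2.0 * alpha * diff
    return energy, grad

# Toy example: 4 surface points, neighbours chained, a small "breathing" shift.
src = np.array([[0.0, 0, 0], [0.3, 0, 0], [0.6, 0, 0], [1.0, 0, 0]])
src_h = np.hstack([src, np.ones((4, 1))])
targets = src + np.array([0.0, 0.05, 0.0])
edges = [(0, 1), (1, 2), (2, 3)]

A = np.tile(np.eye(3, 4), (4, 1, 1))                   # start from identity
for _ in range(300):                                   # plain gradient descent
    e, g = energy_and_grad(A, src_h, targets, edges, alpha=1.0)
    A -= 0.1 * g
print("final energy:", e)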

    A Survey on 3D Ultrasound Reconstruction Techniques

    This book chapter discusses 3D ultrasound reconstruction and visualization. First, the various types of 3D ultrasound systems are reviewed: mechanical, 2D array, position-tracked freehand, and untracked freehand. Second, the 3D ultrasound reconstruction pipeline used by current systems is discussed, covering data acquisition, data preprocessing, the reconstruction method, and 3D visualization; the reconstruction method and 3D visualization are emphasized. The reconstruction methods include pixel-based, volume-based, and function-based methods, together with their benefits and drawbacks. For 3D visualization, methods such as multiplanar reformatting, volume rendering, and surface rendering are presented. Lastly, applications in the medical field are reviewed as well.
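    As a concrete example of the pixel-based category, the sketch below implements a bare-bones pixel-nearest-neighbour bin-filling step for tracked freehand ultrasound: every pixel of every B-scan is mapped through its probe pose into a voxel grid, intensities landing in the same voxel are averaged, and untouched voxels are reported as holes for a separate hole-filling pass. It is an illustrative sketch, not taken from the chapter; the pose convention, millimetre units and grid layout are assumptions.

# Pixel-nearest-neighbour (pixel-based) reconstruction sketch.
import numpy as np

def reconstruct_pnn(frames, poses, grid_shape, voxel_size):
    """frames: list of (H, W) intensity images; poses: list of 4x4 transforms
    mapping image-plane coordinates (mm, z = 0) into volume coordinates (mm)."""
    acc = np.zeros(grid_shape, dtype=np.float64)   # summed intensities
    cnt = np.zeros(grid_shape, dtype=np.int64)     # hits per voxel

    for img, pose in zip(frames, poses):
        h, w = img.shape
        ys, xs = np.mgrid[0:h, 0:w]                # pixel centres in the plane
        pts = np.stack([xs.ravel(), ys.ravel(),
                        np.zeros(h * w), np.ones(h * w)], axis=0)
        vol_mm = (pose @ pts)[:3]                  # into volume space
        idx = np.round(vol_mm / voxel_size).astype(int)

        # keep only pixels that fall inside the voxel grid
        ok = np.all((idx >= 0) & (idx < np.array(grid_shape)[:, None]), axis=0)
        i, j, k = idx[:, ok]
        np.add.at(acc, (i, j, k), img.ravel()[ok])
        np.add.at(cnt, (i, j, k), 1)

    vol = np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
    return vol, cnt == 0                           # volume + hole mask

# Toy usage: one 64x64 frame dropped into a 64x64x16 grid via an identity pose.
frame = np.random.rand(64, 64)
vol, holes = reconstruct_pnn([frame], [np.eye(4)], (64, 64, 16), voxel_size=1.0)
print(vol.shape, holes.mean())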

    Determination of critical factors for fast and accurate 2D medical image deformation

    The advent of medical imaging technology enabled physicians to study patient anatomy non-invasively and revolutionized the medical community. As medical images have become digitized and their resolution has increased, software has been developed that allows physicians to explore their patients' image studies in an increasing number of ways, including viewing and exploring reconstructed three-dimensional models. Although this has been a boon to radiologists, who specialize in interpreting medical images, few software packages exist that provide fast and intuitive interaction for other physicians. In addition, although the users of these applications can view their patient data as it was at the time the scan was taken, the placement of the tissues during a surgical intervention is often different due to the position of the patient and the methods used to provide a better view of the surgical field. None of the commonly available medical image packages allow users to predict the deformation of the patient's tissues under those surgical conditions. This thesis analyzes the performance and accuracy of a less computationally intensive yet physically based deformation algorithm: the extended ChainMail algorithm. The proposed method allows users to load DICOM images from medical image studies, interactively classify the tissues in those images according to their properties under deformation, deform the tissues in two dimensions, and visualize the result. The method was evaluated using data provided by the Truth Cube experiment, in which a phantom made of material with properties similar to liver under deformation was placed under varying amounts of uniaxial strain, and CT scans were taken before and after the deformations. The deformation was performed on a single DICOM image from the study that had been manually classified, as well as on data sets generated from that original image. These generated data sets were ideally segmented versions of the phantom images that had been scaled to varying fidelities in order to evaluate the effect of image size on the algorithm's accuracy and execution time. Two variations of the extended ChainMail algorithm parameters were also implemented for each of the generated data sets in order to examine the effect of the parameters. The resultant deformations were compared with the actual deformations as determined by the Truth Cube experimenters. For both variations of the algorithm parameters, the predicted deformations at 5% uniaxial strain had an RMS error of a similar order of magnitude to the errors in a finite element analysis performed by the Truth Cube experimenters for the deformations at 18.25% strain. The average error was reduced by approximately 10-20% for the lower-fidelity data sets through the use of one of the parameter schemes, although the benefit decreased as the image size increased. When the algorithm was evaluated under 18.25% strain, the average errors were more than 8 times those of the finite element analysis. Qualitative analysis of the deformed images indicated differing degrees of accuracy across the ideal image set, with the largest displacements estimated closer to the initial point of deformation. This is hypothesized to be a result of the order in which deformation was processed for points in the image. The algorithm execution time was examined for the varying generated image fidelities. For a generated image that was approximately 18.5% of the size of the tissue in the original image, the execution time was less than 15 seconds; in comparison, the algorithm processing time for the full-scale image was over 3 hours. The analysis of the extended ChainMail algorithm for use in medical image deformation emphasizes the importance of the choice of algorithm parameters for the accuracy of the deformations and of data set size for the processing time.
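    For readers unfamiliar with ChainMail, the sketch below shows the basic 2D algorithm that the extended version builds on: grid elements are linked to their four neighbours, and when one element is dragged, each neighbour is moved just enough to satisfy minimum/maximum stretch and shear limits, with violations propagating outward breadth-first. It is a simplified illustration rather than the thesis' extended implementation (which adds per-tissue material parameters); the constraint values and unit grid spacing are assumptions.

# Basic 2D ChainMail propagation sketch (illustrative constraint values).
from collections import deque
import numpy as np

def chainmail_2d(points, moved_idx, new_pos,
                 d_min=0.7, d_max=1.3, shear=0.3):
    """points: (rows, cols, 2) element positions on a unit-spaced grid.
    Drags element (r, c) = moved_idx to new_pos and propagates constraints."""
    pts = points.copy()
    rows, cols, _ = pts.shape
    pts[moved_idx] = new_pos

    queue = deque([moved_idx])
    visited = {moved_idx}
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols) or (nr, nc) in visited:
                continue
            axis = 0 if dr != 0 else 1        # position component along the link
            sign = dr + dc                    # +1 or -1 along that component
            p, q = pts[r, c], pts[nr, nc].copy()
            # stretch constraint along the link axis
            lo, hi = p[axis] + sign * d_min, p[axis] + sign * d_max
            lo, hi = min(lo, hi), max(lo, hi)
            q[axis] = np.clip(q[axis], lo, hi)
            # shear constraint on the perpendicular axis
            other = 1 - axis
            q[other] = np.clip(q[other], p[other] - shear, p[other] + shear)
            if not np.allclose(q, pts[nr, nc]):
                pts[nr, nc] = q               # minimal move to satisfy limits
                visited.add((nr, nc))
                queue.append((nr, nc))        # disturbance propagates outward
    return pts

# Example: drag the centre of a 5x5 grid sideways and inspect the middle row.
grid = np.stack(np.meshgrid(np.arange(5.0), np.arange(5.0),
                            indexing='ij'), axis=-1)   # positions = indices
deformed = chainmail_2d(grid, (2, 2), np.array([2.0, 3.2]))
print(deformed[2])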