
    The residual STL volume as a metric to evaluate accuracy and reproducibility of anatomic models for 3D printing: application in the validation of 3D-printable models of maxillofacial bone from reduced radiation dose CT images.

    Background: The effects of reduced radiation dose CT on the generation of maxillofacial bone STL models for 3D printing are currently unknown. Images of two full-face transplantation patients scanned with non-contrast 320-detector-row CT were reconstructed at fractions of the acquisition radiation dose using noise-simulation software and both filtered back-projection (FBP) and Adaptive Iterative Dose Reduction 3D (AIDR3D). The maxillofacial bone STL model segmented by thresholding from AIDR3D images at 100 % dose was considered the reference. For all other dose/reconstruction-method combinations, a "residual STL volume" was calculated as the topologic subtraction of the STL model derived from that dataset from the reference, and correlated with radiation dose.
    Results: The residual volume decreased with increasing radiation dose and was lower for AIDR3D than for FBP reconstructions at all doses. As a fraction of the reference STL volume, the residual volume for AIDR3D reconstructions decreased from 2.9 % (20 % dose) to 1.4 % (50 % dose) in patient 1, and from 4.1 % to 1.9 %, respectively, in patient 2. For FBP reconstructions it decreased from 3.3 % (20 % dose) to 1.0 % (100 % dose) in patient 1, and from 5.5 % to 1.6 %, respectively, in patient 2. Its morphology resembled a thin shell on the osseous surface with an average thickness below 0.1 mm.
    Conclusion: The residual volume, a topologic difference metric for STL models of tissue depicted in DICOM images, supports that reducing CT dose by up to 80 % relative to the clinical acquisition, in conjunction with iterative reconstruction, yields maxillofacial bone models accurate for 3D printing.
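
    The residual-volume metric described above is essentially a Boolean mesh subtraction followed by a volume ratio. Below is a minimal sketch, assuming the trimesh Python library with a boolean backend (e.g. manifold3d) installed; the file names are hypothetical placeholders, not the study's data.

```python
# Sketch of the "residual STL volume" metric: the part of the reference
# model not covered by the reduced-dose model, as a fraction of the
# reference volume. Requires a trimesh boolean backend (e.g. manifold3d).
import trimesh

reference = trimesh.load("aidr3d_100pct_dose.stl")  # 100 %-dose AIDR3D model
test = trimesh.load("fbp_20pct_dose.stl")           # reduced-dose model

# Topologic subtraction: reference minus test leaves a thin residual shell
# on the osseous surface where the reduced-dose model under-segments bone.
residual = trimesh.boolean.difference([reference, test])

residual_fraction = residual.volume / reference.volume
print(f"Residual STL volume: {100 * residual_fraction:.1f} % of reference")
```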

    Master slave en-face OCT/SLO

    Master Slave optical coherence tomography (MS-OCT) is an OCT method that does not require resampling of data and can deliver en-face images from several depths simultaneously. As the MS-OCT method requires significant computational resources, the number of multiple-depth en-face images that can be produced in real time is limited. Here, we demonstrate progress in taking advantage of the parallel-processing feature of the MS-OCT technology. Harnessing the capabilities of graphics processing units (GPUs), information from 384 depth positions is acquired in one raster, with real-time display of up to 40 en-face OCT images. These exhibit resolution and sensitivity comparable to images produced using the conventional Fourier-domain-based method. The GPU facilitates versatile real-time selection of parameters, such as the depth positions of the 40 images out of the set of 384 depth locations, as well as their axial resolution. In each updated displayed frame, in parallel with the 40 en-face OCT images, a scanning laser ophthalmoscopy (SLO) lookalike image is presented together with two B-scan OCT images oriented along orthogonal directions. The thickness of the SLO lookalike image is dynamically determined by the number of en-face OCT images displayed in the frame and the differential axial distance chosen between them.
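
    To make the display arithmetic concrete, here is a small NumPy sketch, not the authors' GPU implementation, of selecting 40 of 384 en-face images and forming the SLO-lookalike image as their axial average; the array shapes, variable names, and the averaging step are illustrative assumptions.

```python
# Selecting N en-face slices from a processed depth stack and building an
# SLO-lookalike image whose effective thickness grows with both the number
# of slices and the axial spacing between them.
import numpy as np

rng = np.random.default_rng(0)
enface_stack = rng.random((384, 512, 512))  # (depth, y, x) after MS processing

def select_enface(stack, start=100, n_images=40, spacing=2):
    """Pick n_images en-face slices beginning at `start`, `spacing` apart."""
    idx = start + spacing * np.arange(n_images)
    return stack[idx]

selected = select_enface(enface_stack)

# The SLO lookalike integrates the selected slices axially.
slo_lookalike = selected.mean(axis=0)
print(selected.shape, slo_lookalike.shape)  # (40, 512, 512) (512, 512)
```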

    Three-dimensional analysis of mandibular landmarks, planes and shape, and the symphyseal changes associated with growth and orthodontic treatment

    OBJECTIVE: To test the reliability of 3D mandibular landmarks, reference planes, and surfaces, and to assess their correlation with conventional 2D cephalometric measurements; and to analyze changes in the three-dimensional shape of the symphysis due to growth and orthodontic treatment. METHODS: This was a retrospective analysis of CBCT scans of healthy orthodontic patients. Thirty-two subjects were included, 16 males and 16 females, with mean ages of 10.6 ± 1.5 years before treatment and 15.0 ± 0.9 years after treatment; the mean follow-up time was 4.3 years. Subjects were free of craniofacial anomalies and had no observable pathology on panoramic radiographs. Fifteen subjects had CVM stage 1 and 17 had CVM stage 2 before orthodontic treatment; all subjects had CVM stage 5 after treatment. In the first phase, 3D mandibular landmarks were digitized; planes and landmarks were constructed and compared with conventional 2D mandibular measurements. In the second phase, mandibles were isolated by removing surrounding structures. Pearson correlation and paired t-tests were performed to test for correlation and differences between 2D and 3D measurements, respectively. Statistical analysis was performed using SAS 9.4 software. MorphoJ software (Version 2.0, www.flywings.org.uk) was used for symphysis shape analysis, and Discriminant Function Analysis (DFA) between pre-treatment and post-treatment shapes was used for statistical analysis of the symphysis. RESULTS: We found statistically significant positive correlations between 2D and 3D pre-treatment ramus height (P = 0.01), post-treatment ramus height (P < 0.0001), pre-treatment corpus length (P = 0.0003), post-treatment corpus length (P = 0.04), pre-treatment gonial angle (P < 0.0001), and post-treatment gonial angle (P = 0.05). We also found statistically significant pre- to post-treatment differences in 2D ramus height (P = 0.001), 3D ramus height (P = 0.002), 2D corpus length (P < 0.01), and 3D corpus length (P < 0.01). Comparing symphysis shape between pre-treatment and post-treatment, we found no statistically significant difference (P = 0.99). CONCLUSION: These results demonstrated statistically significant positive correlations between certain 2D and 3D measurements, and the pre-treatment to post-treatment differences in 2D and 3D measurements showed consistent results. Symphysis shapes do break out as distinctly separate groups, but the differences between the means are small.
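
    The correlation and difference tests reported above map directly onto standard routines. Below is a hedged Python analogue (the study itself used SAS 9.4) using synthetic placeholder values standing in for the 32 subjects' ramus-height measurements.

```python
# Pearson correlation between paired 2D and 3D measurements, and a paired
# t-test for pre- vs post-treatment change. Data here are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ramus_2d = rng.normal(50.0, 4.0, size=32)            # 2D cephalometric (mm)
ramus_3d = ramus_2d + rng.normal(0.0, 1.5, size=32)  # correlated 3D values

r, p_corr = stats.pearsonr(ramus_2d, ramus_3d)
print(f"Pearson r = {r:.2f}, p = {p_corr:.4f}")

# Paired t-test: the same measurement taken pre- and post-treatment.
pre = ramus_3d
post = pre + rng.normal(4.3, 2.0, size=32)           # growth/treatment change
t, p_diff = stats.ttest_rel(post, pre)
print(f"paired t = {t:.2f}, p = {p_diff:.4f}")
```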

    Development of an augmented reality guided computer assisted orthopaedic surgery system

    This body of work documents the development of a proof-of-concept augmented reality guided computer assisted orthopaedic surgery system – ARgCAOS. After initial investigation, a visible-spectrum single-camera tool-mounted tracking system based upon fiducial planar markers was implemented. The use of visible-spectrum cameras, as opposed to the infra-red cameras typically used by surgical tracking systems, allowed the captured image to be streamed to a display in an intelligible fashion. The tracking information defined the location of physical objects relative to the camera, allowing virtual models to be overlaid onto the camera image. This produced a convincing augmented experience, whereby the virtual objects appeared to be within the physical world, moving with both the camera and markers as expected of physical objects. Analysis of the first-generation system identified both accuracy and graphical inadequacies, prompting the development of a second-generation system. This too was based upon a tool-mounted fiducial marker system, and improved performance to near-millimetre probing accuracy. A resection system was incorporated, and, utilising the tracking information, controlled resection was performed with sub-millimetre accuracy. Several complications resulted from the tool-mounted approach, so a third-generation system was developed. This final generation deployed a stereoscopic visible-spectrum camera system affixed to a head-mounted display worn by the user. The system augmented the natural view of the user, providing convincing and immersive three-dimensional guidance, with probing and resection accuracies of 0.55 ± 0.04 mm and 0.34 ± 0.04 mm, respectively.
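
    For illustration, visible-spectrum tracking from planar fiducial markers, as described above, can be prototyped with OpenCV's ArUco module (4.7+ API). This is a generic sketch under assumed camera intrinsics, marker size, and input file, not the thesis implementation.

```python
# Detect a planar fiducial marker in a camera frame and recover its 6-DoF
# pose relative to the camera; a virtual model rendered with this transform
# appears locked to the physical marker in the augmented view.
import cv2
import numpy as np

MARKER_SIZE = 0.03  # marker edge length in metres (assumption)
camera_matrix = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

# 3D corner coordinates of a planar marker centred at its own origin.
half = MARKER_SIZE / 2
object_points = np.array([[-half, half, 0], [half, half, 0],
                          [half, -half, 0], [-half, -half, 0]], dtype=np.float32)

frame = cv2.imread("camera_frame.png")  # placeholder input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners, ids, _ = detector.detectMarkers(gray)

for marker_corners in corners or []:
    # rvec/tvec give the marker pose in the camera coordinate frame.
    ok, rvec, tvec = cv2.solvePnP(object_points, marker_corners[0],
                                  camera_matrix, dist_coeffs)
    if ok:
        print("marker at", tvec.ravel(), "m from camera")
```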

    Virtual Reality Aided Mobile C-arm Positioning for Image-Guided Surgery

    Image-guided surgery (IGS) is a minimally invasive procedure based on a pre-operative volume in conjunction with intra-operative X-ray images, commonly captured by mobile C-arms to confirm surgical outcomes. Although some commercial navigation systems are currently employed, one critical issue of such systems is that they neglect the radiation exposure to the patient and surgeons. In practice, when one surgical stage is finished, several X-ray images have to be acquired repeatedly by the mobile C-arm to obtain the desired image, and excessive radiation exposure may increase the risk of complications. It is therefore necessary to develop a positioning system for mobile C-arms that achieves one-time imaging and avoids additional radiation exposure. In this dissertation, a mobile C-arm positioning system aided by virtual reality (VR) is proposed. The surface model of the patient is reconstructed by a camera mounted on the mobile C-arm. A novel registration method is proposed to align this model with the pre-operative volume based on a tracker, so that surgeons can visualize the hidden anatomy directly from the outside view and determine a reference pose for the C-arm. Considering the congested operating room, the C-arm is modeled as a manipulator with a movable base to maneuver the image intensifier to the desired pose. In the registration procedure above, intensity-based 2D/3D registration is used to transform the pre-operative volume into the coordinate system of the tracker. Although it provides high accuracy, its small capture range hinders clinical use because a good initial guess is required. To address this problem, a robust and fast initialization method is proposed, based on automatic tracking-based initialization and multi-resolution estimation in the frequency domain. This hardware-software integrated approach provides near-optimal transformation parameters for intensity-based registration. To determine the pose of the mobile C-arm, high-quality visualization is necessary to locate the pathology within the hidden anatomy. A novel dimensionality-reduction method based on sparse representation is proposed for the design of multi-dimensional transfer functions in direct volume rendering. It not only achieves performance similar to conventional methods, but can also handle large data sets.
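
    As a simplified stand-in for the frequency-domain estimation idea mentioned above, classic phase correlation recovers the translation between two images from the normalized cross-power spectrum, which can seed an intensity-based registration whose capture range is small. The dissertation's method is multi-resolution and tracking-aided; this sketch omits both.

```python
# Phase correlation: estimate the 2D translation between two images via the
# normalized cross-power spectrum. The correlation surface peaks at the shift.
import numpy as np

def phase_correlation(fixed, moving):
    """Estimate the integer-pixel shift of `moving` relative to `fixed`."""
    F = np.fft.fft2(fixed)
    M = np.fft.fft2(moving)
    cross = M * np.conj(F)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts (FFT wrap-around).
    if dy > fixed.shape[0] // 2:
        dy -= fixed.shape[0]
    if dx > fixed.shape[1] // 2:
        dx -= fixed.shape[1]
    return dy, dx

rng = np.random.default_rng(2)
fixed = rng.random((256, 256))
moving = np.roll(fixed, shift=(17, -5), axis=(0, 1))  # known displacement
print(phase_correlation(fixed, moving))  # -> (17, -5)
```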

    Performance Factors in Neurosurgical Simulation and Augmented Reality Image Guidance

    Virtual reality surgical simulators have seen widespread adoption in an effort to provide safe, cost-effective and realistic practice of surgical skills. However, the majority of these simulators focus on training low-level technical skills, providing only prototypical surgical cases. For many complex procedures, this approach fails to represent the anatomical variations that present clinically and does not challenge users’ higher-level cognitive skills important for navigation and targeting. Surgical simulators offer the means not only to simulate any case conceivable, but to test novel approaches and examine factors that influence performance. Unfortunately, there is a void in the literature surrounding these questions. This thesis was motivated by the need to expand the role of surgical simulators to provide users with clinically relevant scenarios and to evaluate human performance in relation to image guidance technologies, patient-specific anatomy, and cognitive abilities. To this end, various tools and methodologies were developed to examine cognitive abilities and knowledge, simulate procedures, and guide complex interventions, all within a neurosurgical context. The first chapter provides an introduction to the material. The second chapter describes the development and evaluation of a virtual anatomical training and examination tool. The results suggest that learning occurs and that spatial reasoning ability is an important performance predictor, but subordinate to anatomical knowledge. The third chapter outlines the development of automation tools to enable efficient simulation studies and data management. In the fourth chapter, subjects performed abstract targeting tasks on ellipsoid targets with and without augmented reality guidance. While the guidance tool improved accuracy, performance with the tool was strongly tied to target depth estimation – an important consideration for implementation and training with similar guidance tools. In the fifth chapter, neurosurgically experienced subjects were recruited to perform simulated ventriculostomies. Results showed that anatomical variations influence performance and could impact outcomes. Augmented reality guidance showed no marked improvement in performance, but exhibited a mild learning curve, indicating that additional training may be warranted. The final chapter summarizes the work presented. Our results and novel evaluative methodologies lay the groundwork for further investigation into simulators as versatile research tools to explore performance factors in simulated surgical procedures.