    How accurate are the fusion of Cone-beam CT and 3-D stereophotographic images?

    Background: Cone-beam Computed Tomography (CBCT) and stereophotography are two of the latest imaging modalities available for three-dimensional (3-D) visualization of craniofacial structures. However, CBCT provides only limited information on surface texture. This can be overcome by combining the bone images derived from CBCT with 3-D photographs. The objectives of this study were 1) to evaluate the feasibility of integrating 3-D photographs and CBCT images, 2) to assess the degree of error that may occur during this process, and 3) to identify facial regions that would be most appropriate for 3-D image registration. Methodology: CBCT scans and stereophotographic images from 29 patients were used for this study. Two 3-D images corresponding to the skin and bone were extracted from the CBCT data. The 3-D photograph was superimposed on the CBCT skin image using relatively immobile areas of the face as a reference. 3-D colour maps were used to assess the accuracy of superimposition, where distance differences between the CBCT and 3-D photograph were recorded as the signed average and the Root Mean Square (RMS) error. Principal Findings: The signed average and RMS of the distance differences between the registered surfaces were -0.018 (±0.129) mm and 0.739 (±0.239) mm respectively. The largest errors were found in the areas surrounding the lips and eyes, while minimal errors were noted on the forehead, the root of the nose and the zygoma. Conclusions: CBCT and 3-D photographic data can be successfully fused with minimal error. When compared to the RMS, the signed average was found to under-represent the registration error. The resulting virtual 3-D composite craniofacial models permit concurrent assessment of bone and soft tissues during diagnosis and treatment planning. © 2012 Jayaratne et al.
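The conclusion that the signed average under-represents registration error follows directly from the two definitions: deviations in opposite directions cancel in the signed average but not in the RMS, which squares away the sign. A minimal sketch, using hypothetical per-vertex surface distances rather than the study's data:

```python
import math

def signed_average(diffs):
    """Mean of signed surface-distance differences; outward and inward
    deviations cancel, so the result can look deceptively small."""
    return sum(diffs) / len(diffs)

def rms_error(diffs):
    """Root Mean Square of the distances; squaring removes the sign,
    so opposite deviations no longer cancel."""
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical per-vertex distances (mm) between a CBCT skin surface
# and a registered 3-D photograph: half outward, half inward.
diffs = [0.8, -0.8, 0.6, -0.6, 0.7, -0.7]

print(signed_average(diffs))         # 0.0 -- under-represents the misfit
print(round(rms_error(diffs), 3))    # 0.705 -- reflects the true spread
```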

    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for understanding brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data. To facilitate study of vascular development in the neonatal period, a set of image analysis algorithms has been developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not effectively done by existing algorithms designed for the adult brain because the contrast between grey and white matter is reversed. This causes pixels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the cortex in neonates. The performance of the method is investigated through a detailed landmark study. To facilitate study of cortical development, a cortical surface registration algorithm for aligning cortical surfaces is developed. The method first inflates extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
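The segmentation described above builds on expectation-maximization; the thesis's specific partial-volume correction is not reproduced here, but the underlying two-class EM loop can be sketched in a 1-D form. The class count, initialisation strategy and synthetic intensities below are illustrative assumptions, not the dissertation's method:

```python
import math, random

def em_two_class(intensities, iters=40):
    """Fit a two-component 1-D Gaussian mixture by EM: alternate soft
    voxel assignment (E-step) with parameter re-estimation (M-step)."""
    xs = sorted(intensities)
    n = len(xs)
    mu = [xs[n // 4], xs[3 * n // 4]]            # spread initial means
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
    sigma, pi = [sd, sd], [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each class for each voxel
        resp = []
        for x in intensities:
            p = [pi[k] / (sigma[k] * math.sqrt(2 * math.pi))
                 * math.exp(-(x - mu[k]) ** 2 / (2 * sigma[k] ** 2))
                 for k in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: update means, variances and mixing weights
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, intensities)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2
                      for r, x in zip(resp, intensities)) / nk
            sigma[k] = max(math.sqrt(var), 1e-6)
            pi[k] = nk / len(intensities)
    return mu, sigma, pi

# Synthetic "grey" and "white" intensity samples (illustrative values)
random.seed(0)
data = [random.gauss(30, 3) for _ in range(200)] + \
       [random.gauss(70, 3) for _ in range(200)]
mu, sigma, pi = em_two_class(data)
```

In a real neonatal pipeline the same loop runs on voxel intensities with spatial priors, and the thesis adds an explicit model for partial-volume voxels that plain EM, as here, would mislabel.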

    3D fusion of histology to multi-parametric MRI for prostate cancer imaging evaluation and lesion-targeted treatment planning

    Multi-parametric magnetic resonance imaging (mpMRI) of localized prostate cancer has the potential to support detection, staging and localization of tumors, as well as selection, delivery and monitoring of treatments. Delineating prostate cancer tumors on imaging could potentially further support the clinical workflow by enabling precise monitoring of tumor burden in active-surveillance patients, optimized targeting of image-guided biopsies, and targeted delivery of treatments to decrease morbidity and improve outcomes. Evaluating the performance of mpMRI for prostate cancer imaging and delineation ideally includes comparison to an accurately registered reference standard, such as prostatectomy histology, for the locations of tumor boundaries on mpMRI. There are key gaps in knowledge regarding how to accurately register histological reference standards to imaging, and consequently further gaps in knowledge regarding the suitability of mpMRI for tasks, such as tumor delineation, that require such reference standards for evaluation. To obtain an understanding of the magnitude of the mpMRI-histology registration problem, we quantified the position, orientation and deformation of whole-mount histology sections relative to the formalin-fixed tissue slices from which they were cut. We found that (1) modeling isotropic scaling accounted for the majority of the deformation with a further small but statistically significant improvement from modeling affine transformation, and (2) due to the depth (mean±standard deviation (SD) 1.1±0.4 mm) and orientation (mean±SD 1.5±0.9°) of the sectioning, the assumption that histology sections are cut from the front faces of tissue slices, common in previous approaches, introduced a mean error of 0.7 mm. 
To determine the potential consequences of seemingly small registration errors such as described above, we investigated the impact of registration accuracy on the statistical power of imaging validation studies using a co-registered spatial reference standard (e.g. histology images) by deriving novel statistical power formulae that incorporate registration error. We illustrated, through a case study modeled on a prostate cancer imaging trial at our centre, that submillimeter differences in registration error can have a substantial impact on the required sample sizes (and therefore also the study cost) for studies aiming to detect mpMRI signal differences due to 0.5–2.0 cm³ prostate tumors. With the aim of achieving highly accurate mpMRI-histology registrations without disrupting the clinical pathology workflow, we developed a three-stage method for accurately registering 2D whole-mount histology images to pre-prostatectomy mpMRI that allowed flexible placement of cuts during slicing for pathology and avoided the assumption that histology sections are cut from the front faces of tissue slices. The method comprised a 3D reconstruction of histology images, followed by 3D–3D ex vivo–in vivo and in vivo–in vivo image transformations. The 3D reconstruction method minimized fiducial registration error between cross-sections of non-disruptive histology- and ex-vivo-MRI-visible strand-shaped fiducials to reconstruct histology images into the coordinate system of an ex vivo MR image. We quantified the mean±standard deviation target registration error of the reconstruction to be 0.7±0.4 mm, based on the post-reconstruction misalignment of intrinsic landmark pairs. We also compared our fiducial-based reconstruction to an alternative reconstruction based on mutual-information-based registration, an established method for multi-modality registration.
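The thesis's registration-aware power formulae are not reproduced here, but the mechanism they capture can be illustrated with the classical two-sample size calculation: anything that dilutes the detectable signal difference (such as registration error mixing tumor and benign voxels) inflates the required sample size. All numbers below are hypothetical:

```python
import math

def sample_size_per_arm(delta, sigma, z_alpha=1.96, z_power=0.84):
    """Classical two-sample size formula for detecting a mean
    difference delta under noise sigma (alpha = 0.05, power = 0.80).
    This is the textbook formula, not the thesis's derivation."""
    return math.ceil(2 * (z_alpha + z_power) ** 2 * sigma ** 2 / delta ** 2)

# Hypothetical effect: worse registration blurs the tumor/benign
# boundary, shrinking the observable mpMRI signal difference delta.
print(sample_size_per_arm(delta=1.0, sigma=2.0))    # 63 subjects per arm
print(sample_size_per_arm(delta=0.75, sigma=2.0))   # 112 -- same biology,
                                                    # worse registration
```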
We found that the mean target registration error for the fiducial-based method (0.7 mm) was lower than that for the mutual-information-based method (1.2 mm), and that the mutual-information-based method was less robust to initialization error due to multiple sources of error, including the optimizer and the mutual information similarity metric. The second stage of the histology–mpMRI registration used interactively defined 3D–3D deformable thin-plate-spline transformations to align ex vivo to in vivo MR images to compensate for deformation due to endorectal MR coil positioning, surgical resection and formalin fixation. The third stage used interactively defined 3D–3D rigid or thin-plate-spline transformations to co-register in vivo mpMRI images to compensate for patient motion and image distortion. The combined mean registration error of the histology–mpMRI registration was quantified to be 2 mm using manually identified intrinsic landmark pairs. Our data set, comprising mpMRI, target volumes contoured by four observers and co-registered contoured and graded histology images, was used to quantify the positive predictive values and variability of observer scoring of lesions following the Prostate Imaging Reporting and Data System (PI-RADS) guidelines, the variability of target volume contouring, and appropriate expansion margins from target volumes to achieve coverage of histologically defined cancer. The analysis of lesion scoring showed that a PI-RADS overall cancer likelihood of 5, denoting “highly likely cancer”, had a positive predictive value of 85% for Gleason 7 cancer (and 93% for lesions with volumes >0.5 cm³ measured on mpMRI) and that PI-RADS scores were positively correlated with histological grade (ρ=0.6). However, the analysis also showed interobserver differences in PI-RADS score of 0.6 to 1.2 (on a 5-point scale) and an agreement kappa value of only 0.30.
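Target registration error, as reported above, is simply the residual distance between corresponding intrinsic landmarks after the evaluated transformation is applied. A sketch of that computation; the landmark coordinates and the stand-in transform are made up for illustration:

```python
import math

def target_registration_error(fixed, moving, transform):
    """Mean and SD of distances between fixed landmarks and their
    transformed moving counterparts -- the TRE statistic."""
    dists = [math.dist(f, transform(m)) for f, m in zip(fixed, moving)]
    mean = sum(dists) / len(dists)
    sd = math.sqrt(sum((d - mean) ** 2 for d in dists) / len(dists))
    return mean, sd

# Hypothetical landmarks: the registration leaves a 0.5 mm residual
# shift along x at every landmark.
fixed = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 5.0)]
moving = [(0.5, 0.0, 0.0), (10.5, 0.0, 0.0), (0.5, 10.0, 5.0)]
identity = lambda p: p   # stand-in for the registration under evaluation
mean, sd = target_registration_error(fixed, moving, identity)
print(round(mean, 3), round(sd, 3))   # 0.5 0.0
```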
The analysis of target volume contouring showed that target volume contours with suitable margins can achieve near-complete histological coverage for detected lesions, despite high interobserver spatial variability in target volumes. Prostate cancer imaging and delineation have the potential to support multiple stages in the management of localized prostate cancer. Targeted biopsy procedures with optimized targeting based on tumor delineation may help distinguish patients who need treatment from those who need active surveillance. Ongoing monitoring of tumor burden based on delineation in patients undergoing active surveillance may help identify those who need to progress to therapy early, while the cancer is still curable. Preferentially targeting therapies at delineated target volumes may lower the morbidity associated with aggressive cancer treatment and improve outcomes in low- to intermediate-risk patients. Measurements of the accuracy and variability of lesion scoring and target volume contouring on mpMRI will clarify its value in supporting these roles.

    Medical image registration and soft tissue deformation for image guided surgery system

    In parallel with the developments in imaging modalities, image-guided surgery (IGS) can now provide the surgeon with high-quality three-dimensional images depicting human anatomy. Although IGS is now in wide use in neurosurgery, some limitations remain that must be overcome before it can be employed in more general minimally invasive procedures. In this thesis, we have made several contributions to the field of medical image registration and brain tissue deformation modeling. From the methodological point of view, medical image registration algorithms can be classified into feature-based and intensity-based methods. One of the challenges faced by feature-based registration is determining which specific type of feature is desired for a given task and imaging type. For this reason, a point-set registration method using both point and curve features is proposed, which has the accuracy of registration based on points and the robustness of registration based on lines or curves. We have also tackled the problem of rigid registration of multimodal images using intensity-based similarity measures. Mutual information (MI) has emerged in recent years as a popular similarity metric that is widely recognized in the field of medical image registration. Unfortunately, it ignores spatial information contained in the images, such as edges and corners, that might be useful for registration. We introduce a new similarity metric, called the Adaptive Mutual Information (AMI) measure, which incorporates gradient spatial information. Salient pixels in regions with high gradient values contribute more to the estimation of the mutual information of the image pairs being registered. Experimental results showed that our proposed method improves registration accuracy and is more robust for noisy images that deviate substantially from the reference image.
Continuing in this direction, we further improve the technique to simultaneously use all information obtained from multiple features. Using multiple spatial features, the proposed algorithm is less sensitive to the effects of noise and some inherent variations, giving more accurate registration. Brain shift is a complex phenomenon, and there are many different causes of brain deformation. We have investigated the pattern of brain deformation with respect to location and magnitude, and considered the implications of this pattern for correcting brain deformation in IGS systems. A computational finite element analysis was carried out to analyze the deformation and stress tensor experienced by the brain tissue during surgical operations. Finally, we have developed a prototype visualization display and navigation platform for the interpretation of IGS. The system is based upon Qt (a cross-platform GUI toolkit) and integrates VTK (an object-oriented visualization library) as the rendering kernel. Having constructed this visualization software platform, we have laid a foundation for future research to extend the system with brain tissue deformation modeling.
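Plain mutual information, the baseline the AMI metric improves on, is typically estimated from a joint intensity histogram of the two images being registered. The gradient weighting proposed in the thesis is not reproduced here; this sketch shows only the standard MI estimate, with an illustrative bin count and toy images:

```python
import math, random

def mutual_information(img_a, img_b, bins=8):
    """Estimate MI between two equally sized intensity lists from a
    joint histogram: sum of p(a,b) * log(p(a,b) / (p(a) * p(b)))."""
    n = len(img_a)
    lo_a, hi_a = min(img_a), max(img_a)
    lo_b, hi_b = min(img_b), max(img_b)

    def bin_of(x, lo, hi):
        return min(int((x - lo) / (hi - lo + 1e-12) * bins), bins - 1)

    joint, pa, pb = {}, {}, {}
    for a, b in zip(img_a, img_b):
        i, j = bin_of(a, lo_a, hi_a), bin_of(b, lo_b, hi_b)
        joint[(i, j)] = joint.get((i, j), 0) + 1
        pa[i] = pa.get(i, 0) + 1
        pb[j] = pb.get(j, 0) + 1
    return sum(c / n * math.log((c / n) / ((pa[i] / n) * (pb[j] / n)))
               for (i, j), c in joint.items())

# A perfectly aligned pair shares maximal information; shuffling one
# image destroys spatial correspondence and the MI estimate drops.
img = list(range(64))
shuffled = img[:]
random.seed(1)
random.shuffle(shuffled)
print(mutual_information(img, img) > mutual_information(img, shuffled))
```

An intensity-based registration loop would maximize this quantity over candidate transformations; AMI additionally weights the contribution of each pixel by local gradient magnitude.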

    Retrospective registration of tomographic brain images

    In modern clinical practice, the clinician can make use of a vast array of specialized imaging techniques supporting diagnosis and treatment. For various reasons, the same anatomy of one patient is sometimes imaged more than once, either using the same imaging apparatus (monomodal acquisition) or different ones (multimodal acquisition). To make simultaneous use of the acquired images, it is often necessary to bring these images into registration, i.e., to align their anatomical coordinate systems. This thesis addresses the problem of medical image registration as it concerns human brain images. The chapters include a survey of recent literature, CT/MR registration using mathematical image features (edges and ridges), monomodal SPECT registration, and CT/MR/SPECT/PET registration using image features extracted by means of mathematical morphology.

    Medical Image Registration Using Deep Neural Networks

    Registration is a fundamental problem in medical image analysis wherein images are transformed spatially to align corresponding anatomical structures in each image. Recently, the development of learning-based methods, which exploit deep neural networks and can outperform classical iterative methods, has received considerable interest from the research community. This interest is due in part to the substantially reduced computational requirements that learning-based methods have during inference, which makes them particularly well-suited to real-time registration applications. Despite these successes, learning-based methods can perform poorly when applied to images from different modalities where intensity characteristics can vary greatly, such as in magnetic resonance and ultrasound imaging. Moreover, registration performance is often demonstrated on well-curated datasets, closely matching the distribution of the training data. This makes it difficult to determine whether demonstrated performance accurately represents the generalization and robustness required for clinical use. This thesis presents learning-based methods which address the aforementioned difficulties by utilizing intuitive point-set-based representations, user interaction and meta-learning-based training strategies. Primarily, this is demonstrated with a focus on the non-rigid registration of 3D magnetic resonance imaging to sparse 2D transrectal ultrasound images to assist in the delivery of targeted prostate biopsies. While conventional systematic prostate biopsy methods can require many samples to be taken to confidently produce a diagnosis, tumor-targeted approaches have shown improved patient, diagnostic, and disease management outcomes with fewer samples. However, the available intraoperative transrectal ultrasound imaging alone is insufficient for accurate targeted guidance. 
    As such, this exemplar application is used to illustrate the effectiveness of sparse, interactively acquired ultrasound imaging for real-time, interventional registration. The presented methods are found to improve registration accuracy, relative to the state of the art, with substantially lower computation time, and require only a fraction of the data at inference. As a result, these methods are particularly attractive given their potential for real-time registration in interventional applications.
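A core building block of such learning-based registration networks is a differentiable resampler that warps the moving image by a predicted displacement field. A 1-D, pure-Python sketch of that resampling step; the network that would predict the displacements is omitted, and the image values are toy data:

```python
def linear_warp_1d(moving, displacement):
    """Resample a 1-D moving image at x + u(x) with linear
    interpolation, clamping at the borders -- the warp step applied
    to a network's predicted displacement field u."""
    n = len(moving)
    warped = []
    for x, u in enumerate(displacement):
        pos = min(max(x + u, 0.0), n - 1.0)   # sampling position
        i0 = int(pos)                          # left neighbour
        i1 = min(i0 + 1, n - 1)                # right neighbour
        w = pos - i0                           # interpolation weight
        warped.append((1 - w) * moving[i0] + w * moving[i1])
    return warped

# A uniform displacement of +1 sample shifts the image content left
# by one position (the border value repeats).
print(linear_warp_1d([0.0, 1.0, 2.0, 3.0, 4.0], [1.0] * 5))
# [1.0, 2.0, 3.0, 4.0, 4.0]
```

During training, a similarity loss between the warped moving image and the fixed image is backpropagated through this interpolation into the network's displacement prediction.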