
    Medical Image Registration Using Deep Neural Networks

    Registration is a fundamental problem in medical image analysis wherein images are transformed spatially to align corresponding anatomical structures in each image. Recently, the development of learning-based methods, which exploit deep neural networks and can outperform classical iterative methods, has received considerable interest from the research community. This interest is due in part to the substantially reduced computational requirements that learning-based methods have during inference, which makes them particularly well-suited to real-time registration applications. Despite these successes, learning-based methods can perform poorly when applied to images from different modalities where intensity characteristics can vary greatly, such as in magnetic resonance and ultrasound imaging. Moreover, registration performance is often demonstrated on well-curated datasets, closely matching the distribution of the training data. This makes it difficult to determine whether demonstrated performance accurately represents the generalization and robustness required for clinical use. This thesis presents learning-based methods which address the aforementioned difficulties by utilizing intuitive point-set-based representations, user interaction and meta-learning-based training strategies. Primarily, this is demonstrated with a focus on the non-rigid registration of 3D magnetic resonance imaging to sparse 2D transrectal ultrasound images to assist in the delivery of targeted prostate biopsies. While conventional systematic prostate biopsy methods can require many samples to be taken to confidently produce a diagnosis, tumor-targeted approaches have shown improved patient, diagnostic, and disease management outcomes with fewer samples. However, the available intraoperative transrectal ultrasound imaging alone is insufficient for accurate targeted guidance. 
As such, this exemplar application is used to illustrate the effectiveness of sparse, interactively-acquired ultrasound imaging for real-time, interventional registration. The presented methods are found to improve registration accuracy relative to the state of the art, with substantially lower computation time, while requiring a fraction of the data at inference. As a result, these methods are particularly attractive given their potential for real-time registration in interventional applications.
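The learning-based registration described above can be sketched, in a much simplified form, as warping a moving image by a predicted displacement field and scoring it with an image-similarity term plus a smoothness penalty. The function names and the MSE/penalty choices below are illustrative assumptions, not the thesis's actual losses.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(moving, disp):
    """Warp a 2D moving image by a dense displacement field.

    moving: (H, W) array; disp: (2, H, W) row/column displacements.
    """
    h, w = moving.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([rows + disp[0], cols + disp[1]])
    return map_coordinates(moving, coords, order=1, mode="nearest")

def registration_loss(fixed, moving, disp, alpha=0.1):
    """Intensity MSE between fixed and warped images, plus a smoothness
    penalty on the displacement field's spatial gradients."""
    warped = warp(moving, disp)
    similarity = np.mean((fixed - warped) ** 2)
    smoothness = sum(np.mean(g ** 2) for g in np.gradient(disp, axis=(1, 2)))
    return similarity + alpha * smoothness
```

In a learning-based setting, a network would predict `disp` and be trained to minimize a loss of this kind; note that an identity field on identical images gives zero loss.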

    3D fusion of histology to multi-parametric MRI for prostate cancer imaging evaluation and lesion-targeted treatment planning

    Multi-parametric magnetic resonance imaging (mpMRI) of localized prostate cancer has the potential to support detection, staging and localization of tumors, as well as selection, delivery and monitoring of treatments. Delineating prostate cancer tumors on imaging could potentially further support the clinical workflow by enabling precise monitoring of tumor burden in active-surveillance patients, optimized targeting of image-guided biopsies, and targeted delivery of treatments to decrease morbidity and improve outcomes. Evaluating the performance of mpMRI for prostate cancer imaging and delineation ideally includes comparison to an accurately registered reference standard, such as prostatectomy histology, for the locations of tumor boundaries on mpMRI. There are key gaps in knowledge regarding how to accurately register histological reference standards to imaging, and consequently further gaps in knowledge regarding the suitability of mpMRI for tasks, such as tumor delineation, that require such reference standards for evaluation. To obtain an understanding of the magnitude of the mpMRI-histology registration problem, we quantified the position, orientation and deformation of whole-mount histology sections relative to the formalin-fixed tissue slices from which they were cut. We found that (1) modeling isotropic scaling accounted for the majority of the deformation with a further small but statistically significant improvement from modeling affine transformation, and (2) due to the depth (mean±standard deviation (SD) 1.1±0.4 mm) and orientation (mean±SD 1.5±0.9°) of the sectioning, the assumption that histology sections are cut from the front faces of tissue slices, common in previous approaches, introduced a mean error of 0.7 mm. 
To determine the potential consequences of seemingly small registration errors such as described above, we investigated the impact of registration accuracy on the statistical power of imaging validation studies using a co-registered spatial reference standard (e.g. histology images) by deriving novel statistical power formulae that incorporate registration error. We illustrated, through a case study modeled on a prostate cancer imaging trial at our centre, that submillimeter differences in registration error can have a substantial impact on the required sample sizes (and therefore also the study cost) for studies aiming to detect mpMRI signal differences due to 0.5 – 2.0 cm3 prostate tumors. With the aim of achieving highly accurate mpMRI-histology registrations without disrupting the clinical pathology workflow, we developed a three-stage method for accurately registering 2D whole-mount histology images to pre-prostatectomy mpMRI that allowed flexible placement of cuts during slicing for pathology and avoided the assumption that histology sections are cut from the front faces of tissue slices. The method comprised a 3D reconstruction of histology images, followed by 3D–3D ex vivo–in vivo and in vivo–in vivo image transformations. The 3D reconstruction method minimized fiducial registration error between cross-sections of non-disruptive histology- and ex-vivo-MRI-visible strand-shaped fiducials to reconstruct histology images into the coordinate system of an ex vivo MR image. We quantified the mean±standard deviation target registration error of the reconstruction to be 0.7±0.4 mm, based on the post-reconstruction misalignment of intrinsic landmark pairs. We also compared our fiducial-based reconstruction to an alternative reconstruction based on mutual-information-based registration, an established method for multi-modality registration. 
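The effect of registration error on sample size can be sketched with a toy normal-approximation formula in which registration error is treated as independent additive noise that inflates the measurement variance. This is a simplified stand-in, not the thesis's actual derivation; the function name and defaults are assumptions.

```python
from math import ceil
from statistics import NormalDist

def sample_size(delta, sigma, reg_sd, alpha=0.05, power=0.8):
    """Per-group sample size for a two-sample comparison of means
    (normal approximation), where registration error of standard
    deviation reg_sd is modeled as extra independent noise.
    (Hypothetical simplification of the thesis's power formulae.)"""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z(power)            # power requirement
    variance = sigma ** 2 + reg_sd ** 2
    return ceil(2 * (z_alpha + z_beta) ** 2 * variance / delta ** 2)
```

Even in this toy model, adding registration noise with half the signal's standard deviation inflates the required sample size by 25%, consistent with the abstract's point that submillimeter error differences can substantially change study cost.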
We found that the mean target registration error for the fiducial-based method (0.7 mm) was lower than that for the mutual-information-based method (1.2 mm), and that the mutual-information-based method was less robust to initialization error due to multiple sources of error, including the optimizer and the mutual information similarity metric. The second stage of the histology–mpMRI registration used interactively defined 3D–3D deformable thin-plate-spline transformations to align ex vivo to in vivo MR images to compensate for deformation due to endorectal MR coil positioning, surgical resection and formalin fixation. The third stage used interactively defined 3D–3D rigid or thin-plate-spline transformations to co-register in vivo mpMRI images to compensate for patient motion and image distortion. The combined mean registration error of the histology–mpMRI registration was quantified to be 2 mm using manually identified intrinsic landmark pairs. Our data set, comprising mpMRI, target volumes contoured by four observers and co-registered contoured and graded histology images, was used to quantify the positive predictive values and variability of observer scoring of lesions following the Prostate Imaging Reporting and Data System (PI-RADS) guidelines, the variability of target volume contouring, and appropriate expansion margins from target volumes to achieve coverage of histologically defined cancer. The analysis of lesion scoring showed that a PI-RADS overall cancer likelihood of 5, denoting “highly likely cancer”, had a positive predictive value of 85% for Gleason 7 cancer (and 93% for lesions with volumes >0.5 cm3 measured on mpMRI) and that PI-RADS scores were positively correlated with histological grade (ρ=0.6). However, the analysis also showed interobserver differences in PI-RADS score of 0.6 to 1.2 (on a 5-point scale) and an agreement kappa value of only 0.30.
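A thin-plate-spline transformation of the kind used in the second and third registration stages can be fitted from landmark pairs with SciPy's RBF interpolator; the landmark coordinates below are invented for illustration.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical 3D landmark pairs (source -> target), e.g. in mm.
src = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10], [5, 5, 5]])
dst = src + np.array([1.0, 0.5, -0.5])  # a pure translation, for illustration

# Fit a 3D thin-plate-spline interpolating the landmark correspondences.
tps = RBFInterpolator(src, dst, kernel="thin_plate_spline")

# Map an arbitrary point under the fitted transformation.
query = np.array([[2.0, 3.0, 4.0]])
mapped = tps(query)
```

Because the thin-plate spline includes an affine polynomial term, a pure translation of the landmarks is reproduced exactly everywhere, not just at the landmark locations.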
The analysis of target volume contouring showed that target volume contours with suitable margins can achieve near-complete histological coverage for detected lesions, despite the presence of high interobserver spatial variability in target volumes. Prostate cancer imaging and delineation have the potential to support multiple stages in the management of localized prostate cancer. Targeted biopsy procedures with optimized targeting based on tumor delineation may help distinguish patients who need treatment from those who need active surveillance. Ongoing monitoring of tumor burden based on delineation in patients undergoing active surveillance may help identify those who need to progress to therapy early, while the cancer is still curable. Preferentially targeting therapies at delineated target volumes may lower the morbidity associated with aggressive cancer treatment and improve outcomes in low- to intermediate-risk patients. Measurements of the accuracy and variability of lesion scoring and target volume contouring on mpMRI will clarify its value in supporting these roles.

    Real-time Prostate Motion Tracking For Robot-assisted Laparoscopic Radical Prostatectomy

    Radical prostatectomy (RP) is the gold standard treatment for localized prostate cancer (PCa). Recently, the emergence of minimally invasive techniques such as Laparoscopic Radical Prostatectomy (LRP) and Robot-Assisted Laparoscopic Radical Prostatectomy (RARP) has improved the outcomes of prostatectomy. However, it remains difficult for surgeons to make informed decisions regarding resection margins and nerve sparing, since the location of the tumour within the organ is not usually visible in a laparoscopic view. While MRI enables visualization of the salient structures and cancer foci, its efficacy in LRP is reduced unless it is fused into a stereoscopic view such that homologous structures overlap. Registration of the MRI image and peri-operative ultrasound image, either via manual visual alignment or using a fully automated registration, can potentially be exploited to bring the pre-operative information into alignment with the patient coordinate system at the beginning of the procedure. While doing so, prostate motion needs to be compensated in real time to synchronize the stereoscopic view with the pre-operative MRI during the prostatectomy procedure. In this thesis, two tracking methods are proposed to assess rigid prostate rotation and translation during prostatectomy. The first method presents a 2D-to-3D point-to-line registration algorithm to measure prostate rotation and translation with respect to an initial 3D TRUS image. The second method investigates a point-based stereoscopic tracking technique to compensate for rigid prostate motion so that the same motion can be applied to the pre-operative images.
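A point-to-line registration of the kind mentioned above can be sketched as a least-squares fit of a rigid transform minimizing perpendicular distances from transformed points to measurement lines. The rotation-vector parameterization and solver below are illustrative choices, not necessarily those used in the thesis.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def point_to_line_residuals(params, pts, line_pts, line_dirs):
    """Perpendicular distances from rigidly transformed points to lines.

    params: 6-vector (rotation vector, translation); each line is given by
    a point line_pts[i] and a unit direction line_dirs[i].
    """
    rot = Rotation.from_rotvec(params[:3])
    moved = rot.apply(pts) + params[3:]
    diff = moved - line_pts
    # subtract the component of diff along each (unit) line direction
    along = np.sum(diff * line_dirs, axis=1, keepdims=True) * line_dirs
    return np.linalg.norm(diff - along, axis=1)

def register_points_to_lines(pts, line_pts, line_dirs):
    """Solve for the rigid transform minimizing point-to-line distances."""
    result = least_squares(point_to_line_residuals, np.zeros(6),
                           args=(pts, line_pts, line_dirs))
    return result.x
```

With enough points and varied line directions, the six rigid degrees of freedom are constrained and the residuals drive the solution onto the measurement lines.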

    Validation Strategies Supporting Clinical Integration of Prostate Segmentation Algorithms for Magnetic Resonance Imaging

    Segmentation of the prostate in medical images is useful for prostate cancer diagnosis and therapy guidance. However, manual segmentation of the prostate is laborious and time-consuming, with high inter-observer variability. The focus of this thesis was on accuracy, reproducibility and procedure time measurement for prostate segmentation on T2-weighted endorectal magnetic resonance imaging, and assessment of the potential of a computer-assisted segmentation technique to be translated to clinical practice for prostate cancer management. We collected an image data set from prostate cancer patients, with prostate borders manually delineated by one observer on all the images and by two other observers on a subset of images. We used a complementary set of error metrics to measure the different types of observed segmentation errors. We compared expert manual segmentation as well as semi-automatic and automatic segmentation approaches before and after manual editing by expert physicians. We recorded the time needed for user interaction to initialize the semi-automatic algorithm, algorithm execution, and manual editing as necessary. The measured errors for the algorithms compared favourably with observed differences between manual segmentations. The measured average editing times for the computer-assisted segmentations were lower than the fully manual segmentation time, and the algorithms reduced inter-observer variability compared to manual segmentation. The accuracy of the computer-assisted approaches was near to or within the range of observed variability in manual segmentation. The recorded procedure time for prostate segmentation was reduced using computer-assisted segmentation followed by manual editing, compared to the time required for fully manual segmentation.
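One widely used metric for comparing segmentations (one plausible member of the complementary set of error metrics referred to above) is the Dice similarity coefficient, which measures the overlap of two binary masks:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|); 1.0 means perfect overlap."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())
```

Overlap metrics like Dice are typically complemented by boundary-distance metrics, since two masks can overlap well overall while still disagreeing sharply at localized boundary regions.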

    New Technology and Techniques for Needle-Based Magnetic Resonance Image-Guided Prostate Focal Therapy

    Localized disease is the most common diagnosis of prostate cancer, and unfortunately the optimal treatment for these men is not yet certain. Magnetic resonance imaging (MRI)-guided focal laser ablation (FLA) therapy is a promising potential treatment option for select men with localized prostate cancer, and may result in fewer side effects than whole-gland therapies, while still achieving oncologic control. The objective of this thesis was to develop methods of accurately guiding needles to the prostate within the bore of a clinical MRI scanner for MRI-guided FLA therapy. To achieve this goal, a mechatronic needle guidance system was developed. The system enables precise targeting of prostate tumours through angulated trajectories and insertion of needles with the patient in the bore of a clinical MRI scanner. After confirming sufficient accuracy in phantoms, and good MRI-compatibility, the system was used to guide needles for MRI-guided FLA therapy in eight patients. Results from this case series demonstrated an improvement in needle guidance time and ease of needle delivery compared to conventional approaches. Methods of more reliable treatment planning were sought, leading to the development of a systematic treatment planning method, and Monte Carlo simulations of needle placement uncertainty. The result was an estimate of the maximum size of focal target that can be confidently ablated using the mechatronic needle guidance system, leading to better guidelines for patient eligibility. These results also quantified the benefit that could be gained with improved techniques for needle guidance.
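A Monte Carlo simulation of needle placement uncertainty, in the spirit of the one described above, might estimate the probability that a spherical target is fully covered by a spherical ablation given Gaussian needle-tip error. The geometry and parameter names here are illustrative assumptions, not the thesis's actual model.

```python
import numpy as np

def coverage_probability(target_radius, ablation_radius, tip_sd,
                         n_trials=100_000, seed=0):
    """Monte Carlo estimate of the probability that a spherical target is
    fully covered by a spherical ablation centred at the (noisy) needle tip.

    Full coverage of a concentric target requires the tip error magnitude
    to be at most (ablation_radius - target_radius). tip_sd is the
    per-axis standard deviation of isotropic Gaussian placement error.
    """
    rng = np.random.default_rng(seed)
    errors = rng.normal(scale=tip_sd, size=(n_trials, 3))
    miss_distance = np.linalg.norm(errors, axis=1)
    return np.mean(miss_distance <= ablation_radius - target_radius)
```

Inverting this relationship (finding the largest `target_radius` giving, say, 95% coverage probability) yields the kind of maximum-confidently-ablatable-target estimate described in the abstract.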

    Medical image registration using unsupervised deep neural network: A scoping literature review

    In medicine, image registration is vital in image-guided interventions and other clinical applications. However, it is a difficult problem to address; with the advent of machine learning, considerable progress in algorithmic performance has recently been achieved for medical image registration. The implementation of deep neural networks provides an opportunity for some medical applications, such as conducting image registration in less time with high accuracy, playing a key role in targeting tumors during operations. The current study presents a comprehensive scoping review of the state-of-the-art literature on medical image registration based on unsupervised deep neural networks, encompassing all the related studies published in this field to date. Here, we summarize the latest developments and applications of unsupervised deep-learning-based registration methods in the medical field. Fundamental concepts, techniques, statistical analyses from different viewpoints, novelties, and future directions are discussed in this comprehensive scoping review. We hope this review helps readers interested in this field gain deep insight into it.
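Mutual information, the classical multi-modality similarity metric that unsupervised registration networks often adopt (in differentiable form) as a training loss, can be estimated from a joint intensity histogram. A minimal sketch:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two images, in nats.

    MI = sum over bins of p(a, b) * log(p(a, b) / (p(a) * p(b))).
    High MI indicates a strong statistical dependence between the two
    images' intensities, even when their contrasts differ.
    """
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_joint = joint / joint.sum()
    p_a = p_joint.sum(axis=1, keepdims=True)   # marginal of image A
    p_b = p_joint.sum(axis=0, keepdims=True)   # marginal of image B
    nonzero = p_joint > 0
    return np.sum(p_joint[nonzero] *
                  np.log(p_joint[nonzero] / (p_a @ p_b)[nonzero]))
```

For an image compared with itself, MI equals the image's (binned) entropy; for statistically independent images it approaches zero.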

    Fusion of magnetic resonance and ultrasound images for endometriosis detection

    Endometriosis is a gynecologic disorder that typically affects women of reproductive age and is associated with chronic pelvic pain and infertility. In the context of pre-operative diagnosis and guided surgery, endometriosis is a typical example of a pathology that requires the use of both magnetic resonance (MR) and ultrasound (US) modalities. These modalities are used side by side because they contain complementary information. However, MRI and US images have different spatial resolutions, fields of view and contrasts, and are corrupted by different kinds of noise, which results in important challenges for their analysis by radiologists. The fusion of MR and US images is a way of facilitating the task of medical experts and improving pre-operative diagnosis and surgical mapping. The objective of this PhD thesis is to propose a new automatic fusion method for MRI and US images. First, we assume that the MR and US images to be fused are aligned, i.e., there is no geometric distortion between these images. We propose a fusion method for MR and US images which aims at combining the advantages of each modality, i.e., good contrast and signal-to-noise ratio for the MR image and good spatial resolution for the US image. The proposed algorithm is based on an inverse problem, performing a super-resolution of the MR image and a denoising of the US image. A polynomial function is introduced to model the relationship between the gray levels of the MR and US images. However, the proposed fusion method is very sensitive to registration errors. Thus, in a second step, we introduce a joint fusion and registration method for MR and US images. Registration is a complicated task in practical applications. The proposed MR/US image fusion performs jointly super-resolution of the MR image and despeckling of the US image, and is able to automatically account for registration errors.
A polynomial function is used to link the ultrasound and MR images in the fusion process, while an appropriate similarity measure is introduced to handle the registration problem. The proposed registration is based on a non-rigid transformation combining a local elastic B-spline model and a global affine transformation. The fusion and registration operations are performed alternately, simplifying the underlying optimization problem. The interest of the joint fusion and registration is analyzed using synthetic and experimental phantom images.
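The polynomial link between MR and US gray levels can be illustrated with an ordinary least-squares polynomial fit on paired intensities. The synthetic intensities and coefficients below are invented for illustration and are not the thesis's estimated model.

```python
import numpy as np

# Hypothetical paired intensities sampled from co-registered MR and US
# images (normalized to [0, 1]); the quadratic relation is made up.
mr = np.linspace(0.0, 1.0, 100)
us = 0.2 + 0.5 * mr + 0.3 * mr ** 2

# Fit a degree-2 polynomial mapping MR gray levels to US gray levels.
coeffs = np.polyfit(mr, us, deg=2)       # highest-degree coefficient first
us_predicted = np.polyval(coeffs, mr)    # US intensities predicted from MR
```

In a joint fusion model, a mapping of this kind lets the MR super-resolution term and the US denoising term share information despite the two modalities' very different contrasts.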

    Deep learning applications in the prostate cancer diagnostic pathway

    Prostate cancer (PCa) is the second most frequently diagnosed cancer in men worldwide and the fifth leading cause of cancer death in men, with an estimated 1.4 million new cases in 2020 and 375,000 deaths. The risk factors most strongly associated with PCa are advancing age, family history, race, and mutations of the BRCA genes. Since these risk factors are not modifiable, early and accurate diagnosis is a key objective of the PCa diagnostic pathway. In the UK, clinical guidelines recommend multiparametric magnetic resonance imaging (mpMRI) of the prostate for use by radiologists to detect, score, and stage lesions that may correspond to clinically significant PCa (CSPCa), prior to confirmatory biopsy and histopathological grading. Computer-aided diagnosis (CAD) of PCa using artificial intelligence algorithms holds a currently unrealized potential to improve upon the diagnostic accuracy achievable by radiologist assessment of mpMRI, improve reporting consistency between radiologists, and reduce reporting time. In this thesis, we build and evaluate deep learning-based CAD systems for the PCa diagnostic pathway, which address gaps identified in the literature. First, we introduce a novel patient-level classification framework, PCF, which uses a stacked ensemble of convolutional neural networks (CNNs) and support vector machines (SVMs) to assign a probability of having CSPCa to patients, using mpMRI and clinical features. Second, we introduce AutoProstate, a deep-learning-powered framework for automated PCa assessment and reporting; AutoProstate utilizes biparametric MRI and clinical data to populate an automatic diagnostic report containing segmentations of the whole prostate, prostatic zones, and candidate CSPCa lesions, as well as several derived characteristics that are clinically valuable.
Finally, as automatic segmentation algorithms have not yet reached the desired robustness for clinical use, we introduce interactive click-based segmentation applications for the whole prostate and prostatic lesions, with potential uses in diagnosis, active surveillance progression monitoring, and treatment planning.
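A click-based segmentation can be approximated, in a far simpler form than the deep-learning applications described above, by growing a region outward from the clicked pixel. The tolerance-based flood fill below is a hypothetical stand-in for illustration, not the thesis's algorithm.

```python
import numpy as np
from scipy import ndimage

def click_segment(image, click, tol=0.1):
    """Grow a segmentation from a user click: keep the connected component
    of pixels whose intensity lies within +/- tol of the clicked value.

    image: 2D float array; click: (row, col) tuple of the clicked pixel.
    (A simple illustrative stand-in for interactive segmentation.)
    """
    seed_value = image[click]
    candidate = np.abs(image - seed_value) <= tol
    labels, _ = ndimage.label(candidate)       # label connected components
    return labels == labels[click]             # keep the clicked component
```

Real interactive methods refine a learned segmentation with each click rather than relying on raw intensity, but the click-seeded connected-component idea is the common starting point.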