    Real-time multimodal image registration with partial intraoperative point-set data

    We present Free Point Transformer (FPT), a deep neural network architecture for non-rigid point-set registration. Consisting of two modules, a global feature extraction module and a point transformation module, FPT does not assume explicit constraints based on point vicinity, thereby overcoming a common requirement of previous learning-based point-set registration methods. FPT is designed to accept unordered and unstructured point-sets with a variable number of points and uses a "model-free" approach without heuristic constraints. Training FPT is flexible and involves minimizing an intuitive unsupervised loss function, but supervised, semi-supervised, and partially- or weakly-supervised training are also supported. This flexibility makes FPT amenable to multimodal image registration problems where the ground-truth deformations are difficult or impossible to measure. In this paper, we demonstrate the application of FPT to non-rigid registration of prostate magnetic resonance (MR) imaging and sparsely-sampled transrectal ultrasound (TRUS) images. The registration errors were 4.71 mm and 4.81 mm for complete TRUS imaging and sparsely-sampled TRUS imaging, respectively. The results indicate superior accuracy to the alternative rigid and non-rigid registration algorithms tested, with substantially lower computation time. The rapid inference possible with FPT makes it particularly suitable for applications where real-time registration is beneficial.
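    The two-module design described above lends itself to a short sketch. The PyTorch snippet below is a rough illustration rather than the published FPT implementation: it assumes a PointNet-style per-point MLP with max-pooling for the global feature module, a per-point MLP conditioned on the pooled features for the transformation module, and a Chamfer-style unsupervised loss; all layer sizes and names are illustrative.

```python
# Sketch of a two-module point-set registration network: a global feature
# extraction module and a point transformation module. Layer sizes and the
# Chamfer-style unsupervised loss are illustrative assumptions.
import torch
import torch.nn as nn


class GlobalFeatureModule(nn.Module):
    """Encode an unordered point-set (B, N, 3) into one global feature."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, pts):
        per_point = self.mlp(pts)              # (B, N, feat_dim)
        return per_point.max(dim=1).values     # order-invariant pooling


class PointTransformModule(nn.Module):
    """Predict a per-point displacement conditioned on both global features."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 2 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 3),
        )

    def forward(self, moving_pts, feat):
        feat = feat.unsqueeze(1).expand(-1, moving_pts.shape[1], -1)
        return moving_pts + self.mlp(torch.cat([moving_pts, feat], dim=-1))


def chamfer_loss(a, b):
    """Symmetric Chamfer distance between point-sets (B, N, 3) and (B, M, 3)."""
    d = torch.cdist(a, b)                      # (B, N, M) pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()


encoder, transformer = GlobalFeatureModule(), PointTransformModule()
fixed, moving = torch.rand(2, 1024, 3), torch.rand(2, 800, 3)   # variable N is fine
feat = torch.cat([encoder(fixed), encoder(moving)], dim=-1)
warped = transformer(moving, feat)
loss = chamfer_loss(warped, fixed)             # unsupervised training signal
```

    Because the pooling is order-invariant and the displacement MLP is applied per point, the sketch accepts unordered point-sets with differing numbers of points, mirroring the property described in the abstract.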

    Medical Image Registration Using Deep Neural Networks

    Registration is a fundamental problem in medical image analysis wherein images are transformed spatially to align corresponding anatomical structures in each image. Recently, the development of learning-based methods, which exploit deep neural networks and can outperform classical iterative methods, has received considerable interest from the research community. This interest is due in part to the substantially reduced computational requirements that learning-based methods have during inference, which makes them particularly well-suited to real-time registration applications. Despite these successes, learning-based methods can perform poorly when applied to images from different modalities where intensity characteristics can vary greatly, such as in magnetic resonance and ultrasound imaging. Moreover, registration performance is often demonstrated on well-curated datasets closely matching the distribution of the training data, which makes it difficult to determine whether the demonstrated performance accurately represents the generalization and robustness required for clinical use. This thesis presents learning-based methods which address the aforementioned difficulties by utilizing intuitive point-set-based representations, user interaction, and meta-learning-based training strategies. Primarily, this is demonstrated with a focus on the non-rigid registration of 3D magnetic resonance imaging to sparse 2D transrectal ultrasound images to assist in the delivery of targeted prostate biopsies. While conventional systematic prostate biopsy methods can require many samples to be taken to confidently produce a diagnosis, tumor-targeted approaches have shown improved patient, diagnostic, and disease management outcomes with fewer samples. However, the available intraoperative transrectal ultrasound imaging alone is insufficient for accurate targeted guidance. As such, this exemplar application is used to illustrate the effectiveness of sparse, interactively-acquired ultrasound imaging for real-time, interventional registration. The presented methods are found to improve registration accuracy relative to the state of the art, with substantially lower computation time, while requiring a fraction of the data at inference. As a result, these methods are particularly attractive given their potential for real-time registration in interventional applications.
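    Among the training strategies named in this abstract, meta-learning is the most algorithmic; the hedged sketch below shows a generic Reptile-style outer update that adapts a registration network to each task (for example, data from one scanner or site) and nudges the shared weights toward the adapted ones. The task structure, inner-step count, and learning rates are assumptions for illustration, not the thesis' actual procedure.

```python
# Generic Reptile-style meta-learning update for a registration network.
# The toy network, task structure, and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def reptile_step(model, tasks, loss_fn, inner_steps=5, inner_lr=1e-3, meta_lr=0.1):
    """One outer update over a list of tasks, each task being a list of
    (moving, fixed) training pairs. Hypothetical interface."""
    base = {k: p.detach().clone() for k, p in model.named_parameters()}
    deltas = {k: torch.zeros_like(p) for k, p in base.items()}

    for task in tasks:
        # start each task from the shared initialization
        with torch.no_grad():
            for k, p in model.named_parameters():
                p.copy_(base[k])
        opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
        for moving, fixed in task[:inner_steps]:        # a few adaptation steps
            opt.zero_grad()
            loss_fn(model(moving), fixed).backward()
            opt.step()
        with torch.no_grad():
            for k, p in model.named_parameters():
                deltas[k] += p.detach() - base[k]

    # move the shared initialization toward the average adapted weights
    with torch.no_grad():
        for k, p in model.named_parameters():
            p.copy_(base[k] + meta_lr * deltas[k] / len(tasks))


# Toy usage: a linear stand-in for a registration network and two synthetic tasks.
net = nn.Linear(3, 3)
tasks = [[(torch.rand(16, 3), torch.rand(16, 3)) for _ in range(5)] for _ in range(2)]
reptile_step(net, tasks, loss_fn=F.mse_loss)
```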

    Multimodal T2w and DWI Prostate Gland Automated Registration

    Multiparametric magnetic resonance imaging (mpMRI) is emerging as a promising tool in the clinical pathway of prostate cancer (PCa). The registration between a structural and a functional imaging modality, such as T2-weighted (T2w) and diffusion-weighted imaging (DWI), is fundamental in the development of an mpMRI-based computer-aided diagnosis (CAD) system for PCa. Here, we propose an automated method to register the prostate gland in T2w and DWI image sequences by a landmark-based affine registration and a non-parametric diffeomorphic registration. An expert operator manually segmented the prostate gland in both modalities on a dataset of 20 patients. Target registration error and the Jaccard index, which measures the overlap between masks, were evaluated pre- and post-registration, resulting in improvements of 44% and 21%, respectively. In the future, the proposed method could be useful in the framework of a CAD system for PCa detection and characterization in mpMRI.
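    As a small aside on the evaluation, the two reported metrics are straightforward to compute; the NumPy sketch below shows a plain implementation of target registration error over corresponding landmarks and of the Jaccard index over binary prostate masks. The array shapes and toy masks are illustrative assumptions.

```python
# Plain implementations of the two evaluation metrics named in the abstract:
# target registration error and the Jaccard index. Toy data for illustration.
import numpy as np


def target_registration_error(landmarks_a, landmarks_b):
    """Mean Euclidean distance between corresponding landmarks, shape (N, 3)."""
    return np.linalg.norm(landmarks_a - landmarks_b, axis=1).mean()


def jaccard_index(mask_a, mask_b):
    """Intersection over union of two boolean masks of equal shape."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return intersection / union


# e.g. compare prostate mask overlap before/after registering DWI to T2w
t2w_mask = np.zeros((64, 64, 32), dtype=bool); t2w_mask[20:40, 20:40, 10:20] = True
dwi_mask = np.zeros((64, 64, 32), dtype=bool); dwi_mask[22:42, 18:38, 11:21] = True
print(f"Jaccard index: {jaccard_index(t2w_mask, dwi_mask):.3f}")
```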

    MR to Ultrasound Registration for Image-Guided Prostate Biopsy

    Transrectal ultrasound (TRUS) guided prostate biopsy is the standard approach for diagnosis of prostate cancer (PCa). However, due to the lack of image contrast of prostate tumors, it often results in false negatives. Magnetic resonance imaging (MRI) has been considered a promising imaging modality for noninvasive identification of PCa, since it can provide high sensitivity and specificity for the detection of early-stage PCa. Our main objective is to develop a registration method for 3D MR and TRUS images, allowing generation of volumetric 3D maps of targets identified in 3D MR images to be biopsied using 3D TRUS images. We proposed an image-based non-rigid registration approach which employs the modality independent neighborhood descriptor (MIND) as the local similarity feature. An efficient duality-based convex optimization scheme was introduced to extract the deformations. The registration accuracy was evaluated using 20 patient images by calculating the target registration error (TRE) using manually identified corresponding intrinsic fiducials. Additional performance metrics (DSC, MAD, and MAXD) were also calculated by comparing the manually segmented MR and TRUS prostate surfaces in the registered images. Experimental results showed that the proposed method yielded an overall median TRE of 1.76 mm. In addition, we proposed a surface-based registration method, which first makes use of an initial rigid registration of 3D MR to TRUS using six manually placed corresponding landmarks in each image. Following the manual initialization, the two prostate surfaces are segmented from the 3D MR and TRUS images and then non-rigidly registered using a thin-plate spline algorithm. The registration accuracy was evaluated using 17 patient images by measuring TRE. Experimental results show that the proposed method yielded an overall mean TRE of 2.24 mm, which compares favorably with the clinical requirement of an error of less than 2.5 mm.
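    The surface-based variant hinges on a thin-plate spline warp fitted to corresponding surface points after the rigid, landmark-based initialization. The sketch below illustrates that step with SciPy's RBFInterpolator using a thin-plate-spline kernel; the synthetic correspondences, smoothing value, and target points are assumptions for illustration only, not the authors' pipeline.

```python
# Thin-plate spline warp fitted to corresponding MR and TRUS surface points,
# then applied to MR-defined biopsy targets. Synthetic data for illustration.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
mr_surface = rng.uniform(-1, 1, size=(200, 3))                      # MR prostate surface points
trus_surface = mr_surface + 0.05 * rng.standard_normal((200, 3))    # corresponding TRUS points

# TPS warp mapping MR coordinates to TRUS coordinates; the vector-valued
# output handles x, y, z jointly.
tps = RBFInterpolator(mr_surface, trus_surface,
                      kernel="thin_plate_spline", smoothing=1e-3)

mr_targets = rng.uniform(-0.5, 0.5, size=(5, 3))                    # lesion targets defined on MR
trus_targets = tps(mr_targets)                                      # biopsy locations in TRUS space

# TRE against known ground truth would be the mean Euclidean distance
# between warped and true target positions; here we check the surface fit.
print(np.linalg.norm(tps(mr_surface) - trus_surface, axis=1).mean())
```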

    Medical image registration using unsupervised deep neural network: A scoping literature review

    In medicine, image registration is vital in image-guided interventions and other clinical applications. However, it is a difficult problem to address; with the advent of machine learning, considerable progress in algorithmic performance has recently been achieved for medical image registration. The implementation of deep neural networks provides an opportunity for some medical applications, such as conducting image registration in less time with high accuracy, playing a key role in countering tumors during the operation. The current study presents a comprehensive scoping review of the state-of-the-art literature on medical image registration based on unsupervised deep neural networks, encompassing all the related studies published in this field to date. Here, we have tried to summarize the latest developments and applications of unsupervised deep learning-based registration methods in the medical field. Fundamental and main concepts, techniques, statistical analyses from different viewpoints, novelties, and future directions are elaborately discussed in this comprehensive scoping review. In addition, this review aims to help active readers who are interested in this field gain deep insight into this exciting area.
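    Most of the unsupervised methods surveyed in this literature share one training signal: warp the moving image with a predicted displacement field and minimize an image dissimilarity term plus a smoothness penalty, with no ground-truth deformations. The sketch below illustrates that idea with a toy 2D setup; the MSE similarity term, the finite-difference smoothness penalty, and the single-convolution "network" are simplifying assumptions (real methods typically use U-Net backbones and richer similarity metrics such as NCC or mutual information).

```python
# Toy unsupervised deformable registration objective: warp the moving image
# with a predicted displacement field, then penalise image dissimilarity
# plus non-smoothness of the field. Network and weights are assumptions.
import torch
import torch.nn.functional as F


def warp(moving, flow):
    """Warp a 2D image batch (B, 1, H, W) with a displacement field (B, 2, H, W)."""
    b, _, h, w = moving.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack([xs, ys], dim=0).float().unsqueeze(0) + flow     # pixel coords
    grid = torch.stack([2 * grid[:, 0] / (w - 1) - 1,                   # normalise to [-1, 1]
                        2 * grid[:, 1] / (h - 1) - 1], dim=-1)
    return F.grid_sample(moving, grid, align_corners=True)


def unsupervised_loss(moved, fixed, flow, smooth_weight=0.01):
    similarity = F.mse_loss(moved, fixed)            # image term (MSE here; NCC/MI are common)
    smoothness = (flow[:, :, 1:, :] - flow[:, :, :-1, :]).pow(2).mean() + \
                 (flow[:, :, :, 1:] - flow[:, :, :, :-1]).pow(2).mean()
    return similarity + smooth_weight * smoothness


net = torch.nn.Conv2d(2, 2, kernel_size=3, padding=1)   # toy stand-in for a U-Net
fixed, moving = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
flow = net(torch.cat([fixed, moving], dim=1))            # predicted displacement field
loss = unsupervised_loss(warp(moving, flow), fixed, flow)
loss.backward()
```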