Label-driven weakly-supervised learning for multimodal deformable image registration
Spatially aligning medical images from different modalities remains a
challenging task, especially for intraoperative applications that require fast
and robust algorithms. We propose a weakly-supervised, label-driven formulation
for learning 3D voxel correspondence from higher-level label correspondence,
thereby bypassing classical intensity-based image similarity measures. During
training, a convolutional neural network is optimised to output a dense
displacement field (DDF) that warps a set of available anatomical labels from
the moving image to match their corresponding counterparts in the fixed image.
These label pairs, including solid organs, ducts, vessels, point landmarks and
other ad hoc structures, are only required at training time and can be
spatially aligned by minimising a cross-entropy function of the warped moving
label and the fixed label. During inference, the trained network takes a new
image pair to predict an optimal DDF, resulting in a fully-automatic,
label-free, real-time and deformable registration. For interventional
applications where large global transformations prevail, we also propose a
neural network architecture that jointly optimises the global and local
displacements. Experimental results are presented based on cross-validated
registrations of 111 pairs of T2-weighted magnetic resonance images and 3D
transrectal ultrasound images from prostate cancer patients with a total of
over 4000 anatomical labels, yielding a median target registration error of 4.2
mm on landmark centroids and a median Dice of 0.88 on prostate glands.
Comment: Accepted to ISBI 201
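The training objective described above can be sketched in a few lines: a dense displacement field (DDF) warps the moving label, and a cross-entropy term compares the warped label with the fixed label. This is only an illustrative NumPy sketch under stated assumptions; the published method predicts the DDF with a convolutional network, which is omitted here, and all function names are hypothetical.

```python
import numpy as np

def warp_label(label, ddf):
    """Warp a 3D binary label with a dense displacement field.

    label: (D, H, W) binary array (moving label).
    ddf:   (D, H, W, 3) displacement in voxels, mapping fixed-image
           coordinates to moving-image coordinates (backward warping).
    Nearest-neighbour sampling keeps the output binary.
    """
    D, H, W = label.shape
    zz, yy, xx = np.meshgrid(np.arange(D), np.arange(H), np.arange(W),
                             indexing="ij")
    src = np.rint(np.stack([zz, yy, xx], axis=-1) + ddf).astype(int)
    # Clamp to the volume so out-of-bounds samples take border values.
    for k, size in enumerate((D, H, W)):
        src[..., k] = np.clip(src[..., k], 0, size - 1)
    return label[src[..., 0], src[..., 1], src[..., 2]]

def label_cross_entropy(warped, fixed, eps=1e-7):
    """Binary cross-entropy between a warped moving label and a fixed label."""
    p = np.clip(warped.astype(float), eps, 1 - eps)
    t = fixed.astype(float)
    return float(-np.mean(t * np.log(p) + (1 - t) * np.log(1 - p)))

# Toy example: a small cube shifted by 2 voxels along the first axis.
moving = np.zeros((16, 16, 16)); moving[4:8, 4:8, 4:8] = 1
fixed = np.zeros((16, 16, 16)); fixed[6:10, 4:8, 4:8] = 1
ddf = np.zeros((16, 16, 16, 3)); ddf[..., 0] = -2  # sample 2 voxels back
warped = warp_label(moving, ddf)  # now matches the fixed label
```

In training, the loss would be minimised with respect to the network parameters that produce `ddf`; here the correct field is supplied by hand purely to show that aligning the labels drives the cross-entropy toward zero.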
Real-time multimodal image registration with partial intraoperative point-set data
We present Free Point Transformer (FPT) - a deep neural network architecture for non-rigid point-set registration. Consisting of two modules, a global feature extraction module and a point transformation module, FPT does not assume explicit constraints based on point vicinity, thereby overcoming a common requirement of previous learning-based point-set registration methods. FPT is designed to accept unordered and unstructured point-sets with a variable number of points and uses a "model-free" approach without heuristic constraints. Training FPT is flexible and involves minimizing an intuitive unsupervised loss function, but supervised, semi-supervised, and partially- or weakly-supervised training are also supported. This flexibility makes FPT amenable to multimodal image registration problems where the ground-truth deformations are difficult or impossible to measure. In this paper, we demonstrate the application of FPT to non-rigid registration of prostate magnetic resonance (MR) imaging and sparsely-sampled transrectal ultrasound (TRUS) images. The registration errors were 4.71 mm and 4.81 mm for complete TRUS imaging and sparsely-sampled TRUS imaging, respectively. The results indicate superior accuracy to the alternative rigid and non-rigid registration algorithms tested and substantially lower computation time. The rapid inference possible with FPT makes it particularly suitable for applications where real-time registration is beneficial.
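A minimal sketch of the kind of correspondence-free, unsupervised loss such point-set registration can minimise is a symmetric Chamfer distance, which requires no point ordering and accepts sets of different sizes. This is an assumption-laden illustration of the general idea, not a reproduction of the FPT loss or network:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3).

    No correspondence or ordering is assumed: each point is matched to
    its nearest neighbour in the other set.
    """
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M)
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())

rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))
shifted = pts + np.array([0.5, 0.0, 0.0])
# Identical sets score zero; a displaced copy scores strictly higher.
```

Because the loss depends only on nearest-neighbour distances, the predicted transformation can be trained end-to-end without ground-truth deformations, which is exactly the setting the abstract describes for MR-TRUS data.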
Prostate biopsy tracking with deformation estimation
Transrectal biopsies under 2D ultrasound (US) control are the current
clinical standard for prostate cancer diagnosis. The isoechogenic nature of
prostate carcinoma makes it necessary to sample the gland systematically,
resulting in a low sensitivity. Also, it is difficult for the clinician to
follow the sampling protocol accurately under 2D US control and the exact
anatomical location of the biopsy cores is unknown after the intervention.
Tracking systems for prostate biopsies make it possible to generate biopsy
distribution maps for intra- and post-interventional quality control and 3D
visualisation of histological results for diagnosis and treatment planning.
They can also guide the clinician toward non-ultrasound targets. In this paper,
a volume-swept 3D US based tracking system for fast and accurate estimation of
prostate tissue motion is proposed. The entirely image-based system solves the
patient motion problem with an a priori model of rectal probe kinematics.
Prostate deformations are estimated with elastic registration to maximize
accuracy. The system is robust with only 17 registration failures out of 786
(2%) biopsy volumes acquired from 47 patients during biopsy sessions. Accuracy
was evaluated to 0.76 ± 0.52 mm using manually segmented fiducials on 687
registered volumes stemming from 40 patients. A clinical protocol for assisted
biopsy acquisition was designed and implemented as a biopsy assistance system,
which makes it possible to overcome the drawbacks of the standard biopsy procedure.
Comment: Medical Image Analysis (2011), epub ahead of print
Medical Image Registration Using Deep Neural Networks
Registration is a fundamental problem in medical image analysis wherein images are transformed spatially to align corresponding anatomical structures in each image. Recently, the development of learning-based methods, which exploit deep neural networks and can outperform classical iterative methods, has received considerable interest from the research community. This interest is due in part to the substantially reduced computational requirements that learning-based methods have during inference, which makes them particularly well-suited to real-time registration applications. Despite these successes, learning-based methods can perform poorly when applied to images from different modalities where intensity characteristics can vary greatly, such as in magnetic resonance and ultrasound imaging. Moreover, registration performance is often demonstrated on well-curated datasets, closely matching the distribution of the training data. This makes it difficult to determine whether demonstrated performance accurately represents the generalization and robustness required for clinical use.
This thesis presents learning-based methods which address the aforementioned difficulties by utilizing intuitive point-set-based representations, user interaction and meta-learning-based training strategies. Primarily, this is demonstrated with a focus on the non-rigid registration of 3D magnetic resonance imaging to sparse 2D transrectal ultrasound images to assist in the delivery of targeted prostate biopsies. While conventional systematic prostate biopsy methods can require many samples to be taken to confidently produce a diagnosis, tumor-targeted approaches have shown improved patient, diagnostic, and disease management outcomes with fewer samples. However, the available intraoperative transrectal ultrasound imaging alone is insufficient for accurate targeted guidance. As such, this exemplar application is used to illustrate the effectiveness of sparse, interactively-acquired ultrasound imaging for real-time, interventional registration. The presented methods are found to improve registration accuracy, relative to the state of the art, with substantially lower computation time and require a fraction of the data at inference. As a result, these methods are particularly attractive given their potential for real-time registration in interventional applications.
Deformable MRI to Transrectal Ultrasound Registration for Prostate Interventions Using Deep Learning
ABSTRACT: Prostate cancer is one of the major public health issues in the world. An accurate and early diagnosis of prostate cancer could play a vital role in the treatment of patients. Biopsy procedures are used for diagnosis purposes. In this regard, transrectal ultrasound (TRUS) is considered a standard for imaging the prostate during a biopsy or brachytherapy procedure. This imaging technique is comparatively low-cost, can scan the organ in real time, and is radiation-free. Thus, TRUS scans are used to guide clinicians to the location of a tumor inside the prostate. The major challenge lies in the fact that TRUS images have low resolution and quality, which makes it difficult to distinguish the exact tumor location and the extent of the disease. In addition, the prostate undergoes significant shape variations during an intervention procedure, which makes tumor identification even harder.
Meta-Learning Initializations for Interactive Medical Image Registration
We present a meta-learning framework for interactive medical image
registration. Our proposed framework comprises three components: a
learning-based medical image registration algorithm, a form of user interaction
that refines registration at inference, and a meta-learning protocol that
learns a rapidly adaptable network initialization. This paper describes a
specific algorithm that implements the registration, interaction and
meta-learning protocol for our exemplar clinical application: registration of
magnetic resonance (MR) imaging to interactively acquired, sparsely-sampled
transrectal ultrasound (TRUS) images. Our approach obtains comparable
registration error (4.26 mm) to the best-performing non-interactive
learning-based 3D-to-3D method (3.97 mm) while requiring only a fraction of the
data, and occurring in real-time during acquisition. Applying sparsely sampled
data to non-interactive methods yields higher registration errors (6.26 mm),
demonstrating the effectiveness of interactive MR-TRUS registration, which may
be applied intraoperatively given the real-time nature of the adaptation
process.
Comment: 11 pages, 10 figures. Paper accepted to IEEE Transactions on Medical Imaging (October 26, 2022).
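The idea of a rapidly adaptable initialization, as described above, can be illustrated with a toy Reptile-style outer loop on one-parameter quadratic "tasks". Everything here is a deliberately simplified stand-in (a single scalar parameter in place of a registration network, made-up task distribution), not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def inner_adapt(w, c, steps=3, lr=0.2):
    """A few gradient steps on the task loss (w - c)^2, as at inference."""
    for _ in range(steps):
        w = w - lr * 2 * (w - c)
    return w

# Meta-training: tasks drawn from a family with optima clustered near 5.
w_meta = 0.0
for _ in range(200):
    c = 5.0 + rng.normal(scale=0.5)
    w_adapted = inner_adapt(w_meta, c)
    w_meta += 0.1 * (w_adapted - w_meta)  # Reptile-style outer update

# At "inference", the meta-learned initialization adapts to a new task
# in a few steps far more accurately than a naive initialization does.
c_new = 5.3
err_meta = abs(inner_adapt(w_meta, c_new) - c_new)
err_naive = abs(inner_adapt(0.0, c_new) - c_new)
```

The point of the sketch is the division of labour the abstract describes: the outer loop only learns a good starting point, while the cheap inner loop performs the per-case refinement, which is what makes real-time intraoperative adaptation plausible.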
Registration of magnetic resonance and ultrasound images for guiding prostate cancer interventions
Prostate cancer is a major international health problem with a large and rising incidence in many parts of the world. Transrectal ultrasound (TRUS) imaging is used routinely to guide surgical procedures, such as needle biopsy and a number of minimally-invasive therapies, but its limited ability to visualise prostate cancer is widely recognised. Magnetic resonance (MR) imaging techniques, on the other hand, have recently been developed that can provide clinically useful diagnostic information. Registration (or alignment) of MR and TRUS images during TRUS-guided surgical interventions potentially provides a cost-effective approach to augment TRUS images with clinically useful, MR-derived information (for example, tumour location, shape and size). This thesis describes a deformable image registration framework that enables automatic and/or semi-automatic alignment of MR and 3D TRUS images of the prostate gland. The method combines two technical developments in the field: First, a method for constructing patient-specific statistical shape models of prostate motion/deformation, based on learning from finite element simulations of gland motion using geometric data from a preoperative MR image, is proposed. Second, a novel “model-to-image” registration framework is developed to register this statistical shape model automatically to an intraoperative TRUS image. This registration approach is implemented using a novel model-to-image vector alignment (MIVA) algorithm, which maximises the likelihood of a particular instance of a statistical shape model given a voxel-intensity-based feature vector that represents an estimate of the surface normal vectors at the boundary of the organ in question. Using real patient data, the MR-TRUS registration accuracy of the new algorithm is validated using intra-prostatic anatomical landmarks. A rigorous and extensive validation analysis is also provided for assessing the image registration experiments. 
The final target registration errors after performing 100 MR–TRUS registrations for each patient have a median of 2.40 mm, meaning that over 93% of registrations may successfully hit a target representing a clinically significant lesion. The implemented registration algorithms took less than 30 seconds and 2 minutes for manually defined point and normal-vector features, respectively. The thesis concludes with a summary of potential applications and future research directions.
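The statistical shape model at the heart of the approach above can be sketched with standard PCA: a set of simulated gland shapes yields a mean shape and principal deformation modes, and any instance is the mean plus a weighted combination of modes. The random stand-in "training shapes" below replace the thesis's finite element simulations, and the MIVA registration itself is not shown:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in training data: 50 shapes, each 30 surface points (x, y, z)
# flattened to a 90-vector. In the thesis these come from FE simulations
# of gland motion driven by a preoperative MR segmentation.
shapes = rng.normal(size=(50, 90))

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
# SVD of the centered data gives orthonormal principal deformation modes.
_, s, vt = np.linalg.svd(centered, full_matrices=False)
modes = vt[:5]                 # keep the 5 largest-variance modes
coeffs = centered @ modes.T    # per-shape mode coefficients

# Any shape instance is mean + coefficients @ modes; registration then
# amounts to optimising a handful of coefficients instead of a full DDF.
recon = mean_shape + coeffs[0] @ modes
```

Restricting the search to a few mode coefficients is what makes the model-to-image registration both fast and robust to the limited boundary information in intraoperative TRUS.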
Image-based registration methods for quantification and compensation of prostate motion during trans-rectal ultrasound (TRUS)-guided biopsy
Prostate biopsy is the clinical standard for cancer diagnosis and is typically performed under two-dimensional (2D) transrectal ultrasound (TRUS) for needle guidance. Unfortunately, most early stage prostate cancers are not visible on ultrasound and the procedure suffers from high false negative rates due to the lack of visible targets. Fusion of pre-biopsy MRI to 3D TRUS for targeted biopsy could improve cancer detection rates and volume of tumor sampled. In MRI-TRUS fusion biopsy systems, patient or prostate motion during the procedure causes misalignments in the MR targets mapped to the live 2D TRUS images, limiting the targeting accuracy of the biopsy system.
In order to sample the smallest clinically significant tumours of 0.5 cm³ with 95% confidence, the root mean square (RMS) error of the biopsy system needs to be
The target misalignments due to intermittent prostate motion during the procedure can be compensated by registering the live 2D TRUS images acquired during the biopsy procedure to the pre-acquired baseline 3D TRUS image. The registration must be performed both accurately and quickly in order to be useful during the clinical procedure. We developed an intensity-based 2D-3D rigid registration algorithm and validated it by calculating the target registration error (TRE) using manually identified fiducials within the prostate. We discuss two different approaches that can be used to improve the robustness of this registration to meet the clinical requirements. Firstly, we evaluated the impact of intra-procedural 3D TRUS imaging on motion compensation accuracy since the limited anatomical context available in live 2D TRUS images could limit the robustness of the 2D-3D registration. The results indicated that TRE improved when intra-procedural 3D TRUS images were used in registration, with larger improvements in the base and apex regions as compared with the mid-gland region. Secondly, we developed and evaluated a registration algorithm whose optimization is based on learned prostate motion characteristics. Compared to our initial approach, the updated optimization improved the robustness during 2D-3D registration by reducing the number of registrations with a TRE > 5 mm from 9.2% to 1.2%, with an overall RMS TRE of 2.3 mm.
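The error statistics quoted above (RMS TRE, fraction of registrations with TRE > 5 mm) follow directly from per-fiducial Euclidean distances. A small sketch with made-up fiducial coordinates, purely to show the computation:

```python
import numpy as np

def tre_stats(fixed_pts, registered_pts, threshold_mm=5.0):
    """Return (RMS TRE, fraction of fiducials with TRE > threshold).

    Both inputs are (N, 3) arrays of corresponding fiducial positions
    in millimetres; TRE is the per-pair Euclidean distance.
    """
    d = np.linalg.norm(fixed_pts - registered_pts, axis=1)
    rms = float(np.sqrt(np.mean(d ** 2)))
    frac_over = float(np.mean(d > threshold_mm))
    return rms, frac_over

rng = np.random.default_rng(3)
fixed = rng.uniform(0, 40, size=(200, 3))                # mm coordinates
registered = fixed + rng.normal(scale=1.5, size=(200, 3))  # simulated error
rms, frac = tre_stats(fixed, registered)
```

Reporting both the RMS and the tail fraction matters clinically: the RMS summarises typical accuracy, while the > 5 mm fraction captures the failure cases that would cause a targeted biopsy to miss.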
The methods developed in this work were intended to improve the needle targeting accuracy of 3D TRUS-guided biopsy systems. The successful integration of these techniques into current 3D TRUS-guided systems could improve the overall cancer detection rate during biopsy and help to achieve earlier diagnosis with fewer repeat biopsy procedures.
Segmentation of pelvic structures from preoperative images for surgical planning and guidance
Prostate cancer is one of the most frequently diagnosed malignancies globally and the second leading cause of cancer-related mortality in males in the developed world. In recent decades, many techniques have been proposed for prostate cancer diagnosis and treatment. With the development of imaging technologies such as CT and MRI, image-guided procedures have become increasingly important as a means to improve clinical outcomes. Analysis of the preoperative images and construction of 3D models prior to treatment would help doctors to better localize and visualize the structures of interest, plan the procedure, diagnose disease and guide the surgery or therapy. This requires efficient and robust medical image analysis and segmentation technologies to be developed.
The thesis mainly focuses on the development of segmentation techniques in pelvic MRI for image-guided robotic-assisted laparoscopic radical prostatectomy and external-beam radiation therapy. A fully automated multi-atlas framework is proposed for bony pelvis segmentation in MRI, using the guidance of an MRI AE-SDM. With the guidance of the AE-SDM, a multi-atlas segmentation algorithm is used to delineate the bony pelvis in a new MRI where no CT is available. The proposed technique outperforms state-of-the-art algorithms for MRI bony pelvis segmentation. With the SDM of the pelvis and its segmented surface, an accurate 3D pelvimetry system is designed and implemented to measure a comprehensive set of pelvic geometric parameters for the examination of the relationship between these parameters and the difficulty of robotic-assisted laparoscopic radical prostatectomy. This system can be used in both a manual and an automated manner with a user-friendly interface.
A fully automated and robust multi-atlas based segmentation has also been developed to delineate the prostate in diagnostic MR scans, which have large variation in both intensity and shape of prostate. Two image analysis techniques are proposed, including patch-based label fusion with local appearance-specific atlases and multi-atlas propagation via a manifold graph on a database of both labeled and unlabeled images when limited labeled atlases are available. The proposed techniques can achieve more robust and accurate segmentation results than other multi-atlas based methods.
The seminal vesicles are also an interesting structure for therapy planning, particularly for external-beam radiation therapy. As existing methods fail for the very onerous task of segmenting the seminal vesicles, a multi-atlas learning framework via random decision forests with graph cuts refinement has further been proposed to solve this difficult problem. Motivated by the performance of this technique, I further extend the multi-atlas learning to segment the prostate fully automatically using multispectral (T1- and T2-weighted) MR images via hybrid random forest (RF) classifiers and a multi-image graph cuts technique. The proposed method compares favorably to the previously proposed multi-atlas based prostate segmentation.
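The simplest form of the multi-atlas label fusion that the methods above build on is a per-voxel majority vote over atlas segmentations already propagated to the target space. This sketch shows only that baseline step; the registration, patch-based weighting, random forests and graph cuts of the actual methods are omitted, and the toy 2D "atlases" are made up:

```python
import numpy as np

def majority_vote(atlas_labels):
    """Fuse a stack of binary segmentations (K, H, W) by majority vote.

    A voxel is foreground when more than half of the K propagated atlas
    labels mark it as foreground.
    """
    votes = np.sum(atlas_labels, axis=0)
    return (votes * 2 > atlas_labels.shape[0]).astype(np.uint8)

# Three toy 2D "atlas" segmentations of the same structure, each slightly
# misaligned, as propagated atlases typically are.
a1 = np.zeros((8, 8), np.uint8); a1[2:6, 2:6] = 1
a2 = np.zeros((8, 8), np.uint8); a2[2:6, 3:7] = 1
a3 = np.zeros((8, 8), np.uint8); a3[3:7, 2:6] = 1
fused = majority_vote(np.stack([a1, a2, a3]))
```

Voting suppresses the idiosyncratic errors of any single atlas; the patch-based and learning-based fusion schemes in the thesis replace the uniform vote with locally adaptive weights for further robustness.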
The work in this thesis covers different techniques for pelvic image segmentation in MRI. These techniques have been continually developed and refined, and their application to different specific problems shows ever more promising results.