
    Methods for three-dimensional Registration of Multimodal Abdominal Image Data

    Multimodal image registration benefits diagnosis, treatment planning and the performance of image-guided procedures in the liver, since it enables the fusion of complementary information provided by pre- and intra-interventional data about tumor localization and access. Although various registration methods exist, approaches specifically optimized for multimodal abdominal scans are scarce. The work presented in this thesis aims to tackle this problem by focusing on the development, optimization and evaluation of registration methods specifically for multimodal liver scans. The contributions to the field of medical image registration include a registration evaluation methodology that enables the comparison and optimization of linear and non-linear registration algorithms using a point-based accuracy measure. This methodology has been used to benchmark standard registration methods as well as novel approaches developed within this thesis. The results showed that the similarity measure employed during registration has a major impact on registration accuracy. Two alternative similarity metrics suited to multimodal image data are therefore proposed and evaluated. The first metric relies on gradient information in the form of Histograms of Oriented Gradients (HOG), whereas the second employs a Siamese neural network to learn a similarity measure directly from the image data. The evaluation showed that both metrics can compete with state-of-the-art similarity measures in terms of registration accuracy. The HOG metric offers the advantage that it does not require ground truth data to learn a similarity estimate; it is applicable to a variety of data sets, with the sole requirement being distinct gradients. The Siamese metric, however, is more robust to large rotations than the HOG metric. Training such a network requires registered ground truth data, which can be problematic for multimodal image data. Yet the results show that models trained on registered synthetic data can be applied to real patient data. The last part of this thesis focuses on methods that learn the entire registration process using neural networks, offering the advantage of replacing the traditional, time-consuming iterative registration procedure. Within this thesis, the VoxelMorph network, originally proposed for learning monomodal non-linear registration, is extended to affine and multimodal registration learning tasks. This extension includes the consideration of an image mask during metric evaluation as well as loss functions for multimodal data, such as the pretrained Siamese metric and a loss relying on the comparison of deformation fields. Based on the developed registration evaluation methodology, the performance of the original network and of the extended variants is evaluated for monomodal and multimodal registration tasks using multiple data sets. With the extended network variants, it is possible to learn an entire multimodal registration process for the correction of large image displacements. As with the Siamese metric, the results indicate that models trained on synthetic data transfer to registration tasks involving real patient data. Given the lack of multimodal ground truth data, this transfer represents an important step towards making deep learning based registration procedures clinically usable.
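
    As a rough illustration of the Siamese similarity idea summarized above, the sketch below pairs a shared-weight 3D encoder with a cosine similarity score in PyTorch. The encoder layout, patch size and scoring choice are illustrative assumptions, not the implementation described in the thesis.

```python
# Minimal sketch of a patch-wise Siamese similarity metric (illustrative only;
# the encoder layout and patch size are assumptions, not the thesis implementation).
import torch
import torch.nn as nn


class SiameseSimilarity(nn.Module):
    """Shared-weight encoder that maps two image patches to embeddings
    and scores how well they are aligned."""

    def __init__(self, in_channels: int = 1, embed_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, fixed_patch: torch.Tensor, moving_patch: torch.Tensor) -> torch.Tensor:
        # Both patches pass through the same (shared-weight) encoder.
        f = self.encoder(fixed_patch)
        m = self.encoder(moving_patch)
        # Cosine similarity acts as the learned alignment score; its negative
        # could serve as a dissimilarity loss during registration.
        return nn.functional.cosine_similarity(f, m, dim=1)


if __name__ == "__main__":
    metric = SiameseSimilarity()
    fixed = torch.randn(2, 1, 32, 32, 32)   # e.g. CT patches
    moving = torch.randn(2, 1, 32, 32, 32)  # e.g. MR patches
    print(metric(fixed, moving).shape)      # torch.Size([2])
```

    Trained on aligned and deliberately misaligned patch pairs (for instance from synthetic data, as the abstract suggests), such a learned score can stand in for a conventional similarity measure inside a registration loop or a registration loss.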

    Unsupervised Echocardiography Registration through Patch-based MLPs and Transformers

    Image registration is an essential but challenging task in medical image computing, especially for echocardiography, where the anatomical structures are relatively noisy compared to other imaging modalities. Traditional (non-learning) registration approaches rely on the iterative optimization of a similarity metric, which is usually costly in runtime. In recent years, convolutional neural network (CNN) based image registration methods have proven effective. Meanwhile, recent studies show that attention-based models (e.g., Transformers) can deliver superior performance in pattern recognition tasks. However, whether the superior performance of the Transformer stems from its elaborate architecture or from the use of patches for dividing the inputs remains unclear. This work introduces three patch-based frameworks for image registration using MLPs and Transformers. We provide experiments on 2D echocardiography registration to partially answer this question and to provide a benchmark solution. Our results on a large public 2D echocardiography dataset show that patch-based MLP/Transformer models can be effectively used for unsupervised echocardiography registration. They demonstrate comparable and even better registration performance than a popular CNN registration model. In particular, patch-based models better preserve volume changes in terms of Jacobian determinants, thus generating robust registration fields with less unrealistic deformation. Our results demonstrate that patch-based learning methods, with or without attention, can perform unsupervised registration effectively with adequate time and space complexity. Our code is available at https://gitlab.inria.fr/epione/mlp_transformer_registratio
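
    The Jacobian determinant criterion mentioned above can be made concrete with a small sketch. The snippet below (an illustrative assumption, not the authors' released code) computes per-pixel Jacobian determinants of a 2D displacement field with finite differences and counts folded pixels (determinant ≤ 0).

```python
# Hedged sketch: Jacobian determinant of a 2D displacement field via finite
# differences, to check for folding (det <= 0). Not the paper's published code.
import torch


def jacobian_determinant_2d(disp: torch.Tensor) -> torch.Tensor:
    """disp: displacement field of shape (2, H, W) in pixel units.
    Returns the per-pixel Jacobian determinant of the map x -> x + disp(x)."""
    # Finite-difference gradients of each displacement component.
    du_dy, du_dx = torch.gradient(disp[0])  # d(u)/dy, d(u)/dx
    dv_dy, dv_dx = torch.gradient(disp[1])  # d(v)/dy, d(v)/dx
    # Jacobian of the full transform is I + grad(disp); expand its determinant.
    return (1.0 + du_dx) * (1.0 + dv_dy) - du_dy * dv_dx


if __name__ == "__main__":
    disp = 0.1 * torch.randn(2, 128, 128)  # toy displacement field
    jac = jacobian_determinant_2d(disp)
    print("folded pixels:", (jac <= 0).sum().item())
```

    A determinant near 1 means local area is preserved, while non-positive values indicate folding; comparing such statistics across models is one way to quantify how realistic the predicted deformation fields are.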

    Registration of 3D Ultrasound Volumes with Applications in Neurosurgery and Prostate Radiotherapy

    Brain tissue deforms significantly after opening the dura and during tumor resection, invalidating pre-operative imaging data. Ultrasound is a popular imaging modality for providing the neurosurgeon with real-time updated images of brain tissue. Interpretation of post-resection ultrasound images is difficult due to large brain shift and tissue resection. Furthermore, several factors degrade the quality of post-resection ultrasound images, such as strong reflection of waves at the interface between saline water and brain tissue in resection cavities, air bubbles, and the application of blood-clotting agents around the edges of the resection. Image registration allows comparison of post-resection ultrasound images with higher-quality pre-resection images, assists in the interpretation of post-resection images and may help identify residual tumor; as such, it is of significant clinical importance. Prostate motion is known to reduce the precision of prostate radiotherapy. This motion can be categorized into intrafraction and interfraction motion. Interfraction motion introduces large systematic errors into the treatment and is the largest contributor to prostate planning treatment volume (PTV) margins. Conventional solutions to interfraction motion all have their respective drawbacks. The Clarity Autoscan system provides continuous ultrasound imaging of the prostate for interfraction motion correction; however, it is time-consuming and can suffer from large interobserver errors. The goal of accurately targeting the prostate and reducing treatment side effects calls for a faster and more accurate registration framework for interfraction motion correction. In this thesis, we first propose a registration framework called Nonrigid Symmetric Registration (NSR) for accurate alignment of pre- and post-resection volumetric ultrasound images in near real-time. An outlier detection algorithm is proposed and used in this framework to identify non-corresponding regions (outliers) and thereby improve the robustness and accuracy of registration. We use an Efficient Second-order Minimization (ESM) method for fast and robust optimization. A symmetric and inverse-consistent scheme is employed to generate realistic deformation fields. The results show that NSR significantly improves the quality of alignment between pre- and post-resection ultrasound images. Building on this framework, we then develop a rigid registration framework called the Prostate Registration Framework (PRF) for alignment of the prostate region in simulation and treatment volumes. PRF is trained using two 3D transperineal ultrasound (TPUS) images of an ultrasound prostate phantom and 20 3D TPUS images from 11 patients receiving Clarity Autoscan. Algorithm performance is evaluated on a further 21 TPUS images from a total of 8 patients by comparing PRF with manual landmark matching and with Clarity-based estimation of interfraction motion performed by three observers. The results show that PRF aligns the prostate region in simulation and treatment volumes more accurately than Clarity and, furthermore, enables efficient and accurate repositioning of the prostate in treatment images.
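
    The outlier handling in NSR can be illustrated with a minimal, hypothetical sketch: an intensity similarity term that excludes voxels whose residuals behave like non-corresponding regions (for example, resection cavities). The thresholding rule and function name below are assumptions for illustration, not the method proposed in the thesis.

```python
# Illustrative sketch (not the thesis code): a robust, outlier-masked SSD term
# that ignores voxels with implausibly large residuals between the images.
import numpy as np


def masked_ssd(fixed: np.ndarray, warped_moving: np.ndarray, k: float = 2.5) -> float:
    """Mean squared intensity difference, excluding voxels whose residual
    exceeds k times a robust (MAD-based) estimate of the residual spread."""
    residual = fixed - warped_moving
    # Robust scale estimate from the median absolute deviation.
    mad = np.median(np.abs(residual - np.median(residual)))
    sigma = 1.4826 * mad + 1e-8
    inlier_mask = np.abs(residual) <= k * sigma
    return float(np.mean(residual[inlier_mask] ** 2))


if __name__ == "__main__":
    fixed = np.random.rand(64, 64, 64).astype(np.float32)
    moving = fixed + 0.01 * np.random.randn(64, 64, 64).astype(np.float32)
    moving[20:30, 20:30, 20:30] = 0.0  # simulate a non-corresponding region
    print("masked SSD:", masked_ssd(fixed, moving))
```

    Masking or down-weighting such regions keeps the optimizer from dragging surrounding tissue towards structures that have no counterpart in the other image, which is the intuition behind detecting non-corresponding regions during registration.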

    An Investigation of Methods for CT Synthesis in MR-only Radiotherapy
