A Survey on Deep Learning in Medical Image Registration: New Technologies, Uncertainty, Evaluation Metrics, and Beyond
Over the past decade, deep learning technologies have greatly advanced the
field of medical image registration. The initial developments, such as
ResNet-based and U-Net-based networks, laid the groundwork for deep
learning-driven image registration. Subsequent progress has been made in
various aspects of deep learning-based registration, including similarity
measures, deformation regularizations, and uncertainty estimation. These
advancements have not only enriched the field of deformable image registration
but have also facilitated its application in a wide range of tasks, including
atlas construction, multi-atlas segmentation, motion estimation, and 2D-3D
registration. In this paper, we present a comprehensive overview of the most
recent advancements in deep learning-based image registration. We begin with a
concise introduction to the core concepts of deep learning-based image
registration. Then, we delve into innovative network architectures, loss
functions specific to registration, and methods for estimating registration
uncertainty. Additionally, this paper explores appropriate evaluation metrics
for assessing the performance of deep learning models in registration tasks.
Finally, we highlight the practical applications of these novel techniques in
medical imaging and discuss the future prospects of deep learning-based image
registration.
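To make the core concept concrete: most unsupervised registration networks predict a dense displacement field and are trained by minimizing an image-similarity term plus a deformation regularizer, with the moving image warped through a differentiable resampler. A minimal PyTorch sketch of such a loss follows; it is our own illustration of the general recipe, and the function names, the MSE similarity, and the first-order smoothness penalty are assumptions rather than the method of any particular surveyed paper.

    import torch
    import torch.nn.functional as F

    def warp(moving, disp):
        """Warp a 2D image with a dense displacement field.

        moving: (N, C, H, W); disp: (N, 2, H, W), given in the normalized
        [-1, 1] coordinate units that grid_sample expects.
        """
        n = moving.shape[0]
        # Identity sampling grid in [-1, 1], shape (N, H, W, 2).
        theta = torch.eye(2, 3).unsqueeze(0).repeat(n, 1, 1)
        grid = F.affine_grid(theta, moving.shape, align_corners=False)
        grid = grid + disp.permute(0, 2, 3, 1)       # add predicted offsets
        return F.grid_sample(moving, grid, align_corners=False)

    def registration_loss(fixed, moving, disp, lam=0.01):
        """MSE similarity plus first-order smoothness of the displacement."""
        warped = warp(moving, disp)
        sim = F.mse_loss(warped, fixed)
        dx = disp[:, :, :, 1:] - disp[:, :, :, :-1]  # gradients along width
        dy = disp[:, :, 1:, :] - disp[:, :, :-1, :]  # gradients along height
        smooth = dx.pow(2).mean() + dy.pow(2).mean()
        return sim + lam * smooth

Swapping the similarity term (e.g., normalized cross-correlation or mutual information) and the regularizer is exactly where much of the surveyed work differs.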
CNN-based Lung CT Registration with Multiple Anatomical Constraints
Deep-learning-based registration methods have emerged as a fast alternative to conventional registration methods. However, they often still cannot match the performance of conventional methods, because they are either limited to small deformations or fail to handle a superposition of large and small deformations without producing implausible deformation fields containing foldings.
In this paper, we identify important strategies of conventional lung registration methods and develop their deep-learning counterparts. We employ a Gaussian-pyramid-based multilevel framework that can
solve the image registration optimization in a coarse-to-fine fashion.
Furthermore, we prevent foldings of the deformation field and restrict the
determinant of the Jacobian to physiologically meaningful values by combining a
volume change penalty with a curvature regularizer in the loss function.
Keypoint correspondences are integrated to focus on the alignment of smaller
structures.
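The folding constraint is easiest to see through the Jacobian of the transform phi(x) = x + u(x): a determinant near 1 means local volume preservation, while a determinant at or below 0 means the field folds onto itself. The following PyTorch sketch shows one way to compute this determinant for a 2D displacement field and turn it into a volume-change penalty; it is our own illustration rather than the authors' published implementation, and the channel layout and squared-log penalty are assumptions.

    import torch

    def jacobian_det_2d(disp, spacing=1.0):
        """Determinant of the Jacobian of phi(x) = x + u(x).

        disp: (N, 2, H, W) displacement field u, channel 0 = x (width),
        channel 1 = y (height); forward differences give (N, H-1, W-1).
        """
        du_dy = (disp[:, :, 1:, :-1] - disp[:, :, :-1, :-1]) / spacing
        du_dx = (disp[:, :, :-1, 1:] - disp[:, :, :-1, :-1]) / spacing
        # Jacobian of the full transform is I + grad(u).
        j11 = 1.0 + du_dx[:, 0]   # d(phi_x)/dx
        j12 = du_dy[:, 0]         # d(phi_x)/dy
        j21 = du_dx[:, 1]         # d(phi_y)/dx
        j22 = 1.0 + du_dy[:, 1]   # d(phi_y)/dy
        return j11 * j22 - j12 * j21

    def volume_change_penalty(disp):
        """Squared-log penalty: expansion and contraction count symmetrically,
        and determinants approaching zero (near-foldings) are punished hard."""
        det = jacobian_det_2d(disp).clamp(min=1e-6)  # guard against log(0)
        return det.log().pow(2).mean()

In practice such a penalty is combined with a smoothness term (here, the curvature regularizer) so that plausibility and regularity of the deformation are enforced jointly.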
We perform an extensive evaluation to assess the accuracy, the robustness,
the plausibility of the estimated deformation fields, and the transferability
of our registration approach. We show that it achieves state-of-the-art results
on the COPDGene dataset compared to conventional registration methods, with a much shorter execution time. In our experiments on DIRLab exhale-to-inhale lung registration, we demonstrate substantial improvements (TRE below mm) over
other deep learning methods. Our algorithm is publicly available at
https://grand-challenge.org/algorithms/deep-learning-based-ct-lung-registration/
An End-to-end Deep Learning Approach for Landmark Detection and Matching in Medical Images
Anatomical landmark correspondences in medical images can provide additional
guidance information for the alignment of two images, which, in turn, is
crucial for many medical applications. However, manual landmark annotation is
labor-intensive. Therefore, we propose an end-to-end deep learning approach to
automatically detect landmark correspondences in pairs of two-dimensional (2D)
images. Our approach consists of a Siamese neural network, which is trained to
identify salient locations in images as landmarks and predict matching
probabilities for landmark pairs from two different images. We trained our
approach on 2D transverse slices from 168 lower abdominal Computed Tomography
(CT) scans. We tested the approach on 22,206 pairs of 2D slices with varying
levels of intensity, affine, and elastic transformations. The proposed approach
finds an average of 639, 466, and 370 landmark matches per image pair for
intensity, affine, and elastic transformations, respectively, with spatial
matching errors of at most 1 mm. Further, more than 99% of the landmark pairs
are within a spatial matching error of 2 mm, 4 mm, and 8 mm for image pairs
with intensity, affine, and elastic transformations, respectively. To
investigate the utility of the developed approach in a clinical setting, we also tested it on pairs of transverse slices selected from follow-up
CT scans of three patients. Visual inspection of the results revealed landmark
matches both in bony anatomical regions and in soft tissues lacking prominent intensity gradients.
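Schematically, a Siamese set-up of this kind reduces to one shared convolutional encoder that produces per-pixel descriptors plus a saliency ("landmark-ness") score for each slice, with matching probabilities derived from descriptor similarity at candidate locations. The PyTorch sketch below illustrates the pattern; the layer sizes, the cosine-similarity matching head, and all names are our own placeholders, not the authors' architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SiameseMatcher(nn.Module):
        """Shared encoder + saliency head for 2D landmark matching."""

        def __init__(self, dim=64):
            super().__init__()
            # One encoder applied to both slices (weight sharing = Siamese).
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, dim, 3, padding=1),
            )
            self.saliency = nn.Conv2d(dim, 1, 1)  # per-pixel landmark-ness

        def forward(self, img_a, img_b):
            feat_a, feat_b = self.encoder(img_a), self.encoder(img_b)
            sal_a = torch.sigmoid(self.saliency(feat_a))  # (N, 1, H, W)
            sal_b = torch.sigmoid(self.saliency(feat_b))
            return feat_a, feat_b, sal_a, sal_b

    def match_probability(feat_a, feat_b, pts_a, pts_b):
        """Matching probability for K candidate landmark pairs.

        pts_*: (K, 2) integer (row, col) coordinates; batch index 0 is used.
        """
        da = feat_a[0][:, pts_a[:, 0], pts_a[:, 1]].t()  # (K, dim)
        db = feat_b[0][:, pts_b[:, 0], pts_b[:, 1]].t()
        sim = (F.normalize(da, dim=1) * F.normalize(db, dim=1)).sum(1)
        return torch.sigmoid(sim)  # cosine similarity -> probability

Training would then supervise both heads, e.g. encouraging high saliency at repeatable locations and high matching probability only for true correspondences.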
Model-based registration for pneumothorax deformation analysis using intraoperative cone-beam CT images
[2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), 20-24 July 2020, Montreal, QC, Canada] Because the lung deforms during surgery as a result of pneumothorax, it is important to be able to track the location of a tumor. Deformation of the whole lung can be estimated using intraoperative cone-beam CT (CBCT) images. In this study, we used deformable mesh registration methods for paired CBCT images in the inflated and deflated states and analyzed their deformation. We proposed a deformable mesh registration framework for partial organ shapes undergoing large deformation and rotation. Experimental results showed that the proposed methods reduced errors in point-to-point correspondence. Registration evaluated with surgical clips placed on the lung surface during imaging showed an average error of 3.9 mm across eight cases. The analysis showed that both tissue rotation and contraction had large effects on the displacement.
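Evaluation with fiducials such as the surgical clips above comes down to a mean point-to-point distance after mapping corresponding points through the estimated transform. A small self-contained Python sketch of that metric follows; the names, the voxel-to-millimetre spacing argument, and the toy transform are illustrative assumptions, not the authors' code.

    import numpy as np

    def mean_point_error(moving_pts, fixed_pts, transform,
                         spacing=(1.0, 1.0, 1.0)):
        """Mean point-to-point registration error in millimetres.

        moving_pts, fixed_pts: (K, 3) corresponding points in voxel
        coordinates; transform maps moving points into the fixed frame.
        """
        mapped = transform(np.asarray(moving_pts, dtype=float))
        diff_mm = (mapped - np.asarray(fixed_pts)) * np.asarray(spacing)
        return float(np.linalg.norm(diff_mm, axis=1).mean())

    # Toy check: a transform off by one voxel along z with 1 mm spacing
    # should yield exactly 1.0 mm of error.
    pts = np.array([[10, 20, 30], [40, 50, 60]], dtype=float)
    shift_z = lambda p: p + np.array([0.0, 0.0, 1.0])
    print(mean_point_error(pts, pts, shift_z))  # -> 1.0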