Contrastive Registration for Unsupervised Medical Image Segmentation
Medical image segmentation is a key task, as it serves as the first step
in many diagnostic processes and is therefore indispensable in clinical practice.
Whilst major success has been reported using supervised techniques, they assume
a large and well-representative labelled set. This is a strong assumption in
the medical domain, where annotations are expensive, time-consuming, and
prone to human bias. To address this problem, unsupervised techniques have
been proposed in the literature, yet the task remains an open problem due to the
difficulty of learning any transformation pattern. In this work, we present a
novel optimisation model framed into a new CNN-based contrastive registration
architecture for unsupervised medical image segmentation. The core of our
approach is to combine image-level registration with feature-level contrastive
learning to perform registration-based segmentation.
Firstly, we propose an architecture to capture the image-to-image
transformation pattern via registration for unsupervised medical image
segmentation. Secondly, we embed a contrastive learning mechanism into the
registration architecture to enhance the discriminative capacity of the network
at the feature level. We show that our proposed technique mitigates the major
drawbacks of existing unsupervised techniques. We demonstrate, through
numerical and visual experiments, that our technique substantially outperforms
the current state-of-the-art unsupervised segmentation methods on two major
medical image datasets.
Comment: 11 pages, 3 figures
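The feature-level contrastive mechanism described above can be illustrated with an InfoNCE-style loss, which pulls matched feature vectors together and pushes mismatched ones apart. This is a minimal sketch of the general idea only; the paper's exact formulation, feature extraction, and pairing strategy are not specified here, and `info_nce_loss` is a hypothetical helper name.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss for one anchor feature vector.

    anchor, positive: 1-D feature vectors that should agree.
    negatives: 2-D array (n_neg, dim) of features that should disagree.
    Illustrative sketch; the paper's exact formulation may differ.
    """
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    pos = np.exp(cos(anchor, positive) / temperature)
    negs = sum(np.exp(cos(anchor, n) / temperature) for n in negatives)
    # Loss is small when the anchor matches the positive, large otherwise.
    return -np.log(pos / (pos + negs))
```

Minimising such a loss over registered/unregistered feature pairs is one way a network's feature space can be made more discriminative for registration-based segmentation.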
Dual-Stream Pyramid Registration Network
We propose a Dual-Stream Pyramid Registration Network (referred to as
Dual-PRNet) for unsupervised 3D medical image registration. Unlike recent
CNN-based registration approaches such as VoxelMorph, which use a
single-stream encoder-decoder network to compute a registration field from a
pair of 3D volumes, we design a two-stream architecture able to compute
multi-scale registration fields from convolutional feature pyramids. Our
contributions are two-fold: (i) we design a two-stream 3D encoder-decoder
network which computes two convolutional feature pyramids separately for a pair
of input volumes, resulting in strong deep representations that are meaningful
for deformation estimation; (ii) we propose a pyramid registration module able
to predict multi-scale registration fields directly from the decoding feature
pyramids. This allows the registration fields to be refined gradually in a
coarse-to-fine manner via sequential warping, and equips the model to handle
significant deformations between two volumes, such as large displacements in
the spatial domain or slice space. The proposed Dual-PRNet
is evaluated on two standard benchmarks for brain MRI registration, where it
outperforms the state-of-the-art approaches by a large margin, e.g., improving
over the recent VoxelMorph [2] from 0.683 to 0.778 on LPBA40 and from 0.511 to
0.631 on Mindboggle101, in terms of average Dice score.
Comment: To appear in MICCAI 2019 (Oral)
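The coarse-to-fine refinement via sequential warping can be sketched in one dimension: each finer field warps the accumulated field and adds its own residual displacement. This is a simplified composition sketch, not Dual-PRNet's implementation; in the paper the fields are predicted at pyramid scales and upsampled, whereas here all fields are assumed to be at full resolution, and the function names are hypothetical.

```python
import numpy as np

def warp_1d(signal, field):
    """Warp a 1-D signal by a displacement field using linear interpolation."""
    coords = np.clip(np.arange(len(signal)) + field, 0, len(signal) - 1)
    lo = np.floor(coords).astype(int)
    hi = np.minimum(lo + 1, len(signal) - 1)
    w = coords - lo
    return (1 - w) * signal[lo] + w * signal[hi]

def compose_coarse_to_fine(fields):
    """Sequentially compose displacement fields, coarse to fine (sketch).

    Each step warps the running field by the next one before adding it,
    mimicking sequential warping across pyramid levels."""
    total = np.zeros_like(fields[0])
    for f in fields:
        total = warp_1d(total, f) + f
    return total
```

Composing small per-level fields this way is what lets such models represent large overall deformations that a single-level field would struggle to capture.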
A Survey on Deep Learning in Medical Image Registration: New Technologies, Uncertainty, Evaluation Metrics, and Beyond
Over the past decade, deep learning technologies have greatly advanced the
field of medical image registration. The initial developments, such as
ResNet-based and U-Net-based networks, laid the groundwork for deep
learning-driven image registration. Subsequent progress has been made in
various aspects of deep learning-based registration, including similarity
measures, deformation regularizations, and uncertainty estimation. These
advancements have not only enriched the field of deformable image registration
but have also facilitated its application in a wide range of tasks, including
atlas construction, multi-atlas segmentation, motion estimation, and 2D-3D
registration. In this paper, we present a comprehensive overview of the most
recent advancements in deep learning-based image registration. We begin with a
concise introduction to the core concepts of deep learning-based image
registration. Then, we delve into innovative network architectures, loss
functions specific to registration, and methods for estimating registration
uncertainty. Additionally, this paper explores appropriate evaluation metrics
for assessing the performance of deep learning models in registration tasks.
Finally, we highlight the practical applications of these novel techniques in
medical imaging and discuss the future prospects of deep learning-based image
registration.
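Among the evaluation metrics the survey discusses, label overlap after warping is the most common proxy for registration accuracy. The Dice score used throughout this listing (e.g., the Dual-PRNet results above) can be computed as follows; the function name is illustrative.

```python
import numpy as np

def dice_score(a, b):
    """Dice overlap between two binary segmentation masks.

    Returns 2|A∩B| / (|A|+|B|); 1.0 means perfect overlap.
    Defined as 1.0 when both masks are empty."""
    a, b = np.asarray(a).astype(bool), np.asarray(b).astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

In registration evaluation, Dice is typically computed between anatomical labels of the fixed image and the warped labels of the moving image, averaged over structures.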
Non-iterative Coarse-to-fine Transformer Networks for Joint Affine and Deformable Image Registration
Image registration is a fundamental requirement for medical image analysis.
Registration methods based on deep learning have been widely recognized
for their ability to perform fast end-to-end registration. Many deep
registration methods achieve state-of-the-art performance by performing
coarse-to-fine registration, in which multiple registration steps are iterated
with cascaded networks. Recently, Non-Iterative Coarse-to-finE (NICE)
registration methods have been proposed to perform coarse-to-fine registration
in a single network and showed advantages in both registration accuracy and
runtime. However, existing NICE registration methods mainly focus on deformable
registration, while affine registration, a common prerequisite, is still
reliant on time-consuming traditional optimization-based methods or extra
affine registration networks. In addition, existing NICE registration methods
are limited by the intrinsic locality of convolution operations. Transformers
may address this limitation thanks to their ability to capture long-range
dependencies, but the benefits of using transformers for NICE registration have
not been explored. In this study, we propose a Non-Iterative Coarse-to-finE
Transformer network (NICE-Trans) for image registration. Our NICE-Trans is the
first deep registration method that (i) performs joint affine and deformable
coarse-to-fine registration within a single network, and (ii) embeds
transformers into a NICE registration framework to model long-range relevance
between images. Extensive experiments with seven public datasets show that our
NICE-Trans outperforms state-of-the-art registration methods on both
registration accuracy and runtime.
Comment: Accepted at International Conference on Medical Image Computing and
Computer Assisted Intervention (MICCAI 2023)
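The joint affine-then-deformable pipeline that NICE-Trans performs in a single pass can be sketched on point coordinates: an affine stage applies a global linear transform, and a deformable stage adds a per-point residual displacement. This is a conceptual sketch only; in NICE-Trans both components are predicted jointly by the network from transformer features, and the function name here is hypothetical.

```python
import numpy as np

def joint_affine_deformable(points, affine, deform):
    """Warp 2-D points with an affine transform followed by a residual
    deformable displacement (illustrative single-pass composition).

    points: (n, 2) array of coordinates.
    affine: tuple (A, t) with a 2x2 matrix A and translation vector t.
    deform: (n, 2) array of per-point residual displacements.
    """
    A, t = affine
    warped = points @ A.T + t   # global affine stage (prealignment)
    return warped + deform      # local deformable refinement stage
```

Folding both stages into one network removes the traditional dependence on a separate optimization-based affine prealignment step.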