Automatic 3D bi-ventricular segmentation of cardiac images by a shape-refined multi-task deep learning approach
Deep learning approaches have achieved state-of-the-art performance in
cardiac magnetic resonance (CMR) image segmentation. However, most approaches
have focused on learning image intensity features for segmentation, whereas the
incorporation of anatomical shape priors has received less attention. In this
paper, we combine a multi-task deep learning approach with atlas propagation to
develop a shape-constrained bi-ventricular segmentation pipeline for short-axis
CMR volumetric images. The pipeline first employs a fully convolutional network
(FCN) that learns segmentation and landmark localisation tasks simultaneously.
The proposed FCN uses a 2.5D representation, thus combining
the computational advantage of 2D networks with the capability of
addressing 3D spatial consistency, without compromising segmentation accuracy.
Moreover, a subsequent refinement step is designed to explicitly enforce a
shape constraint and improve segmentation quality. This step is effective for
overcoming image artefacts (e.g. due to different breath-hold positions and
large slice thickness), which preclude the creation of anatomically meaningful
3D cardiac shapes. The proposed pipeline is fully automated, owing to the
network's ability to infer landmarks, which are then used downstream to
initialise atlas propagation. We validate the pipeline on 1831 healthy subjects
and 649 subjects with pulmonary hypertension. Extensive numerical experiments
on the two datasets demonstrate that our proposed method is robust and capable
of producing accurate, high-resolution and anatomically smooth bi-ventricular
3D models, despite the artefacts in input CMR volumes.
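The 2.5D representation mentioned above can be illustrated with a short sketch: a 2D network is fed a target slice stacked with its through-plane neighbours as channels. This is only a minimal illustration of the input construction, not the authors' code; the helper name `make_25d_input` and the boundary-clamping choice are assumptions.

```python
import numpy as np

def make_25d_input(volume, slice_idx, context=1):
    """Stack a short-axis slice with its neighbours as channels.

    A 2.5D input lets 2D convolutions see through-plane context.
    Slice indices are clamped at the volume boundaries, so edge
    slices simply repeat themselves.
    """
    depth = volume.shape[0]
    idxs = [min(max(slice_idx + off, 0), depth - 1)
            for off in range(-context, context + 1)]
    # Result shape: (2 * context + 1, H, W) -- channels-first.
    return np.stack([volume[i] for i in idxs], axis=0)
```

With `context=1` each slice arrives as a 3-channel image, which keeps the convolution cost close to a plain 2D network while exposing some 3D structure.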
Learning Deep Similarity Metric for 3D MR-TRUS Registration
Purpose: The fusion of transrectal ultrasound (TRUS) and magnetic resonance
(MR) images for guiding targeted prostate biopsy has significantly improved the
biopsy yield of aggressive cancers. A key component of MR-TRUS fusion is image
registration. However, it is very challenging to obtain a robust automatic
MR-TRUS registration due to the large appearance difference between the two
imaging modalities. The work presented in this paper aims to tackle this
problem by addressing two challenges: (i) the definition of a suitable
similarity metric and (ii) the determination of a suitable optimization
strategy.
Methods: This work proposes the use of a deep convolutional neural network to
learn a similarity metric for MR-TRUS registration. We also use a composite
optimization strategy that explores the solution space in order to search for a
suitable initialization for the second-order optimization of the learned
metric. Further, a multi-pass approach is used in order to smooth the metric
for optimization.
Results: The learned similarity metric outperforms the classical mutual
information and also the state-of-the-art MIND feature based methods. The
results indicate that the overall registration framework has a large capture
range. The proposed deep-similarity-metric approach obtained a mean target
registration error (TRE) of 3.86 mm (from an initial TRE of 16 mm) on this
challenging problem.
Conclusion: A similarity metric that is learned using a deep neural network
can be used to assess the quality of any given image registration and can be
used in conjunction with the aforementioned optimization framework to perform
automatic registration that is robust to poor initialization.Comment: To appear on IJCAR
Learning Rigid Image Registration - Utilizing Convolutional Neural Networks for Medical Image Registration
Many traditional computer vision tasks, such as segmentation, have seen large step-changes in accuracy and/or speed with the application of Convolutional Neural Networks (CNNs). Image registration, the alignment of two or more images to a common space, is a fundamental step in many medical imaging workflows. In this paper we investigate whether these techniques can also bring tangible benefits to the registration task. We describe and evaluate the use of CNNs for both mono- and multi-modality registration and compare their performance to more traditional schemes, namely multi-scale, iterative registration. We also investigate incorporating inverse consistency of the learned spatial transformations to impose additional constraints on the network during training, and assess any resulting benefit in accuracy. The approaches are validated with a series of artificial mono-modal registration tasks utilizing T1-weighted MR brain images from the Open Access Series of Imaging Studies (OASIS) study and the IXI brain development dataset, and a series of real multi-modality registration tasks using T1-weighted and T2-weighted MR brain images from the 2015 Ischemic Stroke Lesion Segmentation (ISLES) challenge. The results demonstrate that CNNs give excellent performance for both mono- and multi-modality head and neck registration compared to the baseline method, with significantly fewer outliers and lower mean errors.
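The inverse-consistency constraint mentioned above can be made concrete for rigid transforms: if a network predicts both the A-to-B and B-to-A transforms, their composition should be the identity, and the deviation from identity can be penalised during training. The sketch below, with hypothetical helper names and homogeneous 2D matrices, illustrates that penalty; it is not the paper's implementation.

```python
import numpy as np

def rigid2d(theta, tx, ty):
    """Homogeneous 2D rigid transform (rotation by theta plus
    translation (tx, ty))."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

def inverse_consistency_penalty(T_ab, T_ba):
    """Frobenius-norm deviation of the composed forward/backward
    transforms from the identity. Added to the training loss, this
    keeps the network's A->B and B->A predictions mutually
    consistent."""
    return np.linalg.norm(T_ab @ T_ba - np.eye(3))
```

A perfectly consistent pair scores (near) zero, while mismatched predictions incur a penalty that grows with the disagreement, which is what makes the term usable as an extra loss during training.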