137 research outputs found
Hierarchical Uncertainty Estimation for Medical Image Segmentation Networks
Learning a medical image segmentation model is an inherently ambiguous task,
as uncertainties exist in both images (noise) and manual annotations (human
errors and bias) used for model training. To build a trustworthy image
segmentation model, it is important to not just evaluate its performance but
also estimate the uncertainty of the model prediction. Most state-of-the-art
image segmentation networks adopt a hierarchical encoder architecture,
extracting image features at multiple resolution levels from fine to coarse. In
this work, we leverage this hierarchical image representation and propose a
simple yet effective method for estimating uncertainties at multiple levels.
The multi-level uncertainties are modelled via the skip-connection module and
then sampled to generate an uncertainty map for the predicted image
segmentation. We demonstrate that a deep learning segmentation network such as
U-net, when equipped with such a hierarchical uncertainty estimation module,
can achieve high segmentation performance while at the same time providing
meaningful uncertainty maps that can be used for out-of-distribution detection.
Comment: 8 pages, 3 figures
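The core idea of the abstract above, sampling latent features at several resolution levels and aggregating their variation into one uncertainty map, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names, the Gaussian reparameterisation-style sampling, and the nearest-neighbour upsampling via `np.kron` are all assumptions made for the sketch.

```python
import numpy as np

def sample_skip(feat_mean, feat_logvar, rng):
    # Reparameterisation-style sample: z = mu + sigma * eps
    eps = rng.standard_normal(feat_mean.shape)
    return feat_mean + np.exp(0.5 * feat_logvar) * eps

def uncertainty_map(level_means, level_logvars, out_shape,
                    n_samples=20, seed=0):
    """Monte-Carlo estimate of per-pixel predictive variance:
    sample latent features at each resolution level, upsample
    them to the output resolution, and measure the spread."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_samples):
        combined = np.zeros(out_shape)
        for mu, logvar in zip(level_means, level_logvars):
            z = sample_skip(mu, logvar, rng)
            # Nearest-neighbour upsampling to the output grid.
            factor = out_shape[0] // mu.shape[0]
            combined += np.kron(z, np.ones((factor, factor)))
        samples.append(combined)
    return np.var(np.stack(samples), axis=0)
```

The resulting variance map is large where the sampled multi-level features disagree, which is the kind of per-pixel uncertainty signal the abstract describes using for out-of-distribution detection.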
DeepMesh: Mesh-based Cardiac Motion Tracking using Deep Learning
3D motion estimation from cine cardiac magnetic resonance (CMR) images is
important for the assessment of cardiac function and the diagnosis of
cardiovascular diseases. Current state-of-the-art methods focus on estimating
dense pixel-/voxel-wise motion fields in image space, which ignores the fact
that motion estimation is only relevant and useful within the anatomical
objects of interest, e.g., the heart. In this work, we model the heart as a 3D
mesh consisting of epi- and endocardial surfaces. We propose a novel learning
framework, DeepMesh, which propagates a template heart mesh to a subject space
and estimates the 3D motion of the heart mesh from CMR images for individual
subjects. In DeepMesh, the heart mesh of the end-diastolic frame of an
individual subject is first reconstructed from the template mesh. Mesh-based 3D
motion fields with respect to the end-diastolic frame are then estimated from
2D short- and long-axis CMR images. By developing a differentiable
mesh-to-image rasterizer, DeepMesh is able to leverage 2D shape information
from multiple anatomical views for 3D mesh reconstruction and mesh motion
estimation. The proposed method estimates vertex-wise displacement and thus
maintains vertex correspondences between time frames, which is important for
the quantitative assessment of cardiac function across different subjects and
populations. We evaluate DeepMesh on CMR images acquired from the UK Biobank.
We focus on 3D motion estimation of the left ventricle in this work.
Experimental results show that the proposed method quantitatively and
qualitatively outperforms other image-based and mesh-based cardiac motion
tracking methods.
Subject-Specific Lesion Generation and Pseudo-Healthy Synthesis for Multiple Sclerosis Brain Images
Understanding the intensity characteristics of brain lesions is key for
defining image-based biomarkers in neurological studies and for predicting
disease burden and outcome. In this work, we present a novel foreground-based
generative method for modelling the local lesion characteristics that can both
generate synthetic lesions on healthy images and synthesize subject-specific
pseudo-healthy images from pathological images. Furthermore, the proposed
method can be used as a data augmentation module to generate synthetic images
for training brain image segmentation networks. Experiments on multiple
sclerosis (MS) brain images acquired on magnetic resonance imaging (MRI)
demonstrate that the proposed method can generate highly realistic
pseudo-healthy and pseudo-pathological brain images. Data augmentation using
the synthetic images improves the brain image segmentation performance compared
to traditional data augmentation methods as well as a recent lesion-aware data
augmentation technique, CarveMix. The code will be released at
https://github.com/dogabasaran/lesion-synthesis.
Comment: 13 pages, 6 figures, 2022 MICCAI SASHIMI (Simulation and Synthesis in
Medical Imaging) Workshop paper
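The two directions described in the abstract, compositing a synthetic lesion foreground onto a healthy image and synthesising a pseudo-healthy image from a pathological one, can be caricatured in a few lines. This is a deliberately crude NumPy sketch, not the generative model from the paper: the alpha-blend compositing and the mean-intensity inpainting are stand-ins chosen only to make the two operations concrete.

```python
import numpy as np

def add_lesion(healthy, lesion_mask, lesion_intensity, blend=0.8):
    """Composite a synthetic lesion foreground onto a healthy
    image inside the given boolean mask (simple alpha blend)."""
    out = healthy.copy()
    out[lesion_mask] = (blend * lesion_intensity
                        + (1 - blend) * healthy[lesion_mask])
    return out

def pseudo_healthy(pathological, lesion_mask):
    """Replace lesion voxels with the mean intensity of the
    surrounding normal tissue -- a crude stand-in for the
    learned pseudo-healthy synthesis in the paper."""
    fill = pathological[~lesion_mask].mean()
    out = pathological.copy()
    out[lesion_mask] = fill
    return out
```

Pairs produced this way (healthy image, same image with a synthetic lesion) are exactly the kind of augmented training data the abstract reports improving segmentation performance with.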
Multi-Atlas Segmentation using Partially Annotated Data: Methods and Annotation Strategies
Multi-atlas segmentation is a widely used tool in medical image analysis,
providing robust and accurate results by learning from annotated atlas
datasets. However, the availability of fully annotated atlas images for
training is limited due to the time required for the labelling task.
Segmentation methods requiring only a proportion of each atlas image to be
labelled could therefore reduce the workload on expert raters tasked with
annotating atlas images. To address this issue, we first re-examine the
labelling problem common in many existing approaches and formulate its solution
in terms of a Markov Random Field energy minimisation problem on a graph
connecting atlases and the target image. This provides a unifying framework for
multi-atlas segmentation. We then show how modifications in the graph
configuration of the proposed framework enable the use of partially annotated
atlas images and investigate different partial annotation strategies. The
proposed method was evaluated on two Magnetic Resonance Imaging (MRI) datasets
for hippocampal and cardiac segmentation. Experiments were designed to
(1) recreate existing segmentation techniques within the proposed framework and
(2) demonstrate the potential of employing sparsely annotated atlas data for
multi-atlas segmentation.
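To make the Markov Random Field formulation above concrete, here is a minimal labelling sketch that minimises a unary-plus-Potts energy on a 4-connected grid using iterated conditional modes (ICM). ICM is only a simple stand-in for the graph-based solvers typically used in multi-atlas fusion, and the unary costs here would in practice come from atlas-to-target correspondences; everything below is illustrative.

```python
import numpy as np

def icm_segment(unary, beta=1.0, iters=5):
    """Approximately minimise
        E(x) = sum_i U_i(x_i) + beta * sum_{ij in N} [x_i != x_j]
    over a 4-connected H x W grid by iterated conditional modes.
    `unary` has shape (H, W, n_labels)."""
    H, W, L = unary.shape
    labels = unary.argmin(axis=2)          # independent initialisation
    for _ in range(iters):
        for i in range(H):
            for j in range(W):
                costs = unary[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        # Potts penalty for disagreeing with a neighbour.
                        for l in range(L):
                            if l != labels[ni, nj]:
                                costs[l] += beta
                labels[i, j] = costs.argmin()
    return labels
```

The pairwise term is what lets the framework propagate information across the graph, which is also what makes partially annotated atlases usable: unlabelled atlas regions simply contribute no unary evidence.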
Automatic 3D bi-ventricular segmentation of cardiac images by a shape-refined multi-task deep learning approach
Deep learning approaches have achieved state-of-the-art performance in
cardiac magnetic resonance (CMR) image segmentation. However, most approaches
have focused on learning image intensity features for segmentation, whereas the
incorporation of anatomical shape priors has received less attention. In this
paper, we combine a multi-task deep learning approach with atlas propagation to
develop a shape-constrained bi-ventricular segmentation pipeline for short-axis
CMR volumetric images. The pipeline first employs a fully convolutional network
(FCN) that learns segmentation and landmark localisation tasks simultaneously.
The architecture of the proposed FCN uses a 2.5D representation, thus combining
the computational advantage of 2D FCNs with the capability of addressing 3D
spatial consistency, without compromising segmentation accuracy.
Moreover, a subsequent refinement step is designed to explicitly enforce a shape
constraint and improve segmentation quality. This step is effective for
overcoming image artefacts (e.g. due to different breath-hold positions and
large slice thickness), which preclude the creation of anatomically meaningful
3D cardiac shapes. The proposed pipeline is fully automated, owing to the network's
ability to infer landmarks, which are then used downstream in the pipeline to
initialise atlas propagation. We validate the pipeline on 1831 healthy subjects
and 649 subjects with pulmonary hypertension. Extensive numerical experiments
on the two datasets demonstrate that our proposed method is robust and capable
of producing accurate, high-resolution and anatomically smooth bi-ventricular
3D models, despite the artefacts in input CMR volumes.
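The 2.5D representation mentioned in the abstract, feeding each short-axis slice to a 2D network together with its through-plane neighbours as extra input channels, can be sketched as a simple preprocessing step. This is an assumed reading of "2.5D" (slice plus neighbouring slices as channels), not the paper's exact pipeline, and the function name and padding choice are invented for the sketch.

```python
import numpy as np

def make_25d_stacks(volume, context=1):
    """Turn a 3D volume of shape (D, H, W) into per-slice 2.5D
    inputs of shape (D, 2*context+1, H, W): each slice plus
    `context` neighbours on either side become the channels of
    a 2D network input, giving it through-plane information at
    2D computational cost. Edge slices are padded by repetition."""
    D = volume.shape[0]
    padded = np.concatenate(
        [volume[:1]] * context + [volume] + [volume[-1:]] * context)
    return np.stack([padded[i:i + 2 * context + 1] for i in range(D)])
```

Because large slice thickness and inconsistent breath-hold positions corrupt through-plane continuity, pairing such a 2.5D network with an explicit shape-refinement step, as the pipeline above does, is what recovers anatomically smooth 3D surfaces.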