Automated CT and MRI Liver Segmentation and Biometry Using a Generalized Convolutional Neural Network.
Purpose: To assess the feasibility of training a convolutional neural network (CNN) to automate liver segmentation across the different imaging modalities and techniques used in clinical practice, and to apply this to automate liver biometry.

Methods: We trained a 2D U-Net CNN for liver segmentation in two stages using 330 abdominal MRI and CT exams acquired at our institution. First, we trained the network on non-contrast multi-echo spoiled gradient-echo (SPGR) images from 300 MRI exams to provide multiple signal weightings. Then, we used transfer learning to generalize the CNN with additional images from 30 contrast-enhanced MRI and CT exams. We assessed the performance of the CNN on a distinct multi-institutional data set curated from multiple sources (n = 498 subjects). Segmentation accuracy was evaluated by computing Dice scores. Using these segmentations, we computed liver volume from CT and T1-weighted (T1w) MRI exams and estimated hepatic proton-density fat fraction (PDFF) from multi-echo T2*w MRI exams. We compared quantitative volumetry and PDFF estimates between automated and manual segmentation using Pearson correlation and Bland-Altman statistics.

Results: Dice scores were 0.94 ± 0.06 for CT (n = 230), 0.95 ± 0.03 for T1w MR (n = 100), and 0.92 ± 0.05 for T2*w MR (n = 169). Liver volume measured by manual and automated segmentation agreed closely for CT (95% limits of agreement (LoA) = [-298 mL, 180 mL]) and T1w MR (LoA = [-358 mL, 180 mL]). Hepatic PDFF measured by the two segmentations also agreed closely (LoA = [-0.62%, 0.80%]).

Conclusions: Using a transfer-learning strategy, we demonstrated the feasibility of generalizing a CNN to perform liver segmentation across different imaging techniques and modalities. With further refinement and validation, CNNs may have broad applicability for multimodal liver volumetry and hepatic tissue characterization.
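The Dice score used above to evaluate segmentation accuracy has a simple closed form, 2|A∩B| / (|A| + |B|) for binary masks A and B. A minimal sketch (function name and toy masks are illustrative, not from the paper):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 4x4 masks: 8 predicted voxels, 8 true voxels, 6 overlapping
pred  = np.array([[1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0]])
truth = np.array([[1, 1, 1, 0], [1, 1, 1, 0], [1, 0, 0, 0], [1, 0, 0, 0]])
print(dice_score(pred, truth))  # 2*6 / (8+8) = 0.75
```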
Attention Gated Networks: Learning to Leverage Salient Regions in Medical Images
We propose a novel attention gate (AG) model for medical image analysis that
automatically learns to focus on target structures of varying shapes and sizes.
Models trained with AGs implicitly learn to suppress irrelevant regions in an
input image while highlighting salient features useful for a specific task.
This enables us to eliminate the necessity of using explicit external
tissue/organ localisation modules when using convolutional neural networks
(CNNs). AGs can be easily integrated into standard CNN models such as VGG or
U-Net architectures with minimal computational overhead while increasing the
model sensitivity and prediction accuracy. The proposed AG models are evaluated
on a variety of tasks, including medical image classification and segmentation.
For classification, we demonstrate the use case of AGs in scan plane detection
for fetal ultrasound screening. We show that the proposed attention mechanism
can provide efficient object localisation while improving the overall
prediction performance by reducing false positives. For segmentation, the
proposed architecture is evaluated on two large 3D CT abdominal datasets with
manual annotations for multiple organs. Experimental results show that AG
models consistently improve the prediction performance of the base
architectures across different datasets and training sizes while preserving
computational efficiency. Moreover, AGs guide the model activations to be
focused around salient regions, which provides better insights into how model
predictions are made. The source code for the proposed AG models is publicly
available.

Comment: Accepted for Medical Image Analysis (Special Issue on Medical Imaging
with Deep Learning). arXiv admin note: substantial text overlap with
arXiv:1804.03999, arXiv:1804.0533
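An additive attention gate of the kind described above can be sketched in a few lines; this is a simplified, single-head, per-pixel version with illustrative shapes and names, not the authors' implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, W_x, W_g, psi):
    """Simplified additive attention gate.

    x : (H, W, C)  skip-connection features to be gated
    g : (H, W, C)  coarser gating signal (assumed resampled to x's grid)
    W_x, W_g : (C, F) linear maps into a shared intermediate space
    psi : (F,) map from the intermediate space to a scalar attention logit
    Returns x scaled by per-pixel attention coefficients in (0, 1).
    """
    q = np.maximum(x @ W_x + g @ W_g, 0.0)  # additive attention + ReLU
    alpha = sigmoid(q @ psi)                # (H, W) attention map
    return x * alpha[..., None], alpha

rng = np.random.default_rng(0)
H, W, C, F = 8, 8, 4, 6
x = rng.standard_normal((H, W, C))
g = rng.standard_normal((H, W, C))
out, alpha = attention_gate(x, g,
                            rng.standard_normal((C, F)),
                            rng.standard_normal((C, F)),
                            rng.standard_normal(F))
print(out.shape, alpha.shape)  # (8, 8, 4) (8, 8)
```

Irrelevant regions receive attention coefficients near zero, which suppresses their features before concatenation with the decoder path.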
Prior Guided Deep Difference Meta-Learner for Fast Adaptation to Stylized Segmentation
When a pre-trained general auto-segmentation model is deployed at a new
institution, a support framework in the proposed Prior-guided DDL network will
learn the systematic difference between the model predictions and the final
contours revised and approved by clinicians for an initial group of patients.
The learned style-feature differences are concatenated with the new patients'
(query) features and then decoded to obtain the style-adapted segmentations. The
model is independent of practice styles and anatomical structures. It
meta-learns with simulated style differences and does not need to be exposed to
any real clinical stylized structures during training. Once trained on the
simulated data, it can be deployed for clinical use to adapt to new practice
styles and new anatomical structures without further training.
To show the proof of concept, we tested the Prior-guided DDL network on six
different practice style variations for three different anatomical structures.
Pre-trained segmentation models were adapted from post-operative clinical
target volume (CTV) segmentation to segment CTVstyle1, CTVstyle2, and
CTVstyle3, from parotid gland segmentation to segment Parotidsuperficial, and
from rectum segmentation to segment Rectumsuperior and Rectumposterior. The
model performance was quantified with the Dice Similarity Coefficient (DSC).
With adaptation based on only the first three patients, the average DSCs
improved from 78.6, 71.9, 63.0, 52.2, 46.3, and 69.6 to 84.4, 77.8, 73.0, 77.8,
70.5, and 68.1 for CTVstyle1, CTVstyle2, CTVstyle3, Parotidsuperficial,
Rectumsuperior, and Rectumposterior, respectively, showing the great potential
of the Prior-guided DDL network for fast and effortless adaptation to new
practice styles.
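The support/query idea behind this adaptation can be caricatured very simply: the support patients supply the systematic difference between model predictions and clinician-revised contours, and that difference is applied to the query. The sketch below is a crude stand-in for the learned DDL network (all names and the additive-correction rule are illustrative assumptions):

```python
import numpy as np

def adapt_with_support(support_preds, support_revised, query_pred, strength=1.0):
    """Crude illustration of support/query style adaptation.

    The support set supplies the systematic difference between model
    predictions and clinician-revised contours; the mean difference is
    applied to the query prediction. All arrays are probability maps in [0, 1].
    """
    # Mean systematic (style) difference learned from the support patients
    style_diff = np.mean([r - p for p, r in zip(support_preds, support_revised)],
                         axis=0)
    # Apply the learned difference to the query and re-binarize
    adapted = np.clip(query_pred + strength * style_diff, 0.0, 1.0)
    return (adapted > 0.5).astype(np.uint8)

# Toy 1D "contours": clinicians consistently extend the structure to the right
support_preds   = [np.array([1, 1, 1, 0, 0], float)] * 3
support_revised = [np.array([1, 1, 1, 1, 0], float)] * 3
query_pred      = np.array([0, 1, 1, 0, 0], float)
print(adapt_with_support(support_preds, support_revised, query_pred))  # [0 1 1 1 0]
```

The actual network meta-learns this mapping in feature space from simulated style differences rather than applying a fixed additive correction.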
Learnable Weight Initialization for Volumetric Medical Image Segmentation
Hybrid volumetric medical image segmentation models, combining the advantages
of local convolution and global attention, have recently received considerable
attention. While focusing mainly on architectural modifications, most existing
hybrid approaches still use conventional, data-independent weight-initialization
schemes, which limit performance by ignoring the inherent volumetric nature of
medical data. To address this issue, we propose a
learnable weight initialization approach that utilizes the available medical
training data to effectively learn the contextual and structural cues via the
proposed self-supervised objectives. Our approach is easy to integrate into any
hybrid model and requires no external training data. Experiments on multi-organ
and lung cancer segmentation tasks demonstrate the effectiveness of our
approach, leading to state-of-the-art segmentation performance. Our source code
and models are available at: https://github.com/ShahinaKK/LWI-VMS.

Comment: Technical Report
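The contrast between data-independent and data-dependent initialization can be illustrated with a simple variance-normalizing scheme in the spirit of LSUV (layer-sequential unit-variance) initialization; this is a generic stand-in, not the paper's self-supervised objective:

```python
import numpy as np

def data_dependent_init(W, batch, target_var=1.0, iters=10, tol=0.01):
    """Rescale a linear layer's weights so its pre-activations have unit
    variance on a batch of real training data (LSUV-style; a generic
    stand-in for learned, data-aware initialization)."""
    W = W.copy()
    for _ in range(iters):
        var = np.var(batch @ W)
        if abs(var - target_var) < tol:
            break
        W /= np.sqrt(var)  # rescale so the next pass lands at target_var
    return W

rng = np.random.default_rng(0)
batch = 3.0 * rng.standard_normal((64, 32))   # training features, non-unit scale
W = 0.01 * rng.standard_normal((32, 16))      # data-independent init, tiny scale
print(np.var(batch @ W))                      # far from 1.0
W_init = data_dependent_init(W, batch)
print(np.var(batch @ W_init))                 # ~1.0
```

Unlike a fixed Gaussian or He initialization, the scale here is set by the statistics of the actual training batch, which is the basic motivation behind learning initial weights from the volumetric medical data itself.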