20 research outputs found
Automatic 3D bi-ventricular segmentation of cardiac images by a shape-refined multi-task deep learning approach
Deep learning approaches have achieved state-of-the-art performance in
cardiac magnetic resonance (CMR) image segmentation. However, most approaches
have focused on learning image intensity features for segmentation, whereas the
incorporation of anatomical shape priors has received less attention. In this
paper, we combine a multi-task deep learning approach with atlas propagation to
develop a shape-constrained bi-ventricular segmentation pipeline for short-axis
CMR volumetric images. The pipeline first employs a fully convolutional network
(FCN) that learns segmentation and landmark localisation tasks simultaneously.
The architecture of the proposed FCN uses a 2.5D representation, thus combining
the computational advantage of 2D FCNs with the capability of addressing 3D
spatial consistency without compromising segmentation accuracy.
Moreover, the refinement step is designed to explicitly enforce a shape
constraint and improve segmentation quality. This step is effective for
overcoming image artefacts (e.g. due to different breath-hold positions and
large slice thickness), which preclude the creation of anatomically meaningful
3D cardiac shapes. The proposed pipeline is fully automated, owing to the network's
ability to infer landmarks, which are then used downstream in the pipeline to
initialise atlas propagation. We validate the pipeline on 1831 healthy subjects
and 649 subjects with pulmonary hypertension. Extensive numerical experiments
on the two datasets demonstrate that our proposed method is robust and capable
of producing accurate, high-resolution and anatomically smooth bi-ventricular
3D models, despite the artefacts in input CMR volumes.
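The 2.5D representation mentioned above feeds a 2D network with a stack of neighbouring slices as extra input channels. The paper's exact architecture is not given here; the following is a minimal sketch of the slice-stacking idea only, with a hypothetical helper name and boundary handling by clamping.

```python
def stack_adjacent_slices(volume, k=1):
    """Build a 2.5D input for each slice of a 3D volume.

    volume: list of 2D slices (each entry can be any 2D array-like).
    k: number of neighbouring slices taken on each side; out-of-range
       neighbours are clamped to the first/last slice of the volume.
    Returns a list whose entry i holds 2*k + 1 slices centred on slice i,
    ready to be fed to a 2D network as input channels.
    """
    n = len(volume)
    stacked = []
    for i in range(n):
        # Clamp neighbour indices so apical/basal slices reuse the edge slice.
        channels = [volume[min(max(i + d, 0), n - 1)] for d in range(-k, k + 1)]
        stacked.append(channels)
    return stacked
```

This keeps the per-slice computational cost of a 2D FCN while giving the network some through-plane context, which is the trade-off the abstract describes.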
Analyzing fibrous tissue pattern in fibrous dysplasia bone images using deep R-CNN networks for segmentation
Predictive health monitoring systems help to detect threats to human health at an early stage, and evolving deep learning techniques in medical image analysis provide efficient feedback in a short time. Fibrous dysplasia (FD) is a genetic disorder triggered by a mutation in the guanine-nucleotide-binding protein with alpha-stimulatory activity during human bone genesis. It slowly occupies the bone marrow and converts bone cells into fibrous tissue, weakening the bone structure and leading to permanent disability. This paper studies techniques for analyzing FD bone images with deep networks. In addition, a linear regression model is fitted to predict bone abnormality levels from the observed coefficients. Modern image processing begins with various image filters that describe the edges, shades, and texture values of the receptive field. Different segmentation and edge detection mechanisms are applied to locate the tumor, lesion, and fibrous tissues in the bone image, and the fibrous region is extracted using a region-based convolutional neural network (R-CNN) algorithm. The segmented results are compared on their accuracy metrics, and the segmentation loss decreases with each iteration. The overall loss is 0.24% and the overall accuracy 99%; segmenting the masked region achieves 98% accuracy, and the bounding boxes are built with 99% accuracy.
Fully Automated Segmentation of the Left Ventricle in Magnetic Resonance Images
Automatic and robust segmentation of the left ventricle (LV) in magnetic
resonance images (MRI) has remained challenging for many decades. With the
great success of deep learning in object detection and classification, the
research focus of LV segmentation has changed to convolutional neural network
(CNN) in recent years. However, LV segmentation is a pixel-level classification
problem whose categories are far harder to separate than those in object
detection and classification. Although many CNN-based methods have been proposed
for LV segmentation, no robust and reproducible results have been achieved yet. In this
paper, we try to reproduce the CNN based LV segmentation methods with their
disclosed codes and trained CNN models. Not surprisingly, the reproduced
results are significantly worse than their claimed accuracies. We also proposed
a fully automated LV segmentation method based on slope difference distribution
(SDD) threshold selection to compare with the reproduced CNN methods. The
proposed method achieved 95.44% DICE score on the test set of automated cardiac
diagnosis challenge (ACDC) while the two compared CNN methods achieved 90.28%
and 87.13% DICE scores. Our achieved accuracy is also higher than the best
accuracy reported in the published literature. The MATLAB code for our
proposed method is freely available online.
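The DICE (Dice) score reported above, and throughout these abstracts, measures the overlap between a predicted and a reference segmentation mask. A minimal sketch of its computation on flat binary masks:

```python
def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks.

    pred, truth: flat sequences of 0/1 labels of equal length.
    Returns 2|A ∩ B| / (|A| + |B|); defined as 1.0 when both masks
    are empty, a common convention for absent structures.
    """
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2.0 * inter / total
```

A score of 95.44% therefore means the predicted LV mask overlaps the manual mask almost completely, while the compared methods at roughly 87-90% leave noticeably more disagreement.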
Deep Learning-based Automated Aortic Area and Distensibility Assessment: The Multi-Ethnic Study of Atherosclerosis (MESA)
This study applies convolutional neural network (CNN)-based automatic
segmentation and distensibility measurement of the ascending and descending
aorta from 2D phase-contrast cine magnetic resonance imaging (PC-cine MRI)
within the large MESA cohort with subsequent assessment on an external cohort
of thoracic aortic aneurysm (TAA) patients. 2D PC-cine MRI images of the
ascending and descending aorta at the pulmonary artery bifurcation from the
MESA study were included. Training, validation, and internal test sets consisted
of 1123 studies (24282 images), 374 studies (8067 images), and 375 studies
(8069 images), respectively. An external test set of TAAs consisted of 37
studies (3224 images). A U-Net based CNN was constructed, and performance was
evaluated utilizing the Dice coefficient (for segmentation) and concordance
correlation coefficients (CCC) of aortic geometric parameters by comparing to
manual segmentation and parameter estimation. Dice coefficients for aorta
segmentation were 97.6% (CI: 97.5%-97.6%) and 93.6% (84.6%-96.7%) on the
internal and external test of TAAs, respectively. CCC for comparison of manual
and CNN maximum and minimum ascending aortic areas were 0.97 and 0.95,
respectively, on the internal test set and 0.997 and 0.995, respectively, for
the external test. CCCs for maximum and minimum descending aortic areas were
0.96 and 0.98, respectively, on the internal test set and 0.93 and 0.93,
respectively, on the external test set. We successfully developed and validated
a U-Net based ascending and descending aortic segmentation and distensibility
quantification model in a large multi-ethnic database and in an external cohort
of TAA patients.
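The concordance correlation coefficients (CCC) quoted above quantify how well the CNN-derived aortic areas agree with the manual ones, penalising both scatter and systematic offset. A minimal sketch of Lin's CCC using only the standard library (population variances, as in Lin's original formulation):

```python
from statistics import fmean

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two paired
    measurement series (e.g. manual vs CNN-derived aortic areas)."""
    mx, my = fmean(x), fmean(y)
    n = len(x)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    # Penalises location shift via the (mx - my)^2 term, unlike Pearson's r.
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

A CCC of 0.97-0.997, as reported for the ascending aortic areas, indicates near-perfect agreement in both correlation and calibration.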
Introduction of Lazy Luna, an automatic software-driven multilevel comparison of ventricular function quantification in cardiovascular magnetic resonance imaging
Cardiovascular magnetic resonance imaging is the gold standard for cardiac function assessment. Quantification of clinical results (CRs) requires precise segmentation. Clinicians statistically compare CRs to ensure reproducibility; convolutional neural network developers compare their results via metrics. Aim: to introduce software capable of automatic multilevel comparison. A multilevel analysis covering segmentations and CRs builds on a generic software backend. Metrics and CRs are calculated with geometric accuracy, and segmentations and CRs are connected to track errors and their effects. An interactive GUI makes the software accessible to different users. The software's multilevel comparison was tested on a use case based on cardiac function assessment. The software shows good reader agreement in CRs and segmentation metrics (Dice > 90%). Decomposing differences by cardiac position revealed excellent agreement in midventricular slices (Dice > 90%) but poorer segmentation in apical (> 71%) and basal (> 74%) slices. Further decomposition by contour type locates the largest millilitre differences in the basal right cavity (> 3 ml); visual inspection shows these differences are caused by different basal slice choices. The software illuminated reader differences on several levels, and producing spreadsheets and figures on metric values and CR differences was automated. A multilevel reader comparison is feasible and extendable to other cardiac structures in the future.
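The decomposition of Dice scores by cardiac position described above is, at its core, a group-and-aggregate over per-contour metrics. A minimal sketch of that step, with hypothetical function and position names (the actual Lazy Luna API is not shown in the abstract):

```python
from collections import defaultdict
from statistics import fmean

def dice_by_position(records):
    """Group per-contour Dice values by cardiac position and average them.

    records: iterable of (position, dice) pairs, where position is e.g.
    'basal', 'midventricular', or 'apical'.
    Returns {position: mean Dice}, mirroring the multilevel decomposition
    the software performs before drilling down to contour type.
    """
    groups = defaultdict(list)
    for position, dice in records:
        groups[position].append(dice)
    return {pos: fmean(vals) for pos, vals in groups.items()}
```

Grouping first by position and then by contour type is what lets the tool trace an aggregate CR difference down to, say, basal right-cavity contours.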
A Deep Learning Pipeline for Assessing Ventricular Volumes from a Cardiac Magnetic Resonance Image Registry of Single Ventricle Patients
Purpose: To develop an end-to-end deep learning (DL) pipeline for automated ventricular segmentation of cardiac MRI data from a multicenter registry of patients with Fontan circulation (FORCE).
Materials and Methods: This retrospective study used 250 cardiac MRI examinations (November 2007–December 2022) from 13 institutions for training, validation, and testing. The pipeline contained three DL models: a classifier to identify short-axis cine stacks and two UNet 3+ models for image cropping and segmentation. The automated segmentations were evaluated on the test set (n = 50) using the Dice score. Volumetric and functional metrics derived from DL and ground truth manual segmentations were compared using Bland-Altman and intraclass correlation analysis. The pipeline was further qualitatively evaluated on 475 unseen examinations.
Results: There were acceptable limits of agreement (LOA) and minimal biases between the ground truth and DL end-diastolic volume (EDV) (bias: -0.6 mL/m2, LOA: -20.6–19.5 mL/m2) and end-systolic volume (ESV) (bias: -1.1 mL/m2, LOA: -18.1–15.9 mL/m2), with high intraclass correlation coefficients (ICC > 0.97) and Dice scores (EDV, 0.91; ESV, 0.86). There was moderate agreement for ventricular mass (bias: -1.9 g/m2, LOA: -17.3–13.5 g/m2) with an ICC of 0.94. There was also acceptable agreement for stroke volume (bias: 0.6 mL/m2, LOA: -17.2–18.3 mL/m2) and ejection fraction (bias: 0.6%, LOA: -12.2%–13.4%), with high ICCs (> 0.81). The pipeline achieved satisfactory segmentation in 68% of the 475 unseen examinations, while 26% needed minor adjustments, 5% needed major adjustments, and in 0.4% the cropping model failed.
Conclusion: The DL pipeline can provide fast standardized segmentation for patients with single ventricle physiology across multiple centers. This pipeline can be applied to all cardiac MRI examinations in the FORCE registry.
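The bias and limits of agreement (LOA) quoted in the results come from a Bland-Altman analysis of paired manual and DL measurements. A minimal sketch of that computation using the standard library (the 1.96 factor gives the usual 95% limits under approximate normality of the differences):

```python
from statistics import fmean, stdev

def bland_altman(manual, automated):
    """Bland-Altman bias and 95% limits of agreement between paired
    measurements (e.g. manual vs deep-learning EDV in mL/m^2)."""
    diffs = [a - m for m, a in zip(manual, automated)]
    bias = fmean(diffs)                      # mean difference
    sd = stdev(diffs)                        # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A bias near zero with narrow limits, as reported for EDV and ESV, means the automated volumes can substitute for manual ones without a systematic offset.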
Fabric Image Representation Encoding Networks for Large-scale 3D Medical Image Analysis
Deep neural networks are parameterised by weights that encode feature
representations, and their generalisation performance is dictated by training
on large-scale, feature-rich datasets. The lack of large-scale labelled 3D
medical imaging datasets restricts the construction of such generalised
networks. In this work,
a novel 3D segmentation network, Fabric Image Representation Networks
(FIRENet), is proposed to extract and encode generalisable feature
representations from multiple medical image datasets in a large-scale manner.
FIRENet learns image-specific feature representations by way of a 3D fabric
network architecture that contains an exponential number of sub-architectures
to handle various protocols and coverage of anatomical regions and structures. The
fabric network uses Atrous Spatial Pyramid Pooling (ASPP) extended to 3D to
extract local and image-level features at a fine selection of scales. The
fabric is constructed with weighted edges allowing the learnt features to
dynamically adapt to the training data at an architecture level. Conditional
padding modules, which are integrated into the network to reinsert voxels
discarded by feature pooling, allow the network to inherently process
different-size images at their original resolutions. FIRENet was trained for
feature learning via automated semantic segmentation of pelvic structures and
obtained a state-of-the-art median DSC score of 0.867. FIRENet was also
simultaneously trained on MR (Magnetic Resonance) images acquired from 3D
examinations of musculoskeletal elements of the hip, knee, and shoulder joints
and a public OAI knee dataset to perform automated segmentation of bone across
anatomy. Transfer learning was used to show that the features learnt through
the pelvic segmentation helped achieve improved mean DSC scores of 0.962,
0.963, 0.945 and 0.986 for automated segmentation of bone across datasets.
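The ASPP module mentioned above samples context at multiple scales by varying the dilation rate of a fixed-size kernel rather than enlarging the kernel itself. A minimal sketch of the underlying arithmetic along one axis (the helper names are illustrative, not FIRENet's API):

```python
def dilated_offsets(k, dilation):
    """Input offsets sampled along one axis by a size-k kernel with the
    given dilation rate; gaps of (dilation - 1) voxels are skipped."""
    return [i * dilation for i in range(k)]

def effective_kernel_size(k, dilation):
    """Effective spatial extent of the dilated kernel:
    k + (k - 1) * (dilation - 1)."""
    return dilated_offsets(k, dilation)[-1] + 1
```

For example, a size-3 kernel at dilation rates 1, 2, and 4 covers extents of 3, 5, and 9 voxels with the same number of weights, which is how ASPP gathers local and image-level features at a fine selection of scales.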