Automatic Segmentation and Disease Classification Using Cardiac Cine MR Images
Segmentation of the heart in cardiac cine MR is clinically used to quantify
cardiac function. We propose a fully automatic method for segmentation and
disease classification using cardiac cine MR images. A convolutional neural
network (CNN) was designed to simultaneously segment the left ventricle (LV),
right ventricle (RV) and myocardium in end-diastole (ED) and end-systole (ES)
images. Features derived from the obtained segmentations were used in a Random
Forest classifier to label patients as suffering from dilated cardiomyopathy,
hypertrophic cardiomyopathy, heart failure following myocardial infarction,
right ventricular abnormality, or no cardiac disease. The method was developed
and evaluated using a balanced dataset containing images of 100 patients, which
was provided in the MICCAI 2017 automated cardiac diagnosis challenge (ACDC).
The segmentation and classification pipeline was evaluated with four-fold
stratified cross-validation. Average Dice scores between reference and
automatically obtained segmentations were 0.94, 0.88 and 0.87 for the LV, RV
and myocardium. The classifier assigned 91% of patients to the correct disease
category. Segmentation and disease classification took 5 s per patient. The
results of our study suggest that image-based diagnosis using cine MR cardiac
scans can be performed automatically with high accuracy.
Comment: Accepted in STACOM Automated Cardiac Diagnosis Challenge 201
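The pipeline above can be sketched in two stages: volumetric features derived from the ED/ES segmentations, followed by a Random Forest over those features. The label encoding, feature set, and training data below are illustrative placeholders, not the paper's exact choices:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def volumes_ml(seg, voxel_ml):
    """Cavity volumes from a label map (LV=1, RV=2, myocardium=3 is an
    illustrative encoding, not necessarily the paper's)."""
    return {label: float(np.sum(seg == label)) * voxel_ml for label in (1, 2, 3)}

def ejection_fraction(edv, esv):
    """Fraction of the end-diastolic volume ejected during systole."""
    return (edv - esv) / edv

# Toy ED/ES segmentations of a shrinking LV cavity.
ed = np.zeros((4, 4, 4), int); ed[:2] = 1
es = np.zeros((4, 4, 4), int); es[:1] = 1
lv_edv = volumes_ml(ed, 1.0)[1]            # 32.0 ml
lv_esv = volumes_ml(es, 1.0)[1]            # 16.0 ml
lv_ef = ejection_fraction(lv_edv, lv_esv)  # 0.5

# Features like these, computed per patient, feed a Random Forest that
# assigns one of the five disease categories (synthetic data shown).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 7))              # 100 patients, 7 features
y = np.repeat(np.arange(5), 20)            # 5 balanced classes
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```

Volume-derived quantities such as ejection fraction are classic markers of dilated and hypertrophic cardiomyopathy, which is why a shallow classifier suffices once the segmentation is accurate.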
LaB-GATr: geometric algebra transformers for large biomedical surface and volume meshes
Many anatomical structures can be described by surface or volume meshes.
Machine learning is a promising tool to extract information from these 3D
models. However, high-fidelity meshes often contain hundreds of thousands of
vertices, which creates unique challenges in building deep neural network
architectures. Furthermore, patient-specific meshes may not be canonically
aligned which limits the generalisation of machine learning algorithms. We
propose LaB-GATr, a transformer neural network with geometric tokenisation that
can effectively learn with large-scale (bio-)medical surface and volume meshes
through sequence compression and interpolation. Our method extends the recently
proposed geometric algebra transformer (GATr) and thus respects all Euclidean
symmetries, i.e. rotation, translation and reflection, effectively mitigating
the problem of canonical alignment between patients. LaB-GATr achieves
state-of-the-art results on three tasks in cardiovascular hemodynamics
modelling and neurodevelopmental phenotype prediction, featuring meshes of up
to 200,000 vertices. Our results demonstrate that LaB-GATr is a powerful
architecture for learning with high-fidelity meshes which has the potential to
enable interesting downstream applications. Our implementation is publicly
available.
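The sequence compression and interpolation idea can be illustrated in plain numpy: pool per-vertex features into far fewer coarse tokens, run the (expensive) transformer on those, then scatter the result back to the full vertex set. The grid pooling here is a stand-in for LaB-GATr's learned geometric tokenisation, not its actual mechanism:

```python
import numpy as np

def compress(verts, feats, cell=0.25):
    """Pool vertex features into coarse grid cells (sequence compression):
    hundreds of thousands of vertices become a short token sequence."""
    keys = np.floor(verts / cell).astype(int)
    _, inv, counts = np.unique(keys, axis=0, return_inverse=True,
                               return_counts=True)
    pooled = np.zeros((counts.size, feats.shape[1]))
    np.add.at(pooled, inv, feats)   # sum features per occupied cell
    pooled /= counts[:, None]       # mean-pool
    return pooled, inv

def interpolate(pooled, inv):
    """Scatter coarse token features back to the full vertex set."""
    return pooled[inv]

verts = np.random.default_rng(1).random((1000, 3))  # toy mesh vertices
feats = np.ones((1000, 8))                          # toy vertex features
tokens, inv = compress(verts, feats)
out = interpolate(tokens, inv)                      # per-vertex output again
```

In the actual architecture the pooled tokens would pass through GATr-style attention layers, whose multivector representation is what provides the rotation, translation, and reflection equivariance.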
Deep Learning-Based Carotid Artery Vessel Wall Segmentation in Black-Blood MRI Using Anatomical Priors
Carotid artery vessel wall thickness measurement is an essential step in the
monitoring of patients with atherosclerosis. This requires accurate
segmentation of the vessel wall, i.e., the region between an artery's lumen and
outer wall, in black-blood magnetic resonance (MR) images. Commonly used
convolutional neural networks (CNNs) for semantic segmentation are suboptimal
for this task as their use does not guarantee a contiguous ring-shaped
segmentation. Instead, in this work, we cast vessel wall segmentation as a
multi-task regression problem in a polar coordinate system. For each carotid
artery in each axial image slice, we aim to simultaneously find two
non-intersecting nested contours that together delineate the vessel wall. CNNs
applied to this problem enable an inductive bias that guarantees ring-shaped
vessel walls. Moreover, we identify a problem-specific training data
augmentation technique that substantially affects segmentation performance. We
apply our method to segmentation of the internal and external carotid artery
wall, and achieve top-ranking quantitative results in a public challenge, i.e.,
a median Dice similarity coefficient of 0.813 for the vessel wall and median
Hausdorff distances of 0.552 mm and 0.776 mm for lumen and outer wall,
respectively. Moreover, we show how the method improves over a conventional
semantic segmentation approach. These results show that it is feasible to
automatically obtain anatomically plausible segmentations of the carotid vessel
wall with high accuracy.
Comment: SPIE Medical Imaging 202
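The ring-shaped guarantee follows from the polar parameterization: if the network regresses, per angle, a lumen radius and a strictly positive wall thickness, the two contours are nested by construction. A minimal sketch of that decoding step (function names and the thickness floor are illustrative):

```python
import numpy as np

def ring_contours(center, lumen_r, wall_thickness, min_thickness=1e-3):
    """Two nested contours from per-angle radii. Because the outer radius is
    the lumen radius plus a positive thickness, the contours cannot
    intersect, so the enclosed region is always a contiguous ring."""
    angles = np.linspace(0.0, 2.0 * np.pi, len(lumen_r), endpoint=False)
    outer_r = lumen_r + np.maximum(wall_thickness, min_thickness)
    d = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # unit directions
    lumen = center + lumen_r[:, None] * d
    outer = center + outer_r[:, None] * d
    return lumen, outer

center = np.array([64.0, 64.0])          # artery center in the image slice
lumen_r = np.full(32, 3.0)               # regressed lumen radii (px)
thick = np.full(32, 1.2)                 # regressed wall thickness (px)
lumen, outer = ring_contours(center, lumen_r, thick)
```

A per-pixel semantic segmentation CNN has no such constraint, which is why it can produce fragmented, non-anatomical wall masks on low-contrast slices.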
Implicit Neural Representations for Deformable Image Registration
Deformable medical image registration has in past years been revolutionized by the use of convolutional neural networks. These methods surpass conventional image registration techniques in speed but not in accuracy. Here, we present an alternative approach to leveraging neural networks for image registration. Instead of using a convolutional neural network to predict the transformation between images, we optimize a multi-layer perceptron to represent this transformation function. Using recent insights from differentiable rendering, we show how such an implicit deformable image registration (IDIR) model can be naturally combined with regularization terms based on standard automatic differentiation techniques. We demonstrate the effectiveness of this model on 4D chest CT registration in the DIR-LAB data set and find that a three-layer multi-layer perceptron with periodic activation functions outperforms all published deep learning-based results on this problem, without any folding and without the need for training data. The model is implemented using standard deep learning libraries and flexible enough to be extended to include different losses, regularizers, and optimization schemes.
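The core object here is small: an MLP with periodic (sine) activations that maps a coordinate x in the fixed image to a displacement u(x), so that x + u(x) indexes the moving image. A forward-pass sketch in numpy is below; the SIREN-style initialization is simplified, and in practice the network would live in an autodiff framework so the similarity loss and regularizers can be optimized per image pair:

```python
import numpy as np

class SirenMLP:
    """Tiny MLP with periodic activations representing a deformation field
    u(x). Registration then optimizes its weights for one image pair,
    needing no training data (initialization here is a simplified sketch)."""

    def __init__(self, widths=(3, 64, 64, 3), w0=30.0, seed=0):
        rng = np.random.default_rng(seed)
        self.w0 = w0
        self.layers = [(rng.uniform(-1, 1, (a, b)) * np.sqrt(6.0 / a),
                        np.zeros(b)) for a, b in zip(widths, widths[1:])]

    def __call__(self, x):
        h = x
        for W, b in self.layers[:-1]:
            h = np.sin(self.w0 * (h @ W + b))  # periodic activation
        W, b = self.layers[-1]
        return h @ W + b                       # displacement u(x)

net = SirenMLP()
x = np.zeros((5, 3))        # coordinates in the fixed image
warped = x + net(x)         # where to sample the moving image
```

Because the transformation is an analytic function of x, derivatives of u (e.g. the Jacobian used in folding-penalty regularizers) come directly from automatic differentiation rather than finite differences on a displacement grid.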
Local Implicit Neural Representations for Multi-Sequence MRI Translation
In radiological practice, multi-sequence MRI is routinely acquired to
characterize anatomy and tissue. However, due to the heterogeneity of imaging
protocols and contra-indications to contrast agents, some MRI sequences, e.g.
contrast-enhanced T1-weighted image (T1ce), may not be acquired. This creates
difficulties for large-scale clinical studies for which heterogeneous datasets
are aggregated. Modern deep learning techniques have demonstrated the
capability of synthesizing missing sequences from existing sequences, through
learning from an extensive multi-sequence MRI dataset. In this paper, we
propose a novel MR image translation solution based on local implicit neural
representations. We split the available MRI sequences into local patches and
assign to each patch a local multi-layer perceptron (MLP) that represents a
patch in the T1ce. The parameters of these local MLPs are generated by a
hypernetwork based on image features. Experimental results and ablation studies
on the BraTS challenge dataset showed that the local MLPs are critical for
recovering fine image and tumor details, as they allow for local specialization
that is highly important for accurate image translation. Compared to a
classical pix2pix model, the proposed method demonstrated visual improvement
and significantly improved quantitative scores (MSE 0.86 x 10^-3 vs. 1.02 x
10^-3 and SSIM 94.9 vs. 94.3).
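The hypernetwork idea can be sketched concretely: image features of a patch are mapped to the full parameter vector of that patch's local MLP, which then predicts T1ce intensity at continuous coordinates. Everything below (a linear hypernetwork, layer sizes, tanh activation) is an illustrative simplification of the paper's design:

```python
import numpy as np

def hypernetwork(feat, n_in=3, n_hidden=16, seed=0):
    """Map a patch feature vector to the weights of a small local MLP
    (one hidden layer; a fixed linear map stands in for the learned
    hypernetwork)."""
    rng = np.random.default_rng(seed)
    n_params = n_in * n_hidden + n_hidden + n_hidden + 1
    H = rng.normal(scale=0.1, size=(feat.size, n_params))
    p = feat @ H                      # one parameter vector per patch
    i = 0
    W1 = p[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = p[i:i + n_hidden]; i += n_hidden
    W2 = p[i:i + n_hidden].reshape(n_hidden, 1); i += n_hidden
    b2 = p[i:i + 1]
    return W1, b1, W2, b2

def local_mlp(coords, params):
    """Predict T1ce intensity at (continuous) patch coordinates."""
    W1, b1, W2, b2 = params
    return np.tanh(coords @ W1 + b1) @ W2 + b2

feat = np.ones(8)                     # features of one patch
params = hypernetwork(feat)
out = local_mlp(np.zeros((10, 3)), params)
```

Giving every patch its own generated weights is what allows the local specialization the ablations found critical for fine tumor detail, while the shared hypernetwork keeps the parameter count tractable.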
Coronary Artery Centerline Extraction in Cardiac CT Angiography Using a CNN-Based Orientation Classifier
Coronary artery centerline extraction in cardiac CT angiography (CCTA) images
is a prerequisite for evaluation of stenoses and atherosclerotic plaque. We
propose an algorithm that extracts coronary artery centerlines in CCTA using a
convolutional neural network (CNN).
A 3D dilated CNN is trained to predict the most likely direction and radius
of an artery at any given point in a CCTA image based on a local image patch.
Starting from a single seed point placed manually or automatically anywhere in
a coronary artery, a tracker follows the vessel centerline in two directions
using the predictions of the CNN. Tracking is terminated when no direction can
be identified with high certainty.
The CNN was trained using 32 manually annotated centerlines in a training set
consisting of 8 CCTA images provided in the MICCAI 2008 Coronary Artery
Tracking Challenge (CAT08). Evaluation using 24 test images of the CAT08
challenge showed that extracted centerlines had an average overlap of 93.7%
with 96 manually annotated reference centerlines. Extracted centerline points
were highly accurate, with an average distance of 0.21 mm to reference
centerline points. In a second test set consisting of 50 CCTA scans, 5,448
markers in the coronary arteries were used as seed points to extract single
centerlines. This showed strong correspondence between extracted centerlines
and manually placed markers. In a third test set containing 36 CCTA scans,
fully automatic seeding and centerline extraction led to extraction of on
average 92% of clinically relevant coronary artery segments.
The proposed method is able to accurately and efficiently determine the
direction and radius of coronary arteries. The method can be trained with
limited training data, and once trained allows fast automatic or interactive
extraction of coronary artery trees from CCTA images.
Comment: Accepted in Medical Image Analysis
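The tracking loop described above can be sketched independently of the CNN: from a seed, repeatedly query a predictor for the local direction, radius, and confidence, step along the direction, and stop when confidence drops. The step size, confidence threshold, and the stub predictor below are illustrative stand-ins, not the paper's values:

```python
import numpy as np

def track(seed, predict, step=0.5, max_steps=200, min_conf=0.5):
    """Follow a centerline from a seed point in one direction using a
    predictor that returns (unit direction, radius, confidence) at a point.
    Terminates when no direction is identified with high certainty."""
    points, p = [np.asarray(seed, float)], np.asarray(seed, float)
    prev_dir = None
    for _ in range(max_steps):
        d, radius, conf = predict(p)
        if conf < min_conf:
            break                          # uncertain: stop tracking
        if prev_dir is not None and np.dot(d, prev_dir) < 0:
            d = -d                         # keep moving forward, not back
        p = p + step * d
        prev_dir = d
        points.append(p.copy())
    return np.array(points)

def straight_vessel(p):
    """Stub for the CNN orientation classifier: a straight synthetic vessel
    along the x-axis that ends at x = 10."""
    conf = 1.0 if p[0] < 10.0 else 0.0
    return np.array([1.0, 0.0, 0.0]), 1.5, conf

path = track(np.zeros(3), straight_vessel)
```

In the full method this loop runs in both directions from the seed, and the direction estimate comes from the dilated CNN's classification over discretized orientations on a local image patch.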