Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations
Deep learning has proved in recent years to be a powerful tool for image
analysis and is now widely used to segment both 2D and 3D medical images.
Deep-learning segmentation frameworks rely not only on the choice of network
architecture but also on the choice of loss function. When the segmentation
process targets rare observations, a severe class imbalance is likely to occur
between candidate labels, thus resulting in sub-optimal performance. To
mitigate this issue, strategies such as the weighted cross-entropy function,
the sensitivity function, or the Dice loss function have been proposed. In this
work, we investigate the behavior of these loss functions and their sensitivity
to learning rate tuning in the presence of different rates of label imbalance
across 2D and 3D segmentation tasks. We also propose to use the class
re-balancing properties of the Generalized Dice overlap, a known metric for
segmentation assessment, as a robust and accurate deep-learning loss function
for unbalanced tasks.
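As a concrete illustration of the class re-balancing idea, here is a minimal PyTorch sketch of a Generalised Dice loss with inverse squared-volume class weights; the tensor layout and the eps stabiliser are assumptions for illustration, not the authors' reference implementation.

```python
import torch

def generalised_dice_loss(probs, onehot, eps=1e-6):
    """Generalised Dice loss with inverse squared-volume class weights.

    probs:  (N, C, ...) softmax probabilities.
    onehot: (N, C, ...) one-hot reference segmentation.
    """
    dims = (0,) + tuple(range(2, probs.dim()))   # sum over batch and spatial dims
    ref_vol = onehot.sum(dim=dims)               # per-class reference volume
    weights = 1.0 / (ref_vol ** 2 + eps)         # rare classes get large weights
    intersect = (probs * onehot).sum(dim=dims)
    denom = (probs + onehot).sum(dim=dims)
    return 1.0 - 2.0 * (weights * intersect).sum() / ((weights * denom).sum() + eps)
```

Because the weights shrink with the square of each class's volume, a small lesion class contributes to the loss on a comparable scale to the background, which is the re-balancing property the abstract refers to.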
Elastic Registration of Geodesic Vascular Graphs
Vascular graphs can embed a number of high-level features, from morphological
parameters, to functional biomarkers, and represent an invaluable tool for
longitudinal and cross-sectional clinical inference. This, however, is only
feasible when graphs are co-registered, allowing coherent multiple
comparisons. The robust registration of vascular topologies therefore stands
as a key enabling technology for group-wise analyses. In this work, we present
an end-to-end vascular graph registration approach that aligns networks with
non-linear geometries and topological deformations, by introducing a novel
overconnected geodesic vascular graph formulation, and without enforcing any
anatomical prior constraint. The 3D elastic graph registration is then
performed with state-of-the-art graph matching methods used in computer vision.
Promising results of vascular matching are found using graphs from synthetic
and real angiographies. Observations and future designs are discussed towards
potential clinical applications.
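The paper uses state-of-the-art graph matching methods from computer vision; as a stand-in, the following sketch matches two equally sized vascular graphs with SciPy's FAQ quadratic-assignment solver. The adjacency weighting and the equal-size restriction are simplifying assumptions; the paper's overconnected geodesic formulation is precisely what relaxes such restrictions.

```python
import numpy as np
from scipy.optimize import quadratic_assignment

def match_graphs(adj_a, coords_a, adj_b, coords_b):
    """Match two equally sized vascular graphs by their connectivity.

    adj_*:    (n, n) weighted adjacency matrices (e.g. geodesic edge lengths).
    coords_*: (n, 3) node positions, used only to report residuals.
    """
    # FAQ approximately solves the quadratic assignment problem, i.e. it
    # searches for a node permutation that best aligns the two adjacency
    # structures.
    res = quadratic_assignment(adj_a, adj_b, method="faq")
    perm = res.col_ind                                # node i in A -> perm[i] in B
    residuals = np.linalg.norm(coords_a - coords_b[perm], axis=1)
    return perm, residuals
```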
Simultaneous synthesis of FLAIR and segmentation of white matter hypointensities from T1 MRIs
Segmenting vascular pathologies such as white matter lesions in brain
magnetic resonance images (MRIs) requires the acquisition of multiple
sequences, such as T1-weighted (T1-w), on which lesions appear hypointense,
and fluid-attenuated inversion recovery (FLAIR), on which lesions appear
hyperintense. However, most existing retrospective datasets do not include
FLAIR sequences. Existing missing-modality imputation methods separate the
process of imputation from the process of segmentation. In this paper, we
propose a method to link both modality imputation and segmentation using
convolutional neural networks. We show that by jointly optimizing the
imputation network and the segmentation network, the method not only produces
more realistic synthetic FLAIR images from T1-w images, but also improves the
segmentation of white matter hypointensities (WMH) from T1-w images only.
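The key point is that both losses update both networks. A minimal PyTorch sketch of such a joint step follows; the single-layer stand-in networks, the L1/BCE loss choice and the weighting lam are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Stand-in single-layer "networks"; any image-to-image CNNs would do here.
synth_net = nn.Conv3d(1, 1, kernel_size=3, padding=1)  # T1 -> synthetic FLAIR
seg_net = nn.Conv3d(2, 1, kernel_size=3, padding=1)    # (T1, FLAIR) -> WMH logits

opt = torch.optim.Adam(
    list(synth_net.parameters()) + list(seg_net.parameters()), lr=1e-4)
l1, bce = nn.L1Loss(), nn.BCEWithLogitsLoss()

def joint_step(t1, flair, wmh_mask, lam=1.0):
    """One training step in which both losses update both networks."""
    fake_flair = synth_net(t1)
    logits = seg_net(torch.cat([t1, fake_flair], dim=1))
    loss = l1(fake_flair, flair) + lam * bce(logits, wmh_mask)
    opt.zero_grad()
    loss.backward()   # segmentation gradients also flow back into synth_net
    opt.step()
    return loss.item()
```

Because the segmentation loss backpropagates through the synthesised FLAIR, the generator is pushed towards images that are useful for segmentation, not merely realistic.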
Robust training of recurrent neural networks to handle missing data for disease progression modeling
Disease progression modeling (DPM) using longitudinal data is a challenging
task in machine learning for healthcare that can provide clinicians with better
tools for diagnosis and monitoring of disease. Existing DPM algorithms neglect
temporal dependencies among measurements and make parametric assumptions about
biomarker trajectories. In addition, they do not model multiple biomarkers
jointly and need to align subjects' trajectories. In this paper, recurrent
neural networks (RNNs) are utilized to address these issues. However, in many
cases, longitudinal cohorts contain incomplete data, which hinders the
application of standard RNNs and requires a pre-processing step such as
imputation of the missing values. We therefore propose a generalized training
rule for the most widely used RNN architecture, long short-term memory (LSTM)
networks, that can handle missing values in both target and predictor
variables. This algorithm is applied to model the progression of
Alzheimer's disease (AD) using magnetic resonance imaging (MRI) biomarkers. The
results show that the proposed LSTM algorithm achieves a lower mean absolute
error for prediction of measurements across all considered MRI biomarkers
compared to using standard LSTM networks with data imputation or using a
regression-based DPM method. Moreover, applying linear discriminant analysis to
the biomarkers' values predicted by the proposed algorithm results in a larger
area under the receiver operating characteristic curve (AUC) for clinical
diagnosis of AD compared to the same alternatives, and the AUC is comparable to
state-of-the-art AUCs from a recent cross-sectional medical image
classification challenge. This paper shows that built-in handling of missing
values in LSTM network training paves the way for application of RNNs in
disease progression modeling.
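To make the idea of built-in missing-value handling concrete, here is a minimal PyTorch sketch of loss masking for an LSTM regressor; zero-filling the missing inputs is a simplification assumed here for brevity, whereas the paper's generalized training rule handles missing predictors inside the LSTM update itself.

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=6, hidden_size=32, batch_first=True)
head = nn.Linear(32, 6)   # predict the six biomarkers at the next visit

def masked_loss(x, x_mask, y, y_mask):
    """x, y: (B, T, 6) values with arbitrary fill where missing;
    x_mask, y_mask: (B, T, 6) bool, True where a value was observed."""
    x = torch.where(x_mask, x, torch.zeros_like(x))  # zero-fill missing inputs
    h, _ = lstm(x)
    pred = head(h)
    y = torch.where(y_mask, y, torch.zeros_like(y))  # make missing targets finite
    sq_err = (pred - y) ** 2
    # Average only over observed target entries; missing targets never
    # contribute gradients, so no target imputation is required.
    return (sq_err * y_mask).sum() / y_mask.sum().clamp(min=1)
```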
Training recurrent neural networks robust to incomplete data: application to Alzheimer's disease progression modeling
Disease progression modeling (DPM) using longitudinal data is a challenging
machine learning task. Existing DPM algorithms neglect temporal dependencies
among measurements, make parametric assumptions about biomarker trajectories,
do not model multiple biomarkers jointly, and need an alignment of subjects'
trajectories. In this paper, recurrent neural networks (RNNs) are utilized to
address these issues. However, in many cases, longitudinal cohorts contain
incomplete data, which hinders the application of standard RNNs and requires a
pre-processing step such as imputation of the missing values. Instead, we
propose a generalized training rule for the most widely used RNN architecture,
long short-term memory (LSTM) networks, that can handle both missing predictor
and target values. The proposed LSTM algorithm is applied to model the
progression of Alzheimer's disease (AD) using six volumetric magnetic resonance
imaging (MRI) biomarkers, i.e., volumes of ventricles, hippocampus, whole
brain, fusiform, middle temporal gyrus, and entorhinal cortex, and it is
compared to standard LSTM networks with data imputation and a parametric,
regression-based DPM method. The results show that the proposed algorithm
achieves a significantly lower mean absolute error (MAE) than the alternatives
with p < 0.05 using the Wilcoxon signed-rank test in predicting values of almost
all of the MRI biomarkers. Moreover, a linear discriminant analysis (LDA)
classifier applied to the predicted biomarker values produces a significantly
larger AUC of 0.90 vs. at most 0.84 with p < 0.001 using McNemar's test for
clinical diagnosis of AD. Inspection of MAE curves as a function of the amount
of missing data reveals that the proposed LSTM algorithm achieves the best
performance until more than 74% of the values are missing. Finally, it is
illustrated how the method can successfully be applied to data with varying
time intervals.
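For readers unfamiliar with the two tests cited above, a small evaluation sketch: paired per-subject errors go to the Wilcoxon signed-rank test, and paired per-subject diagnostic correctness goes to McNemar's test. The input arrays are assumptions; only the choice of tests comes from the abstract.

```python
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.contingency_tables import mcnemar

def compare_models(err_a, err_b, correct_a, correct_b):
    """err_*: paired per-subject absolute errors of two models;
    correct_*: paired boolean per-subject diagnostic correctness."""
    # Wilcoxon signed-rank test on the paired error differences.
    _, p_wilcoxon = wilcoxon(err_a, err_b)
    # McNemar's test on the 2x2 table of agreements/disagreements.
    table = np.array([
        [np.sum(correct_a & correct_b),  np.sum(correct_a & ~correct_b)],
        [np.sum(~correct_a & correct_b), np.sum(~correct_a & ~correct_b)],
    ])
    p_mcnemar = mcnemar(table, exact=True).pvalue
    return p_wilcoxon, p_mcnemar
```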
Solid NURBS Conforming Scaffolding for Isogeometric Analysis
This work introduces a scaffolding framework to compactly parametrise solid structures with conforming NURBS elements for isogeometric analysis. A novel formulation introduces a topological, geometrical and parametric subdivision of the space into a minimal plurality of conforming vectorial elements. These determine a multi-compartmental scaffolding for arbitrary branching patterns. A solid smoothing paradigm is devised for the conforming scaffolding, achieving geometrical and parametric continuity beyond positional continuity. Results are shown for synthetic shapes of varying complexity, for modular CAD geometries, for branching structures from tessellated meshes, and for organic biological structures from imaging data. Representative simulations demonstrate the validity of the introduced scaffolding framework, with scalable performance and groundbreaking applications for isogeometric analysis.
A Heteroscedastic Uncertainty Model for Decoupling Sources of MRI Image Quality
Quality control (QC) of medical images is essential to ensure that downstream
analyses such as segmentation can be performed successfully. Currently, QC is
predominantly performed visually at significant time and operator cost. We aim
to automate the process by formulating a probabilistic network that estimates
uncertainty through a heteroscedastic noise model, hence providing a proxy
measure of task-specific image quality that is learnt directly from the data.
By augmenting the training data with different types of simulated k-space
artefacts, we propose a novel cascading CNN architecture based on a
student-teacher framework to decouple sources of uncertainty related to
different k-space augmentations in an entirely self-supervised manner. This
enables us to predict separate uncertainty quantities for the different types
of data degradation. While the uncertainty measures reflect the presence and
severity of image artefacts, the network also provides the segmentation
predictions given the quality of the data. We show that models trained with
simulated artefacts provide informative measures of uncertainty on real-world
images, and we validate our uncertainty predictions on problematic images
identified by human raters.
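The heteroscedastic noise model at the core of this approach is the standard Gaussian negative log-likelihood with a learned per-voxel variance; a minimal PyTorch sketch follows. The cascading student-teacher decoupling is the paper's contribution and is not reproduced here.

```python
import torch

def heteroscedastic_nll(pred, target, log_var):
    """Gaussian heteroscedastic negative log-likelihood.

    pred, target, log_var: tensors of one shape; log_var is the network's
    predicted per-voxel log-variance.
    """
    # Large predicted variance down-weights the residual but is penalised
    # by the log-variance term, so the network must learn a calibrated
    # per-voxel noise estimate rather than inflating variance everywhere.
    return (0.5 * torch.exp(-log_var) * (pred - target) ** 2
            + 0.5 * log_var).mean()
```

The learned log_var map is what serves as the proxy measure of task-specific image quality described in the abstract.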