Generative Models for Preprocessing of Hospital Brain Scans
In this thesis, I present novel computational methods for processing routine clinical brain scans. Such scans were originally acquired for qualitative assessment by trained radiologists, and present a number of difficulties for computational models, such as those within common neuroimaging analysis software. The overarching objective of this work is to enable efficient and fully automated analysis of large neuroimaging datasets, of the type currently present in many hospitals worldwide. The methods presented are based on probabilistic, generative models of the observed imaging data, and therefore rely on informative priors and realistic forward models. The first part of the thesis presents a model for image quality improvement, whose key component is a novel prior for multimodal datasets. I demonstrate its effectiveness for super-resolving thick-sliced clinical MR scans and for denoising CT images and MR-based, multi-parametric mapping acquisitions. I then show how the same prior can be used for within-subject, intermodal image registration, for more robustly registering large numbers of clinical scans. The second part of the thesis focuses on improved, automatic segmentation and spatial normalisation of routine clinical brain scans. I propose two extensions to a widely used segmentation technique. First, a method for this model to handle missing data, which allows me to predict entirely missing modalities from one, or a few, MR contrasts. Second, a principled way of combining the strengths of probabilistic, generative models with the unprecedented discriminative capability of deep learning. By introducing a convolutional neural network as a Markov random field prior, I can model nonlinear class interactions and learn these using backpropagation. I show that this model is robust to sequence and scanner variability.
Finally, I show examples of fitting a population-level, generative model to various neuroimaging data, which can model, e.g., CT scans with haemorrhagic lesions.
Detection and classification of neurodegenerative diseases: a spatially informed Bayesian deep learning approach
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Neurodegenerative diseases comprise a group of chronic and irreversible conditions
characterized by the progressive degeneration of the structure and function
of the central nervous system. The detection and classification of patients according
to the underlying disease are crucial for developing oriented treatments and
enriching prognosis. In this context, Magnetic resonance imaging (MRI) data can
provide meaningful insights into neurodegeneration by detecting the physiological
manifestations in the brain caused by the disease processes. One field of extensive
clinical use of MRI is the accurate and automated classification of neurodegenerative
disorders. Most studies distinguish patients from healthy subjects or stages
within the same disease. Such distinction does not mirror clinical practice, as a
patient may not show all symptoms, especially if the disease is in an early stage,
or show, due to comorbidities, other symptoms as well. Likewise, automated
classifiers are only partly suited to medical diagnosis, since they can neither produce probabilistic
predictions nor account for uncertainty. Moreover, existing studies ignore the
spatial heterogeneity of the brain alterations caused by neurodegenerative processes.
The spatial configuration of the neuronal loss is a characteristic hallmark
for each disorder. To fill these gaps, this thesis aims to develop a classification
technique that incorporates uncertainty and spatial information for distinguishing
four neurodegenerative diseases, Alzheimer’s disease, mild cognitive impairment,
Parkinson’s disease, and multiple sclerosis, and healthy subjects. This technique
will produce automated, contingent, and accurate predictions to support clinical
diagnosis.
To quantify prediction uncertainty and improve classification accuracy, this study
introduces a Bayesian neural network with a spatially informed input. A convolutional
neural network (CNN) is developed to identify a neurodegenerative
condition based on T1-weighted MRI scans from patients and healthy controls.
Bayesian inference is incorporated into the CNN to measure uncertainty and produce
probabilistic predictions. Also, a spatially informed MRI scan is added to
the CNN to improve feature detection and classification accuracy.
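The abstract does not specify the inference scheme used to make the CNN Bayesian; a common choice for obtaining probabilistic predictions with uncertainty is Monte Carlo dropout, kept active at test time. The following is a minimal sketch under that assumption, with a single toy linear layer standing in for the network (all names and shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mc_dropout_predict(x, W, b, T=100, p_drop=0.5):
    """Monte Carlo dropout: keep dropout active at test time and
    average T stochastic forward passes to approximate the
    posterior predictive distribution."""
    probs = []
    for _ in range(T):
        mask = rng.random(W.shape) > p_drop          # randomly drop weights
        logits = x @ (W * mask) / (1.0 - p_drop) + b  # inverted-dropout scaling
        probs.append(softmax(logits))
    probs = np.stack(probs)                          # (T, n_classes)
    mean = probs.mean(axis=0)                        # predictive probability
    entropy = -(mean * np.log(mean + 1e-12)).sum()   # predictive uncertainty
    return mean, entropy
```

The predictive entropy gives a scalar uncertainty that can be thresholded to flag cases for referral to a clinician.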
The Spatially informed Bayesian Neural Network (SBNN) proposed in this work
demonstrates that classification accuracy can be increased by up to 25% by including
the spatially informed MRI scan. Furthermore, the SBNN provides robust
probabilistic diagnoses that resemble clinical decision-making and account for
atypical, numerous, and early presentations of neurodegenerative disorders.
Deep Learning Approaches for Data Augmentation in Medical Imaging: A Review
Deep learning has become a popular tool for medical image analysis, but the
limited availability of training data remains a major challenge, particularly
in the medical field where data acquisition can be costly and subject to
privacy regulations. Data augmentation techniques offer a solution by
artificially increasing the number of training samples, but these techniques
often produce limited and unconvincing results. To address this issue, a
growing number of studies have proposed the use of deep generative models to
generate more realistic and diverse data that conform to the true distribution
of the data. In this review, we focus on three types of deep generative models
for medical image augmentation: variational autoencoders, generative
adversarial networks, and diffusion models. We provide an overview of the
current state of the art in each of these models and discuss their potential
for use in different downstream tasks in medical imaging, including
classification, segmentation, and cross-modal translation. We also evaluate the
strengths and limitations of each model and suggest directions for future
research in this field. Our goal is to provide a comprehensive review of the
use of deep generative models for medical image augmentation and to highlight
the potential of these models for improving the performance of deep learning
algorithms in medical image analysis.
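To illustrate the generative-augmentation idea in miniature: fit a latent-variable model on real samples, then decode draws from the latent prior into synthetic ones. The sketch below uses a linear model (PCA) as a crude stand-in for a VAE's encoder/decoder pair; it is a toy illustration, not any model from the reviewed literature:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear_generator(X, k=2):
    """Fit a linear latent-variable model via PCA: the principal axes
    play the role of a decoder mapping latents back to image space."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k]                       # "decoder" weights (k, n_pixels)
    z = Xc @ W.T                     # latent codes of the training data
    return mu, W, z.std(axis=0)

def augment(mu, W, z_std, n):
    """Sample latents from the fitted Gaussian prior and decode
    n synthetic samples."""
    z = rng.normal(0.0, z_std, size=(n, len(z_std)))
    return z @ W + mu
```

A deep generative model replaces the linear decoder with a learned nonlinear one, which is what lets VAEs, GANs, and diffusion models produce realistic, diverse samples.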
When the machine does not know: measuring uncertainty in deep learning models of medical images
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
Recently, Deep learning (DL), which involves powerful black-box predictors, has outperformed
human experts in several medical diagnostic problems. However, these methods focus
exclusively on improving the accuracy of point predictions without assessing their outputs’
quality and ignore the asymmetric cost involved in different types of misclassification errors.
Neural networks also do not deliver confidence in predictions and suffer from over- and
under-confidence, i.e. they are not well calibrated. Knowing how much confidence there is in a
prediction is essential for gaining clinicians’ trust in the technology.
Calibrated uncertainty quantification is a challenging problem as no ground truth is
available. To address this, we make two observations: (i) cost-sensitive deep neural networks
with DropWeights better quantify calibrated predictive uncertainty, and (ii) estimating
uncertainty alongside point predictions in deep ensembles of Bayesian neural networks with
DropWeights can lead to more informed decisions and improve prediction quality.
This dissertation focuses on quantifying uncertainty using concepts from cost-sensitive
neural networks, calibration of confidence, and the DropWeights ensemble method. First, we
show how to improve predictive uncertainty with deep ensembles of neural networks with DropWeights,
which learn an approximate distribution over their weights, in medical image segmentation
and its application in active learning. Second, we use the Jackknife resampling technique
to correct bias in quantified uncertainty in image classification and propose metrics to measure
uncertainty performance. The third part of the thesis is motivated by the discrepancy
between the model predictive error and the objective in quantified uncertainty when costs for
misclassification errors or unbalanced datasets are asymmetric. We develop cost-sensitive
modifications of the neural networks in disease detection and propose metrics to measure the
quality of quantified uncertainty. Finally, we leverage an adaptive binning strategy to measure
uncertainty calibration error that directly corresponds to estimated uncertainty performance
and address problematic evaluation methods.
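A common realisation of an adaptive binning strategy for calibration error is equal-mass binning: every bin holds the same number of predictions, so sparse high-confidence regions are not under-weighted. This sketch shows that construction; it is not necessarily the exact metric proposed in the thesis:

```python
import numpy as np

def adaptive_ece(confidences, correct, n_bins=10):
    """Expected calibration error with adaptive (equal-mass) bins:
    sort predictions by confidence, split into bins of equal size,
    and accumulate the weighted |confidence - accuracy| gap per bin."""
    order = np.argsort(confidences)
    conf = np.asarray(confidences, float)[order]
    corr = np.asarray(correct, float)[order]
    ece = 0.0
    for idx in np.array_split(np.arange(len(conf)), n_bins):
        if len(idx) == 0:
            continue
        gap = abs(conf[idx].mean() - corr[idx].mean())
        ece += (len(idx) / len(conf)) * gap
    return ece
```

Compared with fixed-width bins, this keeps each bin's gap estimate equally reliable, which matters when most predictions cluster near confidence 1.0.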
We evaluate the effectiveness of the tools on nuclei image segmentation, multi-class
brain MRI image classification, multi-level cell-type-specific protein expression prediction in
ImmunoHistoChemistry (IHC) images, and cost-sensitive classification for Covid-19 detection
from X-ray and CT image datasets. Our approach is thoroughly validated by measuring the
quality of uncertainty. It produces equally good or better results and paves the way for
future work that addresses the practical problems at the intersection of deep learning and Bayesian
decision theory.
In conclusion, our study highlights the opportunities and challenges of applying
estimated uncertainty, representing the confidence of the model’s predictions, in deep learning models of medical images; the uncertainty quality metrics show a significant improvement
when using deep ensembles of Bayesian neural networks with DropWeights.
Learning the dynamics and time-recursive boundary detection of deformable objects
We propose a principled framework for recursively segmenting deformable objects across a sequence
of frames. We demonstrate the usefulness of this method on left ventricular segmentation across a cardiac
cycle. The approach involves a technique for learning the system dynamics together with methods of
particle-based smoothing as well as non-parametric belief propagation on a loopy graphical model capturing
the temporal periodicity of the heart. The dynamic system state is a low-dimensional representation
of the boundary, and the boundary estimation involves incorporating curve evolution into recursive state
estimation. By formulating the problem as one of state estimation, the segmentation at each particular
time is based not only on the data observed at that instant, but also on predictions based on past and future
boundary estimates. Although the paper focuses on left ventricle segmentation, the method generalizes
to temporally segmenting any deformable object.
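The recursive state-estimation idea can be illustrated with a generic bootstrap particle filter tracking a one-dimensional boundary descriptor over a periodic cycle. The dynamics, noise levels, and sinusoidal "cardiac" target below are illustrative assumptions, not the learned dynamics of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=500,
                    proc_std=0.05, obs_std=0.2):
    """Bootstrap particle filter: propagate samples through assumed
    periodic dynamics, weight them by the likelihood of each new
    observation, estimate the state, and resample."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for t, y in enumerate(observations):
        # assumed dynamics: relax toward a sinusoidal periodic cycle
        target = np.sin(2 * np.pi * t / len(observations))
        particles = (0.8 * particles + 0.2 * target
                     + rng.normal(0.0, proc_std, n_particles))
        # Gaussian observation likelihood as importance weight
        w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))
        # multinomial resampling to avoid weight degeneracy
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return estimates
```

In the paper's setting the scalar state would be replaced by a low-dimensional boundary representation, and smoothing would also use future frames rather than filtering on past ones alone.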
A comparative evaluation for liver segmentation from SPIR images and a novel level set method using a signed pressure force function
Thesis (Doctoral)--Izmir Institute of Technology, Electronics and Communication Engineering, Izmir, 2013. Includes bibliographical references (leaves: 118-135). Text in English; abstract in Turkish and English. xv, 145 leaves.
Developing a robust method for liver segmentation from magnetic resonance images is a challenging task due to similar intensity values between adjacent organs, the geometrically complex structure of the liver, and the injection of contrast media, which causes all tissues to have different gray-level values. Several pulsation and motion artifacts, as well as partial volume effects, also increase the difficulty of automatic liver segmentation from magnetic resonance images. In this thesis, we present an overview of liver segmentation methods for magnetic resonance images and show comparative results of seven different liver segmentation approaches chosen from deterministic (K-means based), probabilistic (Gaussian model based), supervised neural network (multilayer perceptron based) and deformable model based (level set) segmentation methods. The results of qualitative and quantitative analysis using sensitivity, specificity and accuracy metrics show that the multilayer perceptron based approach and a level set based approach which uses a distance regularization term and a signed pressure force function are reasonable methods for liver segmentation from spectral pre-saturation inversion recovery (SPIR) images. However, the multilayer perceptron based segmentation method requires a higher computational cost. The distance regularization term based automatic level set method is very sensitive to the chosen variance of the Gaussian function.
Our proposed level set based method, which uses a novel signed pressure force function that can control the direction and velocity of the evolving active contour, is faster and solves several problems of the other applied methods, such as sensitivity to the initial contour or to the variance parameter of the Gaussian kernel in edge-stopping functions, without using any regularization term.
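For context, a widely used signed pressure force formulation (in the style of Zhang et al.) replaces the edge-stopping function with a region-based term built from the mean intensities inside and outside the contour; its sign steers the contour's direction and its magnitude modulates the speed. A minimal sketch of that classic SPF follows (the thesis's own novel variant is not reproduced here):

```python
import numpy as np

def spf(image, phi):
    """Classic signed pressure force: positive where the image is
    brighter than the average of the two region means (contour
    expands), negative where it is darker (contour shrinks),
    normalised to [-1, 1]."""
    inside = image[phi > 0]
    outside = image[phi <= 0]
    c1 = inside.mean() if inside.size else 0.0
    c2 = outside.mean() if outside.size else 0.0
    force = image - (c1 + c2) / 2.0
    return force / (np.abs(force).max() + 1e-12)
```

In a full level set scheme this force multiplies the curve-evolution speed term at each iteration, with `phi` the evolving level set function.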
Data synthesis and adversarial networks: A review and meta-analysis in cancer imaging
Despite technological and medical advances, the detection, interpretation, and treatment of cancer based on imaging data continue to pose significant challenges. These include inter-observer variability, class imbalance, dataset shifts, inter- and intra-tumour heterogeneity, malignancy determination, and treatment effect uncertainty. Given the recent advancements in image synthesis, Generative Adversarial Networks (GANs), and adversarial training, we assess the potential of these technologies to address a number of key challenges of cancer imaging. We categorise these challenges into (a) data scarcity and imbalance, (b) data access and privacy, (c) data annotation and segmentation, (d) cancer detection and diagnosis, and (e) tumour profiling, treatment planning and monitoring. Based on our analysis of 164 publications that apply adversarial training techniques in the context of cancer imaging, we highlight multiple underexplored solutions with research potential. We further contribute the Synthesis Study Trustworthiness Test (SynTRUST), a meta-analysis framework for assessing the validation rigour of medical image synthesis studies. SynTRUST is based on 26 concrete measures of thoroughness, reproducibility, usefulness, scalability, and tenability. Based on SynTRUST, we analyse 16 of the most promising cancer imaging challenge solutions and observe a high validation rigour in general, but also identify several desirable improvements. With this work, we strive to bridge the gap between the needs of the clinical cancer imaging community and the current and prospective research on data synthesis and adversarial networks in the artificial intelligence community.
A Probabilistic Approach To Non-Rigid Medical Image Registration
Non-rigid image registration is an important tool for analysing morphometric differences in subjects with Alzheimer's disease from structural magnetic resonance images of the brain. This thesis describes a novel probabilistic approach to non-rigid registration of medical images, and explores the benefits of its use in this area of neuroimaging. Many image registration approaches have been developed for neuroimaging. The vast majority suffer from two limitations: firstly, the trade-off between image fidelity and regularisation requires selection; secondly, only a point-estimate of the mapping between images is inferred, overlooking the presence of uncertainty in the estimation. This thesis introduces a novel probabilistic non-rigid registration model and inference scheme. This framework allows the inference of the parameters that control the level of regularisation and data fidelity in a data-driven fashion. To allow greater flexibility, this model is extended to allow the level of data fidelity to vary across space. A benefit of this approach is that the registration can adapt to anatomical variability and other image acquisition differences. A further advantage of the proposed registration framework is that it provides an estimate of the distribution of probable transformations. Additional novel contributions of this thesis include two proposals for exploiting the estimated registration uncertainty. The first of these estimates a local image smoothing filter, which is based on the registration uncertainty. The second approach incorporates the distribution of transformations into an ensemble learning scheme for statistical prediction. These techniques are integrated into standard frameworks for morphometric analysis, and are demonstrated to improve the ability to distinguish subjects with Alzheimer's disease from healthy controls.
Computer-Aided Assessment of Tuberculosis with Radiological Imaging: From rule-based methods to Deep Learning
International Mention in the doctoral degree.
Tuberculosis (TB) is an infectious disease caused by Mycobacterium tuberculosis (Mtb.)
that produces pulmonary damage due to its airborne nature. This facilitates the fast
spread of the disease, which, according to the World Health Organization (WHO), in 2021 caused
1.2 million deaths and 9.9 million new cases.
Traditionally, TB has been considered a binary disease (latent/active) due to the limited
specificity of the traditional diagnostic tests. Such a simple model causes difficulties in the
longitudinal assessment of pulmonary affectation needed for the development of novel drugs
and to control the spread of the disease.
Fortunately, X-Ray Computed Tomography (CT) images enable capturing specific manifestations
of TB that are undetectable using regular diagnostic tests, which suffer from
limited specificity. In conventional workflows, expert radiologists inspect the CT images.
However, this procedure is unfeasible for processing the thousands of volumetric images belonging
to the different TB animal models and humans required for a suitable (pre-)clinical trial.
To achieve suitable results, automation of the different image analysis processes is a
must to quantify TB. It is also advisable to measure the uncertainty associated with this
process and model causal relationships between the specific mechanisms that characterize
each animal model and its level of damage. Thus, in this thesis, we introduce a set of novel
methods based on state-of-the-art Artificial Intelligence (AI) and Computer Vision (CV).
Initially, we present an algorithm to assess Pathological Lung Segmentation (PLS), employing
an unsupervised rule-based model; such segmentation is traditionally considered a necessary
step before biomarker extraction. This procedure allows robust segmentation in an Mtb. infection
model (Dice Similarity Coefficient, DSC, 94% ± 4%; Hausdorff Distance, HD,
8.64 mm ± 7.36 mm) of damaged lungs with lesions attached to the parenchyma and affected
by respiratory movement artefacts.
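The two reported segmentation metrics are standard and can be computed as follows; a minimal numpy sketch for binary masks and boundary point sets:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets (in mm if
    the coordinates are in mm): the largest nearest-neighbour gap
    in either direction."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Dice rewards volume overlap, while the Hausdorff distance penalises the single worst boundary deviation, which is why the two are usually reported together.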
Next, a Gaussian Mixture Model fitted by an Expectation-Maximization (EM) algorithm
is employed to automatically quantify the burden of Mtb. using biomarkers extracted from the
segmented CT images. This approach achieves a strong correlation (R² ≈ 0.8) between our
automatic method and manual extraction.
Consequently, Chapter 3 introduces a model to automate the identification of TB lesions
and the characterization of disease progression. To this aim, the method employs the
Statistical Region Merging algorithm to detect lesions subsequently characterized by texture
features that feed a Random Forest (RF) estimator. The proposed procedure enables the
selection of a simple but powerful model able to classify abnormal tissue.
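The EM-fitted Gaussian mixture used above for burden quantification, reduced to one dimension and two components for illustration (initialisation and iteration count are illustrative, not those of the thesis), can be sketched as:

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture: the E-step computes
    per-sample responsibilities, the M-step re-estimates the mixture
    weights, means, and variances from them."""
    mu = np.array([x.min(), x.max()], dtype=float)   # spread-out init
    var = np.full(2, x.var() + 1e-6)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        pdf = (np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
               / np.sqrt(2 * np.pi * var))
        r = pi * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances
        n_k = r.sum(axis=0)
        pi = n_k / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k + 1e-6
    return pi, mu, var
```

Applied to intensity-derived biomarkers, the fitted component parameters separate healthy from affected tissue distributions, from which a burden score can be derived.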
The latest works base their methodology on Deep Learning (DL). Chapter 4 extends
the classification of TB lesions. Namely, we introduce a computational model to infer
TB manifestations present in each lung lobe of CT scans by employing the associated
radiologist reports as ground truth. We do so instead of using the classical manually delimited
segmentation masks. The model adapts the three-dimensional architecture V-Net to a multitask
classification context in which the loss function is weighted by homoscedastic uncertainty.
Besides, the method employs Self-Normalizing Neural Networks (SNNs) for regularization.
Our results are promising, with a Root Mean Square Error of 1.14 in the number of nodules
and F1-scores above 0.85 for the most prevalent TB lesions (i.e., conglomerations, cavitations,
consolidations, tree-in-bud patterns) when considering the whole lung.
In Chapter 5, we present a DL model capable of extracting disentangled information from
images of different animal models, as well as information about the mechanisms that generate
the CT volumes. The method provides the segmentation mask of axial slices from three
animal models of different species employing a single trained architecture. It also infers the
level of TB damage and generates counterfactual images. With this methodology, we
thus offer an alternative that promotes generalization and explainable AI models.
To sum up, the thesis presents a collection of valuable tools to automate the quantification
of pathological lungs and, moreover, extends the methodology to provide more explainable
results, which are vital for drug development purposes. Chapter 6 elaborates on these
conclusions.
Doctoral Programme in Multimedia and Communications, Universidad Carlos III de Madrid and Universidad Rey Juan Carlos. President: María Jesús Ledesma Carbayo; Secretary: David Expósito Singh; Vocal: Clarisa Sánchez Gutiérre