Uncertainty-aware multi-resolution whole-body MR to CT synthesis
Synthesising computed tomography (CT) images from magnetic resonance images (MRI) plays an important role in medical image analysis, for both quantification and diagnostic purposes. Especially for brain applications, convolutional neural networks (CNNs) have proven to be a valuable tool for this image translation task, achieving state-of-the-art results. Whole-body image synthesis, however, remains largely uncharted territory and bears many challenges, including a limited field of view, large image size, complex spatial context, and anatomical differences between acquisitions taken at different times. We propose a novel multi-resolution cascade 3D network for end-to-end whole-body MR to CT synthesis, and show that our method outperforms popular CNNs such as the U-Net in 2D and 3D. We further propose to include uncertainty in our network as a measure of safety and to account for intrinsic noise and misalignment in the data.
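The abstract does not spell out how the uncertainty is modelled. One common way to let a synthesis CNN express per-voxel uncertainty, shown here purely as a sketch (the head layout and the Gaussian likelihood are assumptions, not the authors' published design), is a heteroscedastic output head trained with a Gaussian negative log-likelihood:

```python
import torch
import torch.nn as nn

class HeteroscedasticHead(nn.Module):
    """Predicts a synthetic CT value and a log-variance per voxel.
    Predicting log-variance keeps the variance positive and the
    optimisation stable. (Illustrative sketch, not the paper's network.)"""
    def __init__(self, in_channels: int):
        super().__init__()
        self.mean = nn.Conv3d(in_channels, 1, kernel_size=1)
        self.log_var = nn.Conv3d(in_channels, 1, kernel_size=1)

    def forward(self, features):
        return self.mean(features), self.log_var(features)

def gaussian_nll(mean, log_var, target):
    """Per-voxel Gaussian negative log-likelihood (constant dropped).
    Voxels the network deems noisy or misaligned receive a large
    predicted variance, which down-weights their residual instead of
    forcing the network to fit them."""
    return (0.5 * torch.exp(-log_var) * (target - mean) ** 2
            + 0.5 * log_var).mean()
```

Trained this way, the variance map doubles as the kind of safety measure the abstract mentions: regions with high predicted variance are exactly the ones a clinician should not trust blindly.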
Uncertainty quantification in medical image synthesis
Machine learning approaches to medical image synthesis have shown outstanding performance, but often do not convey uncertainty information. In this chapter, we survey uncertainty quantification methods in medical image synthesis and advocate the use of uncertainty for improving clinicians' trust in machine learning solutions. First, we describe basic concepts in uncertainty quantification and discuss their potential benefits in downstream applications. We then review computational strategies that facilitate inference and identify the main technical and clinical challenges. We provide the first comprehensive review of how to quantify, communicate, and use uncertainty in medical image synthesis applications.
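One computational strategy that recurs across this literature (mentioned here as a generic illustration, not as the chapter's specific recommendation) is Monte Carlo dropout: keep dropout active at test time, draw several stochastic forward passes, and read the sample mean as the prediction and the sample spread as uncertainty. A minimal sketch:

```python
import torch

def mc_dropout_predict(model, x, n_samples: int = 20):
    """Monte Carlo dropout: run the model n_samples times with dropout
    still active and summarise the draws. A more careful version would
    re-enable only the dropout layers, since .train() also switches
    batch-norm layers to batch statistics."""
    model.train()  # keeps nn.Dropout stochastic at inference time
    with torch.no_grad():
        draws = torch.stack([model(x) for _ in range(n_samples)])
    return draws.mean(dim=0), draws.var(dim=0)
```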
Exploring variability in medical imaging
Although recent successes of deep learning and novel machine learning techniques have improved the performance of classification and (anomaly) detection in computer vision problems, applying these methods in medical imaging pipelines remains a very challenging task. One of the main reasons for this is the amount of variability that is inherent in human anatomy and subsequently reflected in medical images. This fundamental factor impacts most stages of modern medical image processing pipelines.
The variability of human anatomy makes it virtually impossible to build large labelled and annotated datasets for each disease for fully supervised machine learning. An efficient way to cope with this is to learn only from normal samples, since such data are much easier to collect. A case study of such an automatic anomaly detection system based on normative learning is presented in this work: a framework for detecting fetal cardiac anomalies during ultrasound screening using generative models trained only on normal/healthy subjects.
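The abstract leaves the generative model unspecified; the sketch below illustrates the normative-learning idea with a small convolutional autoencoder and a reconstruction-error anomaly score (all names are illustrative, not the thesis implementation):

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Autoencoder trained only on healthy ultrasound frames, so it
    learns to reconstruct normal anatomy well and abnormal anatomy
    poorly."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score(model, x):
    """Mean squared reconstruction error per image; high values suggest
    the input deviates from the learned notion of 'normal'."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=(1, 2, 3))
```

A decision threshold would then be calibrated on held-out healthy scans, for example at a high percentile of their scores.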
However, despite significant improvements in automatic abnormality detection systems, clinical routine continues to rely exclusively on overburdened medical experts to diagnose and localise abnormalities. Integrating human expert knowledge into the medical image processing pipeline entails uncertainty that is mainly correlated with inter-observer variability. From the perspective of building an automated medical imaging system, it remains an open question to what extent this kind of variability and the resulting uncertainty are introduced during the training of a model and how they affect the final task performance. It is therefore very important to explore the effect of inter-observer variability both on the reliable estimation of a model's uncertainty and on the model's performance in a specific machine learning task. A thorough investigation of this issue is presented in this work by leveraging automated estimates of machine learning model uncertainty, inter-observer variability, and segmentation task performance on lung CT scans.
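Two of the quantities this investigation relates can be made concrete. The sketch below (an illustration, not the thesis code) computes inter-observer variability as one minus the mean pairwise Dice overlap between annotators, and a voxelwise predictive entropy from stochastic model outputs:

```python
import numpy as np
from itertools import combinations

def dice(a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> float:
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)

def inter_observer_variability(masks) -> float:
    """1 minus the mean pairwise Dice over all annotator pairs:
    0 means perfect agreement, values near 1 mean little overlap."""
    scores = [dice(a, b) for a, b in combinations(masks, 2)]
    return 1.0 - float(np.mean(scores))

def predictive_entropy(prob_samples: np.ndarray) -> np.ndarray:
    """Voxelwise binary entropy of the mean foreground probability over
    stochastic forward passes (prob_samples: samples x H x W)."""
    p = prob_samples.mean(axis=0).clip(1e-8, 1 - 1e-8)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))
```

Comparing maps like these against task performance is one way to probe how much annotation disagreement leaks into a model's uncertainty estimates.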
Finally, an overview of existing anomaly detection methods in medical imaging is presented. This state-of-the-art survey includes both conventional pattern recognition methods and deep learning based methods, and is one of the first literature surveys in this specific research area.
A survey of uncertainty in deep neural networks
Over the last decade, neural networks have reached almost every field of science and become a crucial part of various real-world applications. With this increasing spread, confidence in neural network predictions has become more and more important. However, basic neural networks do not deliver certainty estimates, or suffer from over- or under-confidence, i.e. they are badly calibrated. To overcome this, many researchers have been working on understanding and quantifying uncertainty in a neural network's prediction. As a result, different types and sources of uncertainty have been identified, and various approaches to measure and quantify uncertainty in neural networks have been proposed. This work gives a comprehensive overview of uncertainty estimation in neural networks, reviews recent advances in the field, highlights current challenges, and identifies potential research opportunities. It is intended to give anyone interested in uncertainty estimation in neural networks a broad overview and introduction, without presupposing prior knowledge of the field. To that end, a comprehensive introduction to the most crucial sources of uncertainty is given, along with their separation into reducible model uncertainty and irreducible data uncertainty. The modeling of these uncertainties based on deterministic neural networks, Bayesian neural networks (BNNs), ensembles of neural networks, and test-time data augmentation approaches is introduced, and the different branches of these fields as well as the latest developments are discussed. For practical application, we discuss different measures of uncertainty and approaches for calibrating neural networks, and give an overview of existing baselines and available implementations. Examples from the wide spectrum of challenges in medical image analysis, robotics, and earth observation give an idea of the needs and challenges regarding uncertainty in practical applications of neural networks. Additionally, the practical limitations of uncertainty quantification methods in mission- and safety-critical real-world applications are discussed, and an outlook on the next steps towards broader usage of such methods is given.
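For example, the separation into reducible and irreducible uncertainty has a standard ensemble formulation: the entropy of the averaged prediction (total uncertainty) splits into the average per-member entropy (data uncertainty) plus the mutual information between prediction and ensemble member (model uncertainty). A minimal sketch, illustrative rather than taken from the survey:

```python
import numpy as np

def decompose_uncertainty(probs: np.ndarray):
    """probs: (n_members, n_classes) softmax outputs of an ensemble
    for one input. Returns (total, aleatoric, epistemic), where
    total = H(mean_m p_m), aleatoric = mean_m H(p_m), and
    epistemic = total - aleatoric (the mutual information)."""
    probs = probs.clip(1e-12, 1.0)
    entropy = lambda p: -(p * np.log(p)).sum(axis=-1)
    total = float(entropy(probs.mean(axis=0)))
    aleatoric = float(entropy(probs).mean())
    return total, aleatoric, total - aleatoric
```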
Online Anomaly Detection for Time Series. Towards Incorporating Feature Extraction, Model Uncertainty and Concept Drift Adaptation for Improving Anomaly Detection
Time series anomaly detection receives increasing research interest given the growing number of data-rich application domains. Recent additions to anomaly detection methods in the research literature include deep learning algorithms, whose strength in sequence analysis enables them to learn hierarchical discriminative features and the temporal structure of a time series. However, their performance is affected by the speed at which the time series arrives, the use of a fixed threshold, and the assumption of a Gaussian distribution on the prediction error to identify anomalous values. An exact parametric distribution is often not directly relevant in many applications, and it is often difficult to select an appropriate threshold that will differentiate anomalies from noise.
Thus, implementations need a prediction interval (PI) that quantifies the level of uncertainty associated with the deep neural network (DNN) point forecasts, which helps in making better-informed decisions and mitigates false anomaly alerts. To achieve this, a new anomaly detection method is proposed that computes the uncertainty in estimates using quantile regression and uses the resulting quantile interval to identify anomalies.
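The abstract names the ingredients without giving code; a minimal sketch of the two pieces, the pinball loss that trains a quantile forecaster and the interval test that replaces a fixed Gaussian threshold (function names are illustrative, not the thesis implementation):

```python
import torch

def pinball_loss(pred, target, q: float):
    """Quantile (pinball) loss: minimising it drives `pred` towards
    the q-th conditional quantile of `target`."""
    err = target - pred
    return torch.maximum(q * err, (q - 1) * err).mean()

def flag_anomalies(lower, upper, observed):
    """A point is anomalous when it falls outside the prediction
    interval spanned by a low and a high quantile forecast
    (e.g. q = 0.05 and q = 0.95)."""
    return (observed < lower) | (observed > upper)
```

Two output heads (or two models) trained with q = 0.05 and q = 0.95 then bound a 90% prediction interval without assuming any parametric error distribution.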
Similarly, to handle the speed at which the data arrives, an online anomaly detection method is proposed in which a model is trained incrementally so that it adapts to concept drift and improves prediction. This is implemented using a window-based strategy, in which a time series is broken into sliding windows of sub-sequences used as input to the model. To adapt to concept drift, the model is updated when changes occur in the newly arriving instances.
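A schematic of the window-based strategy, assuming one-step-ahead forecasting (the window width, optimizer, and update cadence are illustrative choices, not the thesis configuration):

```python
import torch

def sliding_windows(series: torch.Tensor, width: int):
    """Split a 1-D series into overlapping sub-sequences of length
    `width`, each paired with the next value as the forecast target."""
    xs = series.unfold(0, width, 1)[:-1]  # (n_windows, width)
    ys = series[width:]                   # (n_windows,)
    return xs, ys

def online_step(model, optimizer, loss_fn, window, target):
    """One incremental gradient update on the newest window, so the
    model keeps tracking the current data distribution."""
    optimizer.zero_grad()
    loss = loss_fn(model(window), target)
    loss.backward()
    optimizer.step()
    return loss.item()
```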
This is achieved using an anomaly likelihood, computed with the Q-function, that defines the abnormal degree of the current data point relative to the previous data points. Specifically, when concept drift occurs, the proposed method will first mark the current data point as anomalous. However, when the abnormal behavior continues for a longer period of time, the likelihood-based abnormal degree of the current data point becomes low compared with the previous data points. The current data point is then added to the previous data to retrain the model, which allows the model to learn the new characteristics of the data, adapt to the concept changes, and thereby redefine the abnormal behavior.
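A minimal sketch of an anomaly likelihood in this spirit, using the Gaussian tail probability (Q-function) over a short history of scores (the exact statistic used in the thesis may differ):

```python
import math

def q_function(z: float) -> float:
    """Gaussian tail probability Q(z) = P(Z > z), via the
    complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def anomaly_likelihood(recent_scores, current_score):
    """Abnormal degree of the newest point relative to the recent past.
    A sustained shift inflates the running mean and variance, so a
    long-lasting anomaly gradually stops looking anomalous; that is
    the signal to retrain on the new data rather than keep alerting."""
    mu = sum(recent_scores) / len(recent_scores)
    var = sum((s - mu) ** 2 for s in recent_scores) / len(recent_scores)
    sigma = max(math.sqrt(var), 1e-8)
    return 1.0 - q_function((current_score - mu) / sigma)
```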
The proposed method also incorporates feature extraction to capture structural patterns in the time series. This is especially significant for multivariate time-series data, where the complex temporal dependencies that may exist between variables need to be captured. In summary, this thesis contributes to the theory, design, and development of algorithms and models for the detection of anomalies in both static and evolving time series data.
Several experiments were conducted, and the results indicate the significance of this research for offline and online anomaly detection in both static and evolving time-series data. In chapter 3, the newly proposed Deep Quantile Regression Anomaly Detection (DQR-AD) method is evaluated and compared with six other prediction-based anomaly detection methods that assume a normal distribution of the prediction or reconstruction error when identifying anomalies. Results in the first part of the experiment indicate that DQR-AD obtained relatively better precision than all other methods, demonstrating its capability to detect a larger number of anomalous points with low false positive rates. The results also show that DQR-AD is approximately 2-3 times better than DeepAnT, which in turn performs better than all the remaining methods across all domains in the NAB dataset. In the second part of the experiment, the SMAP dataset with 4-dimensional features is used to demonstrate the method on multivariate time-series data. Experimental results show that DQR-AD has 10% better performance than AE on three datasets (SMAP1, SMAP3, and SMAP5) and equal performance on the remaining two. In chapter 5, experiments were conducted at two levels, on the basis of false-positive rate and concept drift adaptation. At the first level, the results show that online DQR-AD is 18% better than both DQR-AD and VAE-LSTM on five NAB datasets. Similarly, results at the second level show that online DQR-AD performs better than five counterpart methods by a margin of roughly 10% on six of the seven NAB datasets. These results demonstrate how the concept drift adaptation strategies adopted in the proposed online DQR-AD improve the performance of anomaly detection in time series.