51 research outputs found
Exploring variability in medical imaging
Although recent successes of deep learning and novel machine learning techniques have improved the
performance of classification and (anomaly) detection in computer vision problems, applying these
methods in the medical imaging pipeline remains very challenging. One of the main reasons for this
is the amount of variability that is encountered and encapsulated in human anatomy and subsequently
reflected in medical images. This fundamental factor impacts most stages of modern medical image
processing pipelines.
The variability of human anatomy makes it virtually impossible to build large datasets for each disease
with the labels and annotations required for fully supervised machine learning. An efficient way to cope
with this is to learn only from normal samples, since such data are much easier to collect. A case study
of such an automatic anomaly detection system based on normative learning is presented in this work. We
present a framework for detecting fetal cardiac anomalies during ultrasound screening using generative
models trained only on normal/healthy subjects.
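The normative-learning idea can be sketched with a toy stand-in for the generative model: anything that cannot be reconstructed well from a subspace learned on normal samples alone is scored as anomalous, with the detection threshold also set from normal data only. This is an illustrative sketch (a linear projection standing in for the deep generative model on ultrasound images), not the framework itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a generative model trained only on normal data: we
# "reconstruct" each sample by projecting it onto the top principal
# components of the normal training set (a linear autoencoder).
def fit_normative_model(normal_data, n_components=2):
    mean = normal_data.mean(axis=0)
    _, _, vt = np.linalg.svd(normal_data - mean, full_matrices=False)
    return mean, vt[:n_components]  # axes of normal anatomical variation

def anomaly_score(x, mean, components):
    # Reconstruction error: distance between a sample and its projection
    # onto the learned "normal" subspace.
    recon = mean + (x - mean) @ components.T @ components
    return np.linalg.norm(x - recon, axis=-1)

# Synthetic "healthy" samples living near a 2-D plane in a 10-D feature space.
normal = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 10))
mean, comps = fit_normative_model(normal, n_components=2)

# The detection threshold is set from scores on normal data alone, e.g. the
# 99th percentile -- no anomalous labels are ever needed.
threshold = np.percentile(anomaly_score(normal, mean, comps), 99)

abnormal = normal[:5] + rng.normal(scale=5.0, size=(5, 10))  # perturbed cases
flagged = anomaly_score(abnormal, mean, comps) > threshold
print(flagged.all())  # → True
```

The key property carried over from the abstract is that both the model and its operating threshold come entirely from healthy subjects.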
However, despite the significant improvement in automatic abnormality detection systems, clinical
routine continues to rely exclusively on overburdened medical experts to diagnose and localise
abnormalities. Integrating human expert knowledge into the medical imaging processing pipeline entails
uncertainty, which is mainly correlated with inter-observer variability. From the perspective of building
an automated medical imaging system, it remains an open question to what extent this kind of variability
and the resulting uncertainty are introduced during the training of a model and how they affect the
final performance of the task. Consequently, it is very important to explore the effect of inter-observer
variability both on the reliable estimation of a model's uncertainty and on the model's performance in
a specific machine learning task. A thorough investigation of this issue is presented in this work by
leveraging automated estimates of machine learning model uncertainty, inter-observer variability and
segmentation task performance in lung CT scan images.
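The kind of comparison described here can be illustrated on a toy 1-D "scan": both annotator disagreement and model uncertainty are summarized per pixel and then related to each other. The boundary positions below are hypothetical stand-ins for real rater masks and Monte Carlo dropout samples:

```python
import numpy as np

N = 64  # toy 1-D "scan"; the true structure boundary sits near index 32

def masks_from_cuts(cuts):
    # Each rater / model sample segments everything left of its cut point.
    return np.stack([(np.arange(N) < c).astype(float) for c in cuts])

# Hypothetical boundary placements, all within a few pixels of the truth.
annotators = masks_from_cuts([30, 31, 33, 34])          # inter-observer
mc_samples = masks_from_cuts([29, 30, 31, 32, 33, 35])  # MC-dropout samples

observer_var = annotators.var(axis=0)  # per-pixel rater disagreement
model_var = mc_samples.var(axis=0)     # per-pixel model uncertainty

# Both maps vanish away from the boundary and peak near index 32, so they
# are positively correlated -- the relationship the study quantifies on
# real lung CT segmentations.
corr = np.corrcoef(observer_var, model_var)[0, 1]
print(round(corr, 2))
```

With realistic data the correlation is of course weaker and noisier; the point is only that both sources of variability can be estimated automatically and compared on the same per-pixel footing.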
Finally, we present an overview of existing anomaly detection methods in medical imaging. This
state-of-the-art survey covers both conventional pattern recognition methods and deep learning based
methods, and is one of the first literature surveys in this specific research area.
Unsupervised 3D Brain Anomaly Detection
Anomaly detection (AD) is the identification of data samples that do not fit
a learned data distribution. As such, AD systems can help physicians to
determine the presence, severity, and extension of a pathology. Deep generative
models, such as Generative Adversarial Networks (GANs), can be exploited to
capture anatomical variability. Consequently, any outlier (i.e., sample falling
outside of the learned distribution) can be detected as an abnormality in an
unsupervised fashion. By using this method, we can not only detect expected or
known lesions but also unveil previously unrecognized biomarkers. To
the best of our knowledge, this study exemplifies the first AD approach that
can efficiently handle volumetric data and detect 3D brain anomalies in one
single model. Our proposal is a volumetric and high-detail extension of the 2D
f-AnoGAN model obtained by combining a state-of-the-art 3D GAN with refinement
training steps. In experiments using non-contrast computed tomography images
from traumatic brain injury (TBI) patients, the model detects and localizes TBI
abnormalities with an area under the ROC curve of ~75%. Moreover, we test the
potential of the method for detecting other anomalies such as low quality
images, preprocessing inaccuracies, artifacts, and even the presence of
post-operative signs (such as a craniectomy or a brain shunt). The method has
potential for rapidly labeling abnormalities in massive imaging datasets, as
well as identifying new biomarkers.
Comment: Accepted at BrainLes Workshop in MICCAI 202
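The reported area under the ROC curve can be computed directly from per-scan anomaly scores without choosing a threshold, via the pairwise (Mann-Whitney) form of the AUC. A minimal sketch with hypothetical scores:

```python
import numpy as np

def roc_auc(scores, labels):
    # Pairwise (Mann-Whitney) form of the AUC: the probability that a
    # randomly chosen abnormal scan receives a higher anomaly score than a
    # randomly chosen normal one, counting ties as one half.
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical per-scan anomaly scores; label 1 marks a TBI scan.
scores = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
labels = [1, 1, 0, 1, 0, 0]
print(round(roc_auc(scores, labels), 3))  # → 0.889
```

An AUC of ~75%, as in the abstract, means roughly three out of four such abnormal/normal pairs are ranked correctly by the anomaly score.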
Uninformed Students: Student-Teacher Anomaly Detection with Discriminative Latent Embeddings
We introduce a powerful student-teacher framework for the challenging problem
of unsupervised anomaly detection and pixel-precise anomaly segmentation in
high-resolution images. Student networks are trained to regress the output of a
descriptive teacher network that was pretrained on a large dataset of patches
from natural images. This circumvents the need for prior data annotation.
Anomalies are detected when the outputs of the student networks differ from
those of the teacher network. This happens when the students fail to generalize outside
the manifold of anomaly-free training data. The intrinsic uncertainty in the
student networks is used as an additional scoring function that indicates
anomalies. We compare our method to a large number of existing deep learning
based methods for unsupervised anomaly detection. Our experiments demonstrate
improvements over state-of-the-art methods on a number of real-world datasets,
including the recently introduced MVTec Anomaly Detection dataset that was
specifically designed to benchmark anomaly segmentation algorithms.
Comment: Accepted to CVPR 202
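The two scoring signals described above, regression error against the teacher and the spread among students, can be sketched with toy networks. The linear "students" below are illustrative stand-ins that by construction match the teacher only on the anomaly-free manifold:

```python
import numpy as np

rng = np.random.default_rng(1)
W_t = rng.normal(size=(4, 8))  # fixed "teacher" feature map
W_s = rng.normal(size=(3, 8))  # one drift direction per student

def teacher(x):
    return np.tanh(x @ W_t)

def student(x, i):
    # Toy model of the key property: students were trained to match the
    # teacher on anomaly-free data (first feature near 0), so off that
    # manifold their outputs drift away from the teacher and each other.
    drift = np.abs(x[..., :1]) * W_s[i]
    return teacher(x) + drift

def anomaly_score(x):
    outs = np.stack([student(x, i) for i in range(len(W_s))])
    mean = outs.mean(axis=0)
    regression_error = np.sum((mean - teacher(x)) ** 2, axis=-1)
    predictive_variance = outs.var(axis=0).sum(axis=-1)  # student uncertainty
    return regression_error + predictive_variance

normal_x = np.concatenate([np.zeros((10, 1)), rng.normal(size=(10, 3))], axis=1)
anomalous_x = normal_x.copy()
anomalous_x[:, 0] = 3.0  # push samples off the training manifold
print(anomaly_score(anomalous_x).min() > anomaly_score(normal_x).max())  # → True
```

Combining the two terms is what lets the framework score anomalies both where students disagree with the teacher and where they disagree with each other.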
GaIA: Graphical Information Gain based Attention Network for Weakly Supervised Point Cloud Semantic Segmentation
While point cloud semantic segmentation is a significant task in 3D scene
understanding, this task demands a time-consuming process of fully annotating
labels. To address this problem, recent studies adopt a weakly supervised
learning approach under sparse annotation. Different from existing studies, this
study aims to reduce the epistemic uncertainty, measured by entropy, to achieve
precise semantic segmentation. We propose a graphical information gain based
attention network called GaIA, which alleviates the entropy of each point based
on reliable information. The graphical information gain identifies reliable
points by employing the relative entropy between a target point and its
neighborhoods. We further introduce an anchor-based additive angular margin
loss, ArcPoint, which optimizes unlabeled points with high entropy towards
semantically similar classes of the labeled points on the hypersphere.
Experimental results on the S3DIS and ScanNet-v2 datasets demonstrate that our
framework outperforms existing weakly supervised methods. We have released GaIA
at https://github.com/Karel911/GaIA.
Comment: WACV 2023 accepted paper
Leveraging the Bhattacharyya coefficient for uncertainty quantification in deep neural networks
Modern deep learning models achieve state-of-the-art results for many tasks in computer vision, such as image classification and segmentation. However, their adoption in high-risk applications, e.g. automated medical diagnosis systems, happens at a slow pace. One of the main reasons for this is that regular neural networks do not capture uncertainty. To assess uncertainty in classification, several techniques have been proposed that cast neural network approaches in a Bayesian setting. Amongst these techniques, Monte Carlo dropout is by far the most popular. This particular technique estimates the moments of the output distribution through sampling with different dropout masks. The output uncertainty of a neural network is then approximated as the sample variance. In this paper, we highlight the limitations of such a variance-based uncertainty metric and propose a novel approach. Our approach is based on the overlap between the output distributions of different classes. We show that our technique leads to a better approximation of the inter-class output confusion. We illustrate the advantages of our method using benchmark datasets. In addition, we apply our metric to skin lesion classification, a real-world use case, and show that it yields promising results.
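The Bhattacharyya coefficient in the title measures exactly this kind of distribution overlap. A minimal sketch, using the closed form for two univariate Gaussians fitted to hypothetical per-class MC-dropout samples (the sample data is invented for illustration):

```python
import numpy as np

def bhattacharyya_coefficient(mu1, var1, mu2, var2):
    # Closed form for two univariate Gaussians: BC = exp(-D_B), where D_B is
    # the Bhattacharyya distance; BC = 1 for identical distributions and
    # tends to 0 as their overlap vanishes.
    d_b = (0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
           + 0.5 * np.log((var1 + var2) / (2.0 * np.sqrt(var1 * var2))))
    return np.exp(-d_b)

rng = np.random.default_rng(2)
# Hypothetical MC-dropout samples of two class outputs (one value per
# stochastic forward pass), summarized by Gaussian moment estimates.
class_a = rng.normal(2.0, 0.5, size=200)
class_b = rng.normal(2.2, 0.5, size=200)  # heavy overlap -> easily confused
class_c = rng.normal(5.0, 0.5, size=200)  # well separated

def overlap(s1, s2):
    return bhattacharyya_coefficient(s1.mean(), s1.var(), s2.mean(), s2.var())

print(overlap(class_a, class_b) > overlap(class_a, class_c))  # → True
```

Unlike a single pooled variance, the overlap directly reflects how confusable two classes are: classes a and b here have similar spread to class c but a much higher coefficient.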
Improving Uncertainty Estimation With Semi-supervised Deep Learning for COVID-19 Detection Using Chest X-ray Images
In this work we implement a COVID-19 infection detection system based on chest X-ray images with uncertainty estimation. Uncertainty estimation is vital for the safe usage of computer-aided diagnosis tools in medical applications. Model estimations with high uncertainty should be carefully analyzed by a trained radiologist. We aim to improve uncertainty estimation using unlabelled data through the MixMatch semi-supervised framework. We test popular uncertainty estimation approaches, comprising softmax scores, Monte Carlo dropout and deterministic uncertainty quantification. To compare the reliability of the uncertainty estimates, we propose the use of the Jensen-Shannon distance between the uncertainty distributions of correct and incorrect estimations. This metric is statistically relevant, unlike most previously used metrics, which often ignore the distribution of the uncertainty estimations. Our test results show a significant improvement in uncertainty estimates when using unlabelled data. The best results are obtained with the Monte Carlo dropout method.
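The proposed reliability metric can be sketched on hypothetical uncertainty histograms: a good estimator puts its incorrect predictions at high uncertainty, so the two histograms are far apart in Jensen-Shannon distance. The bin values below are invented for illustration:

```python
import numpy as np

def js_distance(p, q, eps=1e-12):
    # Jensen-Shannon distance (square root of the JS divergence, base-2
    # logs) between two discrete distributions; it lies in [0, 1].
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2((a + eps) / (b + eps)))
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

# Hypothetical uncertainty histograms over shared bins (low -> high
# uncertainty) for correct vs. incorrect predictions.
correct        = np.array([0.70, 0.20, 0.07, 0.03])
incorrect_good = np.array([0.05, 0.10, 0.25, 0.60])  # reliable estimator
incorrect_poor = np.array([0.60, 0.25, 0.10, 0.05])  # looks like `correct`

print(js_distance(correct, incorrect_good) >
      js_distance(correct, incorrect_poor))  # → True
```

Because the metric compares whole distributions rather than, say, mean uncertainties, an estimator whose mistakes are indistinguishable from its correct predictions scores near zero regardless of its overall uncertainty level.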