Probabilistic Intra-Retinal Layer Segmentation in 3-D OCT Images Using Global Shape Regularization
With the introduction of spectral-domain optical coherence tomography (OCT),
which significantly increased acquisition speed, the fast and accurate
segmentation of 3-D OCT scans has become ever more important. This paper
presents a novel probabilistic approach that models the appearance of retinal
layers as well as the global shape variations of layer boundaries. Given an OCT
scan, the full posterior distribution over segmentations is approximately
inferred using a variational method enabling efficient probabilistic inference
in terms of computationally tractable model components: Segmenting a full 3-D
volume takes around a minute. Accurate segmentations demonstrate the benefit of
using global shape regularization: We segmented 35 fovea-centered 3-D volumes
with an average unsigned error of 2.46 ± 0.22 µm, as well as 80 normal
and 66 glaucomatous 2-D circular scans with errors of 2.92 ± 0.53 µm
and 4.09 ± 0.98 µm, respectively. Furthermore, we utilized the inferred
posterior distribution to rate the quality of the segmentation, point out
potentially erroneous regions and discriminate normal from pathological scans.
No pre- or postprocessing was required and we used the same set of parameters
for all data sets, underlining the robustness and out-of-the-box nature of our
approach.
Comment: Accepted for publication in Medical Image Analysis (MIA), Elsevier.
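As a rough illustration of how such an inferred posterior could be used to rate segmentation quality and point out potentially erroneous regions, the sketch below flags A-scan columns whose boundary-position spread is large and derives a simple scan-level score. The arrays, threshold, and score are illustrative assumptions, not the paper's actual quality measure.

```python
import numpy as np

def flag_uncertain_columns(boundary_mean, boundary_std, std_threshold_um=5.0):
    """Flag A-scan columns whose posterior boundary-position spread is large.

    boundary_mean, boundary_std : arrays of shape (n_layers, n_ascans), in micrometres.
    Returns a boolean mask of potentially erroneous columns and a scan-level
    quality score (fraction of confidently segmented columns).
    """
    uncertain = (boundary_std > std_threshold_um).any(axis=0)   # any layer too uncertain
    quality = 1.0 - uncertain.mean()                            # simple scan-level score
    return uncertain, quality

# Toy usage with synthetic posterior summaries (2 layer boundaries, 512 A-scans).
rng = np.random.default_rng(0)
mean = rng.normal(100.0, 1.0, size=(2, 512))
std = np.abs(rng.normal(2.0, 1.5, size=(2, 512)))
mask, q = flag_uncertain_columns(mean, std)
print(f"{mask.sum()} uncertain columns, quality score {q:.2f}")
```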
When the machine does not know: measuring uncertainty in deep learning models of medical images
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
Recently, deep learning (DL), which involves powerful black-box predictors, has outperformed
human experts in several medical diagnostic problems. However, these methods focus
exclusively on improving the accuracy of point predictions without assessing their outputs’
quality and ignore the asymmetric cost involved in different types of misclassification errors.
Neural networks also do not deliver confidence in their predictions and suffer from over-
and under-confidence, i.e., they are not well calibrated. Knowing how much confidence there is in a
prediction is essential for gaining clinicians’ trust in the technology.
Calibrated uncertainty quantification is a challenging problem as no ground truth is
available. To address this, we make two observations: (i) cost-sensitive deep neural networks
with DropWeights better quantify calibrated predictive uncertainty, and (ii) estimated
uncertainty combined with point predictions in deep ensembles of Bayesian neural networks with
DropWeights can lead to more informed decisions and improved prediction quality.
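A minimal sketch of the kind of Monte Carlo weight-dropping ensemble alluded to here, written in PyTorch: weights of one layer are randomly zeroed at prediction time, softmax outputs are averaged over the sampled masks, and predictive entropy serves as the uncertainty. The toy network, drop rate, and the reading of DropWeights as Bernoulli masks over weights are assumptions for illustration, not the thesis implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Toy classifier used to illustrate Monte Carlo weight dropping."""
    def __init__(self, n_in=32, n_hidden=64, n_classes=3):
        super().__init__()
        self.fc1 = nn.Linear(n_in, n_hidden)
        self.fc2 = nn.Linear(n_hidden, n_classes)

    def forward(self, x, drop_rate=0.0):
        # Sample a Bernoulli mask over the output layer's weights on each call,
        # so repeated forward passes act as an ensemble of thinned networks.
        w = self.fc2.weight
        if drop_rate > 0:
            mask = torch.bernoulli(torch.full_like(w, 1.0 - drop_rate))
            w = w * mask / (1.0 - drop_rate)
        h = F.relu(self.fc1(x))
        return F.linear(h, w, self.fc2.bias)

def predict_with_uncertainty(model, x, n_samples=50, drop_rate=0.2):
    """Average softmax outputs over sampled weight masks; return mean and entropy."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x, drop_rate), dim=-1)
                             for _ in range(n_samples)])
    mean = probs.mean(0)
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(-1)  # predictive entropy
    return mean, entropy

model = SmallNet()
x = torch.randn(4, 32)
mean_probs, unc = predict_with_uncertainty(model, x)
print(mean_probs.shape, unc)
```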
This dissertation focuses on quantifying uncertainty using concepts from cost-sensitive
neural networks, calibration of confidence, and the DropWeights ensemble method. First, we
show how to improve predictive uncertainty with deep ensembles of neural networks with DropWeights,
which learn an approximate distribution over their weights, in medical image segmentation
and its application to active learning. Second, we use the Jackknife resampling technique
to correct bias in quantified uncertainty in image classification and propose metrics to measure
uncertainty performance. The third part of the thesis is motivated by the discrepancy
between the model's predictive error and the objective in quantified uncertainty when the costs of
misclassification errors are asymmetric or the datasets are unbalanced. We develop cost-sensitive
modifications of neural networks for disease detection and propose metrics to measure the
quality of quantified uncertainty. Finally, we leverage an adaptive binning strategy to measure
an uncertainty calibration error that corresponds directly to estimated uncertainty performance,
and we address problematic evaluation methods.
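The adaptive binning idea can be illustrated with an equal-mass variant of the expected calibration error, where bins hold roughly the same number of samples instead of spanning fixed-width confidence intervals. The exact metric, bin count, and toy data below are assumptions, not the thesis definition.

```python
import numpy as np

def adaptive_ece(confidences, correct, n_bins=10):
    """Expected calibration error with equal-mass (adaptive) bins.

    confidences : predicted probability of the chosen class, shape (N,)
    correct     : 1 if the prediction was right, else 0, shape (N,)
    Each bin holds roughly the same number of samples, avoiding the nearly
    empty fixed-width bins that make standard ECE estimates unreliable.
    """
    order = np.argsort(confidences)
    conf, corr = confidences[order], correct[order]
    ece = 0.0
    for idx in np.array_split(np.arange(len(conf)), n_bins):
        if len(idx) == 0:
            continue
        gap = abs(corr[idx].mean() - conf[idx].mean())   # |accuracy - confidence| in the bin
        ece += (len(idx) / len(conf)) * gap
    return ece

# Toy usage with simulated over-confident predictions.
rng = np.random.default_rng(1)
conf = rng.uniform(0.5, 1.0, 1000)
corr = (rng.uniform(size=1000) < conf - 0.1).astype(float)   # accuracy lags confidence
print(f"adaptive ECE: {adaptive_ece(conf, corr):.3f}")
```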
We evaluate the effectiveness of these tools on nuclei image segmentation, multi-class
brain MRI classification, multi-level cell-type-specific protein expression prediction in
immunohistochemistry (IHC) images, and cost-sensitive classification for COVID-19 detection
from X-ray and CT image datasets. Our approach is thoroughly validated by measuring the
quality of uncertainty. It produces equally good or better results and paves the way for
future work that addresses practical problems at the intersection of deep learning and Bayesian
decision theory.
In conclusion, our study highlights the opportunities and challenges of applying
estimated uncertainty, which represents the confidence of the model's predictions, in deep learning models of medical images; the uncertainty quality metrics show a significant improvement
when using deep ensembles of Bayesian neural networks with DropWeights.
Combining local regularity estimation and total variation optimization for scale-free texture segmentation
Texture segmentation constitutes a standard image processing task, crucial to
many applications. The present contribution focuses on the particular subset of
scale-free textures and its originality resides in the combination of three key
ingredients: First, texture characterization relies on the concept of local
regularity; Second, estimation of local regularity is based on new multiscale
quantities referred to as wavelet leaders; Third, segmentation from local
regularity faces a fundamental bias-variance trade-off: By nature, local
regularity estimation shows high variability that impairs the detection of
changes, while a posteriori smoothing of regularity estimates precludes
locating changes correctly. Instead, the present contribution proposes several
variational problem formulations, based on total variation and proximal
resolution, that effectively circumvent this trade-off. Estimation and
segmentation performance of the proposed procedures are quantified and
compared on synthetic as well as real-world textures.
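A toy sketch of the idea, using Chambolle's total-variation denoiser from scikit-image as a stand-in for the proximal formulations developed in the paper: a noisy local-regularity map is smoothed towards a piecewise-constant estimate and then thresholded into regions. The synthetic regularity map, TV weight, and thresholding rule are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

# Synthetic "local regularity" map: two regions with Hölder-like exponents
# 0.3 and 0.7, plus the heavy estimation noise the abstract mentions.
rng = np.random.default_rng(0)
h_true = np.full((128, 128), 0.3)
h_true[:, 64:] = 0.7
h_noisy = h_true + rng.normal(0.0, 0.25, h_true.shape)

# Total-variation smoothing favours piecewise-constant estimates, so the region
# change is preserved while within-region variability is suppressed.
h_tv = denoise_tv_chambolle(h_noisy, weight=0.5)

# A crude segmentation: threshold the smoothed regularity at its mean.
labels = (h_tv > h_tv.mean()).astype(int)
print("estimated boundary column (true 64):",
      int(np.argmax(np.diff(labels.mean(axis=0)) > 0.5)) + 1)
```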
Nonparametric neighborhood statistics for MRI denoising
Technical report.
This paper presents a novel method for denoising MR images that relies on optimal estimation, combining a likelihood model with an adaptive image prior. The method models images as random fields and exploits the properties of independent Rician noise to learn the higher-order statistics of image neighborhoods from corrupted input data. It uses these statistics as priors within a Bayesian denoising framework. The paper further presents an information-theoretic method for characterizing neighborhood structure using nonparametric density estimation. The formulation generalizes easily to simultaneous denoising of multimodal MRI, exploiting the relationships between modalities to further enhance performance. The method, relying on the information content of the input data for noise estimation and for setting important parameters, does not require significant parameter tuning. Qualitative and quantitative results on real, simulated, and multimodal data, including comparisons with other approaches, demonstrate the effectiveness of the method.
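The flavour of the approach, learning neighborhood statistics nonparametrically from the corrupted data itself, can be sketched as follows: Rician noise is simulated on a magnitude image, small patches are sampled, and a Parzen-window (Gaussian KDE) density over patches is fitted and evaluated. This is a toy illustration under assumed patch sizes and sample counts, not the paper's information-theoretic denoising framework.

```python
import numpy as np
from scipy.stats import gaussian_kde

def add_rician_noise(image, sigma, rng):
    """Rician-corrupted magnitude image: |image + n1 + i*n2| with Gaussian n1, n2."""
    n1 = rng.normal(0.0, sigma, image.shape)
    n2 = rng.normal(0.0, sigma, image.shape)
    return np.sqrt((image + n1) ** 2 + n2 ** 2)

def patch_log_density(image, patch=3, max_patches=2000, rng=None):
    """Parzen-window (KDE) estimate of the density of image neighborhoods,
    learned directly from the corrupted data and evaluated on the same patches."""
    rng = rng or np.random.default_rng(0)
    H, W = image.shape
    ys = rng.integers(0, H - patch, max_patches)
    xs = rng.integers(0, W - patch, max_patches)
    patches = np.stack([image[y:y + patch, x:x + patch].ravel()
                        for y, x in zip(ys, xs)])
    kde = gaussian_kde(patches.T)          # nonparametric neighborhood statistics
    return kde.logpdf(patches.T)           # higher log-density = more "typical" patch

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 100.0
noisy = add_rician_noise(clean, sigma=10.0, rng=rng)
logp = patch_log_density(noisy, rng=rng)
print("median patch log-density:", np.median(logp))
```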
Noise Estimation, Noise Reduction and Intensity Inhomogeneity Correction in MRI Images of the Brain
Rician noise and intensity inhomogeneity are two common types of image degradation that arise during the acquisition of brain images by magnetic resonance imaging (MRI) systems. Many noise reduction and intensity inhomogeneity correction algorithms are based on strong parametric assumptions. These parametric assumptions are generic and do not account for salient features that are unique to specific classes and different levels of degradation in natural images.
This thesis proposes the 4-neighborhood clique system in a layer-structured Markov random field (MRF) model for noise estimation and noise reduction. When the test image is the only physical system under consideration, it is regarded as a single layer Markov random field (SLMRF) model, and as a double layer MRF model when the test images and classical priors are considered.
It is widely acknowledged that segmentation trivializes the task of bias field correction, and that the bias field distorts the intensity but not the spatial attributes of an image. This thesis exploits these two principles to propose a new model for the correction of intensity inhomogeneity.
The noise estimation algorithm is invariant to the presence or absence of background features in an image and more accurate in the estimation of noise levels because it is potentially immune to the modeling errors inherent in some current state-of-the-art algorithms. The noise reduction algorithm derived from the SLMRF model does not incorporate a regularization parameter. Furthermore, it preserves edges, and its output is devoid of the blurring and ringing artifacts associated with Gaussian and wavelet-based algorithms. The procedure for correction of intensity inhomogeneity does not require the computationally intensive task of estimating the bias field map. Furthermore, there is no requirement for a digital brain atlas, which would introduce additional image processing tasks such as image registration.
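A minimal sketch of inference on a single-layer MRF with a 4-neighborhood clique system, using iterated conditional modes and a quadratic data term as a stand-in for the Rician likelihood. The energy, label set, and parameters are illustrative assumptions rather than the thesis algorithm.

```python
import numpy as np

def icm_denoise(noisy, beta=0.8, n_iter=5, levels=None):
    """Iterated conditional modes on a single-layer MRF with 4-neighbour cliques.

    Per-pixel energy: (x - y)^2 / 2 + beta * sum over the 4 neighbours of (x_p - x_q)^2,
    with the quadratic data term standing in for the Rician likelihood.
    """
    levels = levels if levels is not None else np.linspace(noisy.min(), noisy.max(), 32)
    x = noisy.copy()
    H, W = x.shape
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                nbrs = [x[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                        if 0 <= a < H and 0 <= b < W]
                # Pick the level that minimises the local data + smoothness energy.
                energies = [(l - noisy[i, j])**2 / 2 + beta * sum((l - n)**2 for n in nbrs)
                            for l in levels]
                x[i, j] = levels[int(np.argmin(energies))]
    return x

rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + rng.normal(0, 0.3, clean.shape)
denoised = icm_denoise(noisy)
print("MSE before:", np.mean((noisy - clean)**2), "after:", np.mean((denoised - clean)**2))
```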