Analyzing the Quality and Challenges of Uncertainty Estimations for Brain Tumor Segmentation
Automatic segmentation of brain tumors has the potential to enable volumetric measurements and high-throughput analysis in the clinical setting. This potential seems almost within reach, given the steady increase in segmentation accuracy. However, despite this accuracy, current methods still do not meet the robustness levels required for patient-centered clinical use. In this regard, uncertainty estimates are a promising direction for improving the robustness of automated segmentation systems. Different uncertainty estimation methods have been proposed, but little is known about their usefulness and limitations for brain tumor segmentation. In this study, we present an analysis of the most commonly used uncertainty estimation methods with regard to their benefits and challenges for brain tumor segmentation. We evaluated their quality in terms of calibration, segmentation error localization, and segmentation failure detection. Our results show that the uncertainty methods are typically well calibrated when evaluated at the dataset level. Evaluated at the subject level, we found notable miscalibration and limited segmentation error localization (e.g., for correcting segmentations), which hinder the direct use of the voxel-wise uncertainties. Nevertheless, voxel-wise uncertainty showed value for detecting failed segmentations when uncertainty estimates are aggregated at the subject level. We therefore suggest careful use of voxel-wise uncertainty measures and highlight the importance of developing solutions that address the subject-level requirements on calibration and segmentation error localization.
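The aggregation step the abstract describes — reducing a voxel-wise uncertainty map to one subject-level score for failure detection — can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation; the entropy measure, function names, and toy volumes are assumptions:

```python
import numpy as np

def voxel_entropy(probs, eps=1e-12):
    """Voxel-wise predictive entropy from per-class probabilities.

    probs: array of shape (C, X, Y, Z) with class probabilities per voxel.
    """
    return -np.sum(probs * np.log(probs + eps), axis=0)

def subject_uncertainty(probs):
    """Aggregate voxel-wise entropy to a single subject-level score
    (here simply the mean entropy over all voxels)."""
    return float(voxel_entropy(probs).mean())

# Toy example: a confident and an uncertain two-class "volume" (hypothetical data).
confident = np.stack([np.full((4, 4, 4), 0.99), np.full((4, 4, 4), 0.01)])
uncertain = np.stack([np.full((4, 4, 4), 0.55), np.full((4, 4, 4), 0.45)])

# A high subject-level score would flag the case for manual review.
assert subject_uncertainty(uncertain) > subject_uncertainty(confident)
```

In practice the aggregation function (mean, percentile, volume-weighted sum) is itself a design choice that affects how well failed segmentations are detected.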
Uncertainty categories in medical image segmentation: a study of source-related diversity
Measuring uncertainties in the output of a deep learning method is useful in
several ways, such as in assisting with interpretation of the outputs, helping
build confidence with end users, and for improving the training and performance
of the networks. Several different methods have been proposed to estimate
uncertainties, including those from epistemic (relating to the model used) and
aleatoric (relating to the data) sources using test-time dropout and
augmentation, respectively. Not only are these uncertainty sources different,
but they are governed by parameter settings (e.g., dropout rate or type and
level of augmentation) that establish even more distinct uncertainty
categories. This work investigates how the uncertainties from these
categories differ in both magnitude and spatial pattern, to empirically
address the question of whether they provide usefully distinct information
that should be captured whenever uncertainties are used. We use the
well-characterised BraTS challenge dataset to demonstrate that there are
substantial differences in both the magnitude and the spatial pattern of
uncertainties from the different categories, and discuss the implications
for various use cases.
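The two uncertainty categories the abstract contrasts can be sketched with a toy model. Everything here is a hypothetical stand-in (the `predict` function, the additive-noise "augmentation") meant only to show the mechanics of test-time dropout (epistemic) versus test-time augmentation (aleatoric), not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(x, dropout_rate=0.0):
    """Stand-in for a network forward pass. A real model applies dropout
    to hidden activations; this toy simulates it by randomly zeroing
    input features (hypothetical model, fixed weights)."""
    mask = rng.random(x.shape) >= dropout_rate
    w = np.linspace(0.1, 1.0, x.size).reshape(x.shape)
    logit = np.sum(x * mask * w)
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid "probability"

def epistemic_uncertainty(x, n=50, dropout_rate=0.5):
    """Epistemic estimate: spread over stochastic forward passes with
    dropout kept active at test time (MC dropout)."""
    preds = np.array([predict(x, dropout_rate) for _ in range(n)])
    return preds.std()

def aleatoric_uncertainty(x, n=50, noise=0.1):
    """Aleatoric estimate: spread over test-time augmentations (additive
    noise here stands in for flips/rotations/intensity shifts)."""
    preds = np.array([predict(x + rng.normal(0, noise, x.shape)) for _ in range(n)])
    return preds.std()

x = np.ones((4, 4))
e = epistemic_uncertainty(x)
a = aleatoric_uncertainty(x)
# Both are non-negative; their relative magnitudes depend on the dropout
# rate and augmentation strength, which is exactly the parameter-setting
# dependence the abstract points out.
```

Note that the dropout rate and the type/level of augmentation act as knobs: changing them changes both the magnitude and the spatial pattern of the resulting maps, which is why the paper treats each parameter setting as its own uncertainty category.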
Curved Gabor Filters for Fingerprint Image Enhancement
Gabor filters play an important role in many application areas for the
enhancement of various types of images and the extraction of Gabor features.
For the purpose of enhancing curved structures in noisy images, we introduce
curved Gabor filters which locally adapt their shape to the direction of flow.
These curved Gabor filters enable the choice of filter parameters which
increase the smoothing power without creating artifacts in the enhanced image.
In this paper, curved Gabor filters are applied to the curved ridge and valley
structure of low-quality fingerprint images. First, we combine two orientation
field estimation methods in order to obtain a more robust estimation for very
noisy images. Next, curved regions are constructed by following the respective
local orientation and they are used for estimating the local ridge frequency.
Lastly, curved Gabor filters are defined based on curved regions and they are
applied for the enhancement of low-quality fingerprint images. Experimental
results on the FVC2004 databases show that this approach improves on
state-of-the-art enhancement methods.
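A straight Gabor kernel — the starting point that the curved variant bends along the local ridge flow — can be written down directly. The parameter values below are illustrative, and the orientation-field estimation, curved-region construction, and frequency-estimation steps of the paper are omitted:

```python
import numpy as np

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, freq=0.1):
    """Standard (straight) Gabor kernel: a Gaussian envelope modulating a
    cosine carrier oriented along angle `theta`, with spatial frequency
    `freq` in cycles per pixel. Curved Gabor filters additionally bend
    the sampling grid along the local ridge flow; that step is not
    reproduced here."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the carrier runs along theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * freq * xr)
    return envelope * carrier

# One kernel per local orientation; in fingerprint enhancement the
# orientation and frequency come from the estimated orientation field
# and ridge frequency at each pixel.
k = gabor_kernel(theta=np.pi / 6)
```

The smoothing-power/artifact trade-off the abstract mentions corresponds to enlarging `sigma` (and the kernel support) — straight kernels then cut across curved ridges, whereas curved kernels can stay aligned with them.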
pymia: A Python package for data handling and evaluation in deep learning-based medical image analysis
Background and Objective: Deep learning enables tremendous progress in
medical image analysis. One driving force of this progress is open-source
frameworks such as TensorFlow and PyTorch. However, these frameworks rarely
address issues specific to the domain of medical image analysis, such as 3-D
data handling and distance metrics for evaluation. pymia, an open-source Python
package, tries to address these issues by providing flexible data handling and
evaluation independent of the deep learning framework.
Methods: The pymia package provides data handling and evaluation
functionalities. The data handling allows flexible medical image handling in
every commonly used format (e.g., 2-D, 2.5-D, and 3-D; full- or patch-wise).
Even data beyond images like demographics or clinical reports can easily be
integrated into deep learning pipelines. The evaluation allows stand-alone
result calculation and reporting, as well as performance monitoring during
training, using a wide range of domain-specific metrics for segmentation,
reconstruction, and regression.
Results: The pymia package is highly flexible, allows for fast prototyping,
and reduces the burden of implementing data handling routines and evaluation
methods. While data handling and evaluation are independent of the deep
learning framework used, they can easily be integrated into TensorFlow and
PyTorch pipelines. The developed package was successfully used in a variety of
research projects for segmentation, reconstruction, and regression.
Conclusions: The pymia package fills the gap of current deep learning
frameworks regarding data handling and evaluation in medical image analysis. It
is available at https://github.com/rundherum/pymia and can directly be
installed from the Python Package Index using pip install pymia.
Comment: first and last authors contributed equally.
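As an illustration of the kind of domain-specific segmentation metric such a package reports, here is a from-scratch Dice coefficient in NumPy. This is not pymia's API — just the underlying computation, with hypothetical toy masks:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice coefficient between two binary masks: twice the overlap
    divided by the total foreground size. One of the standard
    segmentation metrics a medical-imaging evaluation package reports."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Two offset 4x4 squares on an 8x8 grid; the overlap is a 3x3 block,
# so Dice = 2*9 / (16 + 16) = 0.5625.
pred = np.zeros((8, 8), dtype=int); pred[2:6, 2:6] = 1
target = np.zeros((8, 8), dtype=int); target[3:7, 3:7] = 1
print(round(dice_coefficient(pred, target), 4))  # 0.5625
```

Computing such metrics framework-independently — on NumPy arrays rather than framework tensors — is what lets the same evaluation code sit next to either a TensorFlow or a PyTorch training loop.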
Deep spectral learning for label-free optical imaging oximetry with uncertainty quantification
Measurement of blood oxygen saturation (sO2) by optical imaging oximetry provides invaluable insight into local tissue function and metabolism. Despite different embodiments and modalities, all label-free optical-imaging oximetry techniques exploit the same principle of sO2-dependent spectral contrast from haemoglobin. Traditional approaches for quantifying sO2 often rely on analytical models fitted to the spectral measurements. In practice, these approaches suffer from uncertainties due to biological variability, tissue geometry, light scattering, systemic spectral bias, and variations in the experimental conditions. Here, we propose a new data-driven approach, termed deep spectral learning (DSL), to achieve oximetry that is highly robust to experimental variations and, more importantly, able to provide uncertainty quantification for each sO2 prediction. To demonstrate the robustness and generalizability of DSL, we analyse data from two visible-light optical coherence tomography (vis-OCT) setups across two separate in vivo experiments on rat retinas. Predictions made by DSL are highly adaptive to experimental variability as well as to the depth-dependent backscattering spectra. Two neural-network-based models are tested and compared with the traditional least-squares fitting (LSF) method. The DSL-predicted sO2 shows significantly lower mean-square errors than that of the LSF. For the first time, we demonstrate en face maps of retinal oximetry along with a pixel-wise confidence assessment. Our DSL overcomes several limitations of traditional approaches and provides a more flexible, robust, and reliable deep-learning approach for in vivo, non-invasive, label-free optical oximetry.
Funding: R01 CA224911 - NCI NIH HHS; R01 CA232015 - NCI NIH HHS; R01 NS108464 - NINDS NIH HHS; R21 EY029412 - NEI NIH HHS. Accepted manuscript.
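One generic way a regression network can output a per-pixel confidence alongside its sO2 estimate is a heteroscedastic Gaussian negative log-likelihood, where the model predicts a mean and a log-variance per pixel. The paper's exact loss may differ, and all numbers below are hypothetical:

```python
import numpy as np

def gaussian_nll(y_true, mu, log_var):
    """Heteroscedastic Gaussian negative log-likelihood (up to a constant):
    training a network to output both a mean estimate `mu` and a
    log-variance `log_var` per pixel lets the predicted variance serve
    as a pixel-wise confidence map. Generic formulation, not necessarily
    the loss used in the paper."""
    return np.mean(0.5 * (np.exp(-log_var) * (y_true - mu) ** 2 + log_var))

y = np.array([0.95, 0.90, 0.60])     # hypothetical true sO2 values
mu = np.array([0.93, 0.88, 0.90])    # predictions; the third pixel is far off
# Admitting low confidence (larger log-variance) on the badly predicted
# pixel yields a lower loss than claiming high confidence everywhere,
# which is what drives the network to produce honest confidence maps.
low_conf = gaussian_nll(y, mu, np.array([-4.0, -4.0, -1.0]))
high_conf = gaussian_nll(y, mu, np.array([-4.0, -4.0, -4.0]))
```

The `log_var` output, exponentiated, is what an en face confidence map would visualize next to the sO2 map.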
Self-training with dual uncertainty for semi-supervised medical image segmentation
In the field of semi-supervised medical image segmentation, the shortage of
labeled data is the fundamental problem. How to effectively learn image
features from unlabeled images to improve segmentation accuracy is the main
research direction in this field. Traditional self-training methods can
partially solve the problem of insufficient labeled data by generating pseudo
labels for iterative training. However, noise generated due to the model's
uncertainty during training directly affects the segmentation results.
Therefore, we added sample-level and pixel-level uncertainty, built on the
self-training framework, to stabilize the training process. Specifically, we
saved several checkpoints of the model during pre-training and used the
difference between their predictions on an unlabeled sample as that sample's
sample-level uncertainty estimate. We then gradually added unlabeled samples
from easy to hard during training. At the same time, we added a second
decoder with a different upsampling method to the segmentation network and
used the difference between the outputs of the two decoders as pixel-level
uncertainty. In short, we selectively retrained unlabeled samples and
assigned pixel-level uncertainty to pseudo labels to optimize the
self-training process. We compared the
segmentation results of our model with five semi-supervised approaches on the
public 2017 ACDC dataset and 2018 Prostate dataset. Our proposed method
achieves better segmentation performance on both datasets under the same
settings, demonstrating its effectiveness, robustness, and potential
transferability to other medical image segmentation tasks.
Keywords: medical image segmentation, semi-supervised learning, self-training, uncertainty estimation.
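The two uncertainty signals described above can be sketched as follows. This is a hedged NumPy toy: the checkpoint predictions stand in for the model snapshots saved during pre-training, and the two arrays passed to the pixel-level function stand in for the outputs of the two decoders:

```python
import numpy as np

def sample_uncertainty(snapshot_preds):
    """Sample-level uncertainty: disagreement (standard deviation) across
    predictions from model checkpoints saved at different moments of
    pre-training, averaged over all pixels of one unlabeled image.
    snapshot_preds: (n_snapshots, H, W) foreground probabilities."""
    return float(np.std(snapshot_preds, axis=0).mean())

def pixel_uncertainty(decoder_a, decoder_b):
    """Pixel-level uncertainty: absolute difference between the outputs
    of two decoders that use different upsampling methods. High values
    mark pseudo-label pixels to down-weight."""
    return np.abs(decoder_a - decoder_b)

rng = np.random.default_rng(1)
# Hypothetical checkpoint predictions for an "easy" and a "hard" image.
easy = np.clip(0.9 + 0.01 * rng.standard_normal((3, 8, 8)), 0.0, 1.0)
hard = np.clip(0.5 + 0.20 * rng.standard_normal((3, 8, 8)), 0.0, 1.0)

# Low-disagreement samples would enter the self-training pool first,
# implementing the easy-to-hard curriculum.
assert sample_uncertainty(easy) < sample_uncertainty(hard)
```

Both signals attack the same failure mode — noisy pseudo labels — but at different granularities: the sample-level score decides *whether and when* an image is used, while the pixel-level map decides *how much* each pseudo-label pixel is trusted.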