
    Calibrated Predictive Uncertainty in Medical Imaging with Bayesian Deep Learning

    The use of medical imaging has revolutionized modern medicine over the last century, providing insight into human anatomy and physiology; many diseases and pathologies can only be diagnosed with imaging techniques. Due to increasing availability and decreasing costs, the number of medical imaging examinations is growing continuously, resulting in a huge amount of data that has to be assessed by medical experts. Computers can assist in and automate medical image analysis, and recent advances in deep learning allow this to be done with reasonable accuracy and at scale. The biggest disadvantage of these methods in practice is their black-box nature: although they achieve the highest accuracy, their acceptance in clinical practice may be limited by their lack of interpretability and transparency. These concerns are reinforced by the core problem this dissertation addresses: the overconfidence of deep models in incorrect predictions. How do we know if we do not know? This thesis deals with Bayesian methods for estimating predictive uncertainty in medical imaging with deep learning. We show that the uncertainty obtained from variational Bayesian inference is miscalibrated and does not represent the predictive error well. To quantify miscalibration, we propose the uncertainty calibration error, which alleviates disadvantages of existing calibration metrics. Moreover, we introduce logit scaling for deep Bayesian Monte Carlo methods to calibrate uncertainty after training. Calibrated deep Bayesian models are better at detecting false predictions and out-of-distribution data. Bayesian uncertainty is further leveraged to reduce the economic burden of the large-scale data labeling needed to train deep models: we propose BatchPL, a sample acquisition scheme that selects highly informative samples for pseudo-labeling in self- and unsupervised learning scenarios, achieving state-of-the-art performance on both medical and non-medical classification data sets. Many medical imaging problems go beyond classification. We therefore extend the estimation and calibration of predictive uncertainty to deep regression (sigma scaling) and evaluate it on different medical imaging regression tasks. Finally, to mitigate the problem of hallucinations in deep generative models, we provide a Bayesian approach to the deep image prior (MCDIP), which is not affected by hallucinations because the model only ever has access to a single image.
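
    As an illustration of the logit scaling idea described above, the following is a minimal NumPy/SciPy sketch that rescales Monte Carlo logit samples (e.g. from MC dropout) by a single temperature fitted on a held-out validation set. The array shapes, search bounds, and all function names are illustrative assumptions, not the dissertation's implementation.

```python
# Minimal sketch of post-hoc logit scaling for Monte Carlo predictions.
# Assumed inputs: logits_mc with shape (T, N, C) -- T stochastic forward
# passes over N validation samples with C classes -- and labels with shape (N,).
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mc_predictive(logits_mc, temperature=1.0):
    """Divide every logit by a scalar temperature, then average the
    per-pass softmax over the T Monte Carlo forward passes."""
    probs = softmax(logits_mc / temperature, axis=-1)  # (T, N, C)
    return probs.mean(axis=0)                          # (N, C)

def nll(temperature, logits_mc, labels):
    """Negative log-likelihood of the temperature-scaled predictive distribution."""
    p = mc_predictive(logits_mc, temperature)
    return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()

def fit_temperature(logits_mc, labels):
    """Post-hoc calibration: find the temperature that minimizes the NLL
    on the validation set; the model weights themselves are left untouched."""
    res = minimize_scalar(nll, bounds=(0.05, 10.0), args=(logits_mc, labels),
                          method="bounded")
    return res.x
```

    The fitted temperature is then reused at test time; because it only rescales the logits, accuracy is unchanged while confidence and uncertainty estimates become better calibrated.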

    Deep-learning-based 2.5D flow field estimation for maximum intensity projections of 4D optical coherence tomography

    In microsurgery, lasers have emerged as precise tools for bone ablation. A key challenge is the automatic control of laser bone ablation with 4D optical coherence tomography (OCT). As a high-resolution imaging modality, OCT provides volumetric images of tissue and yields information on bone position and orientation (pose) as well as thickness. However, existing approaches to OCT-based laser ablation control rely on external tracking systems or invasively ablated artificial landmarks for tracking the pose of the OCT probe relative to the tissue. This can be superseded by estimating the scene flow caused by the relative movement between the OCT-based laser ablation system and the patient. This paper therefore deals with 2.5D scene flow estimation of volumetric OCT images for application in laser ablation. We present a semi-supervised convolutional neural network based tracking scheme for subsequent 3D OCT volumes and apply it to a realistic semi-synthetic data set of ex vivo human temporal bone specimens. The scene flow is estimated in a two-stage approach. In the first stage, 2D lateral scene flow is computed on census-transformed en-face arguments-of-maximum intensity projections. Subsequently, the projections are warped by the predicted lateral flow and 1D depth flow is estimated. The neural network is trained in a semi-supervised manner by combining the error with respect to ground truth and the reconstruction error of warped images with an assumption of spatial flow smoothness. Quantitative evaluation reveals a mean endpoint error of (4.7 ± 3.5) voxels, or (27.5 ± 20.5) μm, for scene flow caused by simulated relative movement between the OCT probe and the bone. Scene flow estimation for 4D OCT enables markerless tracking of mastoid bone structures for image guidance in general and for automated laser ablation control in particular. © 2019 SPIE
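
    To make the projection inputs of the first stage more concrete, here is a small NumPy sketch that computes an en-face maximum intensity projection and the corresponding argument-of-maximum depth map from a 3D OCT volume, followed by a simple census transform. The 3×3 window, the assumed axis layout, and all function names are illustrative assumptions, not the paper's exact preprocessing.

```python
# Sketch of the 2.5D projection inputs: en-face maximum intensity projection (MIP),
# argument-of-maximum depth map, and a census transform of the projection.
import numpy as np

def enface_projections(volume):
    """volume: (H, W, D) OCT volume with depth along the last axis (assumed layout)."""
    mip = volume.max(axis=-1)         # (H, W) en-face maximum intensity projection
    arg_max = volume.argmax(axis=-1)  # (H, W) depth index of the maximum -> 2.5D depth info
    return mip, arg_max

def census_transform(image, window=3):
    """Per-pixel binary signature comparing each pixel with its neighbours in a
    window x window patch; invariant to monotonic intensity changes such as speckle gain."""
    r = window // 2
    h, w = image.shape
    padded = np.pad(image, r, mode="edge")
    bits = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[r + dy:r + dy + h, r + dx:r + dx + w]
            bits.append((shifted > image).astype(np.uint8))
    return np.stack(bits, axis=-1)    # (H, W, window*window - 1)
```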

    Semantic denoising autoencoders for retinal optical coherence tomography

    Noise in speckle-prone optical coherence tomography tends to obscure important details necessary for medical diagnosis. In this paper, a denoising approach that preserves disease characteristics on retinal optical coherence tomography images in ophthalmology is presented. We propose semantic denoising autoencoders, which combine a convolutional denoising autoencoder with a previously trained ResNet image classifier acting as a regularizer during training. This promotes the perceptibility of delicate details in the denoised images that are important for diagnosis while filtering out only uninformative background noise. With our approach, a higher peak signal-to-noise ratio (PSNR = 31.0 dB) and higher classification performance (F1 = 0.92) are achieved for denoised images compared to state-of-the-art denoising. It is shown that semantically regularized autoencoders are capable of denoising retinal OCT images without blurring details of diseases.
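
    The regularization idea can be summarized in a short PyTorch-style sketch: the training loss combines a pixel-wise reconstruction term with a cross-entropy term from a frozen, previously trained classifier applied to the denoised output. The function and parameter names (autoencoder, classifier, lambda_sem) are placeholders rather than the paper's code.

```python
import torch
import torch.nn.functional as F

def semantic_denoising_loss(autoencoder, classifier, noisy, clean, labels, lambda_sem=0.1):
    """Denoising autoencoder loss with a semantic regularizer: a frozen classifier
    (set to .eval(), parameters with requires_grad=False) must still recognize the
    disease class in the denoised image. Gradients flow through the classifier into
    the autoencoder, but the classifier's own weights are not updated."""
    denoised = autoencoder(noisy)
    rec_loss = F.mse_loss(denoised, clean)       # pixel-wise reconstruction
    logits = classifier(denoised)
    sem_loss = F.cross_entropy(logits, labels)   # keep diagnostic information intact
    return rec_loss + lambda_sem * sem_loss
```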