
    Quality assurance for automatically generated contours with additional deep learning

    Objective: Deploying an automatic segmentation model in practice should involve rigorous quality assurance (QA) and continuous monitoring of the model's use and performance, particularly in high-stakes scenarios such as healthcare. Currently, however, tools to assist with QA for such models are not available to AI researchers. In this work, we build a deep learning model that estimates the quality of automatically generated contours. Methods: The model was trained to predict segmentation quality by outputting an estimate of the Dice similarity coefficient given an image-contour pair as input. Our dataset contained 60 axial T2-weighted MRI images of prostates with ground truth segmentations, along with 80 automatically generated segmentation masks. The model we used was a 3D version of the EfficientDet architecture with a custom regression head. For validation, we used fivefold cross-validation. To counteract the limitation of the small dataset, we used an extensive data augmentation scheme capable of producing virtually infinite training samples from a single ground truth label mask. In addition, we compared the results against a baseline model that uses only clinical variables for its predictions. Results: Our model achieved a mean absolute error of 0.020 ± 0.026 (2.2% mean percentage error) in estimating the Dice score, with a rank correlation of 0.42. Furthermore, the model correctly identified incorrect segmentations (defined in terms of acceptable/unacceptable) 99.6% of the time. Conclusion: We believe that the trained model can be used alongside automatic segmentation tools to ensure quality and thus allow intervention to prevent undesired segmentation behavior.
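
    The sketch below illustrates the core idea in PyTorch (an assumed framework): compute the Dice similarity coefficient between an automatic mask and its ground truth as the regression target, and feed the image-mask pair to a small 3D CNN that predicts that score. The network shown is a toy stand-in, not the paper's 3D EfficientDet with its custom regression head, and all names and shapes are illustrative.

import torch
import torch.nn as nn

def dice_coefficient(pred_mask, gt_mask, eps=1e-6):
    """Dice similarity coefficient between two binary masks (the regression target)."""
    intersection = (pred_mask * gt_mask).sum()
    return (2.0 * intersection + eps) / (pred_mask.sum() + gt_mask.sum() + eps)

class DiceRegressor(nn.Module):
    """Maps a 2-channel volume (image + candidate mask) to an estimated Dice score in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(2, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(16, 1), nn.Sigmoid())

    def forward(self, x):
        return self.head(self.features(x)).squeeze(-1)

# Toy example: one synthetic volume, an "automatic" mask, and a ground-truth mask.
image = torch.rand(1, 1, 32, 32, 32)
auto_mask = (torch.rand(1, 1, 32, 32, 32) > 0.5).float()
gt_mask = (torch.rand(1, 1, 32, 32, 32) > 0.5).float()

target_dice = dice_coefficient(auto_mask, gt_mask)                # regression target
model = DiceRegressor()
pred_dice = model(torch.cat([image, auto_mask], dim=1))           # estimated Dice score
loss = nn.functional.l1_loss(pred_dice, target_dice.unsqueeze(0)) # MAE, the metric reported above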

    Pairing an arbitrary regressor with an artificial neural network estimating aleatoric uncertainty

    We suggest a general approach to the quantification of different forms of aleatoric uncertainty in regression tasks performed by artificial neural networks. It is based on the simultaneous training of two neural networks with a joint loss function and a specific hyperparameter λ > 0 that allows for automatically detecting noisy and clean regions in the input space and controlling their relative contribution to the loss and its gradients. After the model has been trained, one of the networks performs predictions and the other quantifies the uncertainty of these predictions by estimating the locally averaged loss of the first one. Unlike in many classical uncertainty quantification methods, we do not assume any a priori knowledge of the ground truth probability distribution, nor do we, in general, maximize the likelihood of a chosen parametric family of distributions. We analyze the learning process and the influence of clean and noisy regions of the input space on the loss surface, depending on λ. In particular, we show that small values of λ increase the relative contribution of clean regions to the loss and its gradients. This explains why choosing small λ allows for better predictions compared with neural networks without uncertainty counterparts and those based on classical likelihood maximization. Finally, we demonstrate that one can naturally form ensembles of pairs of our networks and thus capture both aleatoric and epistemic uncertainty and avoid overfitting. © 2019 Elsevier B.V.
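
    As a hedged illustration of the two-network scheme (assumed PyTorch, synthetic data): a predictor f is trained on a regression task while a second network g regresses onto f's detached per-sample squared error, i.e. an estimate of f's locally averaged loss, and a hyperparameter lam re-weights f's loss so that smaller values concentrate the gradients on regions g deems clean. The weighting below is only one plausible instantiation; the paper's exact joint loss function may differ.

import torch
import torch.nn as nn

def make_mlp(out_activation=None):
    layers = [nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1)]
    if out_activation is not None:
        layers.append(out_activation)
    return nn.Sequential(*layers)

f = make_mlp()               # predictor network
g = make_mlp(nn.Softplus())  # uncertainty network with a positive output
lam = 0.5                    # hyperparameter lambda (illustrative value)
opt = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=1e-3)

# Synthetic 1D regression data with input-dependent (aleatoric) noise:
# the region x > 0 is "noisy", the region x <= 0 is "clean".
x = torch.linspace(-3, 3, 512).unsqueeze(1)
y = torch.sin(x) + torch.randn_like(x) * (0.05 + 0.5 * (x > 0))

for step in range(2000):
    opt.zero_grad()
    residual_sq = (f(x) - y) ** 2                        # per-sample loss of the predictor
    g_out = g(x)
    loss_g = ((g_out - residual_sq.detach()) ** 2).mean()  # g estimates f's local loss
    # Illustrative re-weighting: smaller lam pushes the weight of noisy regions toward zero,
    # so clean regions dominate f's gradients (qualitatively matching the abstract).
    weights = torch.exp(-g_out.detach() / lam)
    loss_f = (weights * residual_sq).mean()
    (loss_f + loss_g).backward()
    opt.step()

# After training: f(x_new) is the prediction, g(x_new) its estimated aleatoric uncertainty.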