    Benchmarking Scalable Epistemic Uncertainty Quantification in Organ Segmentation

    Deep learning based methods for automatic organ segmentation have shown promise in aiding diagnosis and treatment planning. However, quantifying and understanding the uncertainty associated with model predictions is crucial in critical clinical applications. While many techniques have been proposed for epistemic or model-based uncertainty estimation, it is unclear which method is preferred in the medical image analysis setting. This paper presents a comprehensive benchmarking study that evaluates epistemic uncertainty quantification methods in organ segmentation in terms of accuracy, uncertainty calibration, and scalability. We provide a comprehensive discussion of the strengths, weaknesses, and out-of-distribution detection capabilities of each method as well as recommendations for future improvements. These findings contribute to the development of reliable and robust models that yield accurate segmentations while effectively quantifying epistemic uncertainty. Comment: Accepted to the UNSURE Workshop held in conjunction with MICCAI 202
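The epistemic scores benchmarked in work like this are typically computed from several stochastic forward passes (e.g. MC dropout or ensemble members). A minimal sketch of one common score, the mutual information between predictions and model parameters, computed from per-pass softmax maps; the function name and shapes are illustrative, not the paper's API:

```python
import numpy as np

def epistemic_uncertainty(prob_samples):
    """Approximate epistemic uncertainty from T stochastic forward passes.

    prob_samples: array of shape (T, H, W, C) -- softmax maps from T
    dropout passes or ensemble members.
    Returns the mean prediction and the mutual information (total
    predictive entropy minus mean per-pass entropy), whose epistemic
    part vanishes when all passes agree.
    """
    eps = 1e-12
    mean_p = prob_samples.mean(axis=0)                              # (H, W, C)
    entropy_of_mean = -(mean_p * np.log(mean_p + eps)).sum(-1)      # total
    mean_entropy = -(prob_samples
                     * np.log(prob_samples + eps)).sum(-1).mean(0)  # aleatoric
    mutual_info = entropy_of_mean - mean_entropy                    # epistemic
    return mean_p, mutual_info
```

When every pass returns the same distribution the mutual information is zero, which is why calibration studies inspect it on out-of-distribution organs where passes should disagree.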

    A Deeper Look into Aleatoric and Epistemic Uncertainty Disentanglement

    Neural networks are ubiquitous in many tasks, but trusting their predictions is an open issue. Uncertainty quantification is required for many applications, and disentangled aleatoric and epistemic uncertainties are preferable. In this paper, we generalize methods that produce disentangled uncertainties to work with different uncertainty quantification techniques, and evaluate their capability to produce disentangled uncertainties. Our results show that: (i) there is an interaction between learning aleatoric and epistemic uncertainty, which is unexpected and violates assumptions on aleatoric uncertainty; (ii) some methods, like Flipout, produce zero epistemic uncertainty; (iii) aleatoric uncertainty is unreliable in the out-of-distribution setting; and (iv) Ensembles provide overall the best disentangling quality. We also explore the error produced by the number-of-samples hyper-parameter in the sampling softmax function, recommending N > 100 samples. We expect that our formulation and results help practitioners and researchers choose uncertainty methods and expand the use of disentangled uncertainties, as well as motivate additional research into this topic. Comment: 8 pages, 12 figures, with supplementary. LatinX in CV Workshop @ CVPR 2022 Camera Ready
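The sampling softmax the abstract refers to marginalizes a Gaussian distribution over logits by Monte Carlo sampling, which is where the N > 100 recommendation applies. A hedged sketch of that idea (function and argument names are assumptions, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def sampling_softmax(mu, log_var, n_samples=200):
    """Monte Carlo softmax over a Gaussian logit distribution.

    mu, log_var: predicted per-class logit mean and log-variance, shape (C,).
    With too few samples the estimated class probabilities are noisy,
    hence the abstract's recommendation of N > 100.
    """
    std = np.exp(0.5 * log_var)
    logits = mu + rng.standard_normal((n_samples, mu.shape[0])) * std  # (N, C)
    z = logits - logits.max(axis=1, keepdims=True)   # stabilized softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.mean(axis=0)  # marginal predictive distribution
```

With near-zero logit variance this reduces to an ordinary softmax of the mean logits; the spread of the per-sample probabilities is one handle on the aleatoric component.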

    Concrete Dropout

    Dropout is used as a practical tool to obtain uncertainty estimates in large vision models and reinforcement learning (RL) tasks. But to obtain well-calibrated uncertainty estimates, a grid-search over the dropout probabilities is necessary - a prohibitive operation with large models, and an impossible one with RL. We propose a new dropout variant which gives improved performance and better calibrated uncertainties. Relying on recent developments in Bayesian deep learning, we use a continuous relaxation of dropout's discrete masks. Together with a principled optimisation objective, this allows for automatic tuning of the dropout probability in large models, and as a result faster experimentation cycles. In RL this allows the agent to adapt its uncertainty dynamically as more data is observed. We analyse the proposed variant extensively on a range of tasks, and give insights into common practice in the field, where larger dropout probabilities are often used in deeper model layers.
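The continuous relaxation mentioned here replaces the hard Bernoulli drop decision with a smooth sample (a Concrete/Gumbel-style relaxation), so the drop probability becomes differentiable and can be learned instead of grid-searched. A minimal sketch of such a relaxed mask, under assumed names and a fixed temperature, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def concrete_dropout_mask(p, shape, temperature=0.1):
    """Continuous relaxation of dropout's discrete Bernoulli mask.

    Samples a soft "drop" indicator z in (0, 1) whose distribution
    concentrates on {0, 1} as temperature -> 0, with P(drop) -> p.
    Because z is a smooth function of p, gradients can flow to p.
    """
    eps = 1e-7
    u = rng.uniform(eps, 1 - eps, size=shape)
    logit = (np.log(p + eps) - np.log(1 - p + eps)
             + np.log(u) - np.log(1 - u))
    z = 1.0 / (1.0 + np.exp(-logit / temperature))  # soft drop indicator
    return (1.0 - z) / (1.0 - p)  # keep-mask, rescaled as in inverted dropout
```

In a real layer this mask would multiply the activations, and p would be a trained parameter regularized by the paper's optimisation objective rather than a constant.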

    Classical Knowledge for Quantum Security

    We propose a decision procedure for analysing security of quantum cryptographic protocols, combining a classical algebraic rewrite system for knowledge with an operational semantics for quantum distributed computing. As a test case, we use our procedure to reason about security properties of a recently developed quantum secret sharing protocol that uses graph states. We analyze three different scenarios based on the safety assumptions of the classical and quantum channels and discover the path of an attack in the presence of an adversary. The epistemic analysis that leads to this and similar types of attacks is purely based on our classical notion of knowledge. Comment: extended abstract, 13 pages

    Physics-constrained Random Forests for Turbulence Model Uncertainty Estimation

    To achieve virtual certification for industrial design, quantifying the uncertainties in simulation-driven processes is crucial. We discuss a physics-constrained approach to account for the epistemic uncertainty of turbulence models. To eliminate user input, we incorporate a data-driven machine learning strategy. In addition, our study focuses on developing an a priori estimate of prediction confidence when accurate data is scarce. Comment: Workshop on Synergy of Scientific and Machine Learning Modeling, SynS & ML ICM

    Multifidelity Uncertainty Quantification of a Commercial Supersonic Transport

    The objective of this work was to develop a multifidelity uncertainty quantification approach for efficient analysis of a commercial supersonic transport. An approach based on non-intrusive polynomial chaos was formulated in which a low-fidelity model could be corrected by any number of high-fidelity models. The formulation and methodology also allow for the addition of uncertainty sources not present in the lower-fidelity models. To demonstrate the applicability of the multifidelity polynomial chaos approach, two model problems were explored. The first was a supersonic airfoil with three levels of modeling fidelity, each capturing an additional level of physics. The second problem was a commercial supersonic transport. This model had three levels of fidelity that included two different modeling approaches and the addition of physics between the fidelity levels. Both problems illustrate the applicability and significant computational savings of the multifidelity polynomial chaos method.
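The core mechanism described above, fitting a polynomial chaos expansion non-intrusively and correcting the low-fidelity expansion with a discrepancy expansion fit from fewer high-fidelity samples, can be sketched in one dimension. This is a simplified illustration under assumed names, not the study's actual formulation:

```python
import numpy as np
from numpy.polynomial import hermite_e as H

rng = np.random.default_rng(2)

def fit_pce(f, degree=4, n_samples=200):
    """Non-intrusive polynomial chaos: regress f(xi) onto probabilists'
    Hermite polynomials of a standard normal input xi (1-D sketch).
    coeffs[0] is the mean of f under xi ~ N(0, 1)."""
    xi = rng.standard_normal(n_samples)
    # Vandermonde-like matrix: column k holds He_k(xi)
    V = np.stack([H.hermeval(xi, np.eye(degree + 1)[k])
                  for k in range(degree + 1)], axis=1)
    coeffs, *_ = np.linalg.lstsq(V, f(xi), rcond=None)
    return coeffs

def multifidelity_pce(f_lo, f_hi, degree=4):
    """Additive multifidelity correction: expand the cheap model with
    many samples, then expand the discrepancy f_hi - f_lo with far
    fewer (expensive) high-fidelity samples and add the coefficients."""
    c_lo = fit_pce(f_lo, degree)
    c_corr = fit_pce(lambda x: f_hi(x) - f_lo(x), degree, n_samples=40)
    return c_lo + c_corr
```

The computational savings come from spending most evaluations on the low-fidelity model; only the (usually smoother) discrepancy needs high-fidelity samples.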