18 research outputs found

    The Unreasonable Effectiveness of Deep Evidential Regression

    Full text link
    There is a significant need for principled uncertainty reasoning in machine learning systems as they are increasingly deployed in safety-critical domains. A new approach to uncertainty-aware regression with neural networks (NNs), based on learning evidential distributions for aleatoric and epistemic uncertainties, shows promise over traditional deterministic methods and typical Bayesian NNs, notably through its ability to disentangle aleatoric and epistemic uncertainties. Despite the empirical success of Deep Evidential Regression (DER), there are important gaps in its mathematical foundation that raise the question of why the proposed technique seemingly works. We detail these theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification. We go on to propose corrections and redefinitions of how aleatoric and epistemic uncertainties should be extracted from NNs. Comment: 11 pages, 25 figures
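
    The uncertainty extraction the abstract refers to has a simple closed form: a DER network predicts the parameters (gamma, nu, alpha, beta) of a Normal-Inverse-Gamma distribution, from which aleatoric and epistemic uncertainty follow directly. A minimal sketch of those standard formulas (the parameter values in the example are illustrative only):

```python
import numpy as np

def evidential_uncertainties(gamma, nu, alpha, beta):
    """Closed-form uncertainties from the Normal-Inverse-Gamma parameters
    (gamma, nu, alpha, beta) predicted by a DER network.

    gamma is the predicted mean and does not enter the variances;
    alpha > 1 is required for the expectations below to be finite.
    """
    aleatoric = beta / (alpha - 1.0)          # E[sigma^2], irreducible noise
    epistemic = beta / (nu * (alpha - 1.0))   # Var[mu], model uncertainty
    return aleatoric, epistemic

# More evidence (larger nu) shrinks epistemic but not aleatoric uncertainty.
al, ep = evidential_uncertainties(gamma=0.0, nu=2.0, alpha=3.0, beta=4.0)
```

    Note how the two quantities share the numerator beta / (alpha - 1): this coupling is one of the foundational issues the paper analyzes.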

    Out-of-distribution detection in satellite image classification

    Get PDF
    In satellite image analysis, distributional mismatch between the training and test data may arise for several reasons, including unseen classes in the test data and differences in the geographic area. Deep learning based models may behave in an unexpected manner when subjected to test data that has such distributional shifts from the training data, also called out-of-distribution (OOD) examples. Predictive uncertainty analysis is an emerging research topic which has not been explored much in the context of satellite image analysis. Towards this, we adopt a Dirichlet Prior Network based model to quantify distributional uncertainty of deep learning models for remote sensing. The approach seeks to maximize the representation gap between the in-domain and OOD examples for a better identification of unknown examples at test time. Experimental results on three exemplary test scenarios show the efficacy of the model in satellite image analysis

    Towards Out-of-Distribution Detection for Remote Sensing

    Get PDF
    In remote sensing, distributional mismatch between the training and test data may arise for several reasons, including unseen classes in the test data, differences in the geographic area, and multi-sensor differences. Deep learning based models may behave in unexpected ways when subjected to test data that has such distributional shifts from the training data, also called out-of-distribution (OOD) examples. Vulnerability to OOD data severely reduces the reliability of deep learning based models. In this work, we address this issue by proposing a model to quantify distributional uncertainty of deep learning based remote sensing models. In particular, we adopt a Dirichlet Prior Network for remote sensing data. The approach seeks to maximize the representation gap between the in-domain and OOD examples for a better identification of unknown examples at test time. Experimental results on three exemplary test scenarios show that the proposed model can detect OOD images in remote sensing
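
    A Dirichlet Prior Network outputs concentration parameters of a Dirichlet over class probabilities, and its distributional uncertainty is commonly scored as the mutual information between the label and the categorical distribution. A sketch of that standard score (the example alpha values are illustrative, not from the paper):

```python
import numpy as np
from scipy.special import digamma

def dirichlet_mutual_information(alphas):
    """Distributional uncertainty of a Dirichlet Prior Network output:
    mutual information between the predicted label and the categorical
    distribution. It is near zero for confident in-domain predictions
    and grows for flat, low-precision Dirichlets typical of OOD inputs."""
    alphas = np.asarray(alphas, dtype=float)
    a0 = alphas.sum()                 # precision (total evidence)
    p = alphas / a0                   # expected class probabilities
    per_class = p * (np.log(p) - digamma(alphas + 1.0) + digamma(a0 + 1.0))
    return float(-per_class.sum())

# Sharp in-domain Dirichlet vs. flat OOD-like Dirichlet over 3 classes.
mi_in = dirichlet_mutual_information([50.0, 1.0, 1.0])
mi_ood = dirichlet_mutual_information([1.0, 1.0, 1.0])
```

    Thresholding such a score at test time is one way to flag the unknown examples the abstract describes.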

    An Advanced Dirichlet Prior Network for Out-of-distribution Detection in Remote Sensing

    Get PDF
    This article introduces a compressive sensing (CS)-based approach for increasing bistatic synthetic aperture radar (SAR) imaging quality in the context of a multiaperture acquisition. The analyzed data were recorded over an opportunistic bistatic setup comprising a stationary ground-based receiver (the C-band opportunistic bistatic SAR differential interferometry system, COBIS) and a Sentinel-1 C-band transmitter. Since the terrain observation by progressive scans (TOPS) mode is operated, the receiver can record synchronization pulses and echoed signals from the scene during many apertures. Hence, it is possible to improve the azimuth resolution by exploiting the multiaperture data. The recorded data are not contiguous, and a naive integration of the chopped azimuth phase history would generate undesired grating lobes. The proposed processing scheme exploits the natural sparsity characterizing the illuminated scene. Greedy, convex, and nonconvex CS solvers are analyzed for the recovery of azimuth profiles. The sparsifying basis/dictionary is constructed using a synthetically generated azimuth chirp derived from Sentinel-1 orbital parameters and the COBIS position. The chirp-based CS performance is further contrasted with a Fourier-based CS method and an autoregressive model for signal reconstruction in terms of scene extent limitations and phase restoration efficiency. Furthermore, the analysis of different receiver-looking scenarios led to the insertion of a direct and an inverse Keystone transform into the processing chain for range cell migration (RCM) correction, to cope with squinted geometries. We provide an extensive set of simulated and real-world results that prove the proposed workflow is efficient both in improving the azimuth resolution and in mitigating the sidelobes
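
    The "greedy" family of CS solvers the abstract mentions is typified by Orthogonal Matching Pursuit. A simplified sketch of OMP, with a random Gaussian sensing matrix standing in for the paper's chirp-based dictionary (the data and dimensions are invented for illustration):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x with y ~ A x."""
    m, n = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        # Select the dictionary atom most correlated with the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit all selected atoms jointly by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        residual = y - A @ x
    return x

# Recover a 2-sparse azimuth-like profile from 20 measurements of length 40.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40)) / np.sqrt(20)
x_true = np.zeros(40)
x_true[[5, 17]] = [3.0, -2.0]
x_hat = omp(A, A @ x_true, k=2)
```

    In the paper's setting the columns of A would instead be shifted copies of the synthetic azimuth chirp, so that sparse recovery fills the gaps between apertures without grating lobes.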

    Explaining the Effects of Clouds on Remote Sensing Scene Classification

    Get PDF
    Most of Earth is covered by haze or clouds, impeding the constant monitoring of our planet. Preceding works have documented the detrimental effects of cloud coverage on remote sensing applications and proposed ways to approach this issue. However, up to now, little effort has been spent on understanding how exactly atmospheric disturbances impede the application of modern machine learning methods to Earth observation data. Specifically, we consider the effects of haze and cloud coverage on a scene classification task. We provide a thorough investigation of how classifiers trained on cloud-free data fail once they encounter noisy imagery—a common scenario encountered when deploying pretrained models for remote sensing to real use cases. We show how and why remote sensing scene classification suffers from cloud coverage. Based on a multistage analysis, including explainability approaches applied to the predictions, we work out four different types of effects that clouds have on scene prediction. The contribution of our work is to deepen the understanding of the effects of clouds on common remote sensing applications and consequently guide the development of more robust methods

    A survey of uncertainty in deep neural networks

    Get PDF
    Over the last decade, neural networks have reached almost every field of science and become a crucial part of various real-world applications. Due to this increasing spread, confidence in neural network predictions has become more and more important. However, basic neural networks do not deliver certainty estimates, or suffer from over- or under-confidence, i.e., are badly calibrated. To overcome this, many researchers have been working on understanding and quantifying uncertainty in a neural network's prediction. As a result, different types and sources of uncertainty have been identified, and various approaches to measure and quantify uncertainty in neural networks have been proposed. This work gives a comprehensive overview of uncertainty estimation in neural networks, reviews recent advances in the field, highlights current challenges, and identifies potential research opportunities. It is intended to give anyone interested in uncertainty estimation in neural networks a broad overview and introduction, without presupposing prior knowledge in this field. For that, a comprehensive introduction to the most crucial sources of uncertainty is given, and their separation into reducible model uncertainty and irreducible data uncertainty is presented. The modeling of these uncertainties based on deterministic neural networks, Bayesian neural networks (BNNs), ensembles of neural networks, and test-time data augmentation approaches is introduced, and different branches of these fields as well as the latest developments are discussed. For practical application, we discuss different measures of uncertainty, approaches for calibrating neural networks, and give an overview of existing baselines and available implementations. Different examples from the wide spectrum of challenges in the fields of medical image analysis, robotics, and Earth observation give an idea of the needs and challenges regarding uncertainties in the practical applications of neural networks. Additionally, the practical limitations of uncertainty quantification methods in neural networks for mission- and safety-critical real-world applications are discussed, and an outlook on the next steps towards a broader usage of such methods is given
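
    The separation into reducible model uncertainty and irreducible data uncertainty that the survey presents has a standard information-theoretic form for ensembles: total predictive entropy splits into expected per-member entropy (data) plus mutual information (model). A minimal sketch with made-up two-class predictions:

```python
import numpy as np

def entropy(p, axis=-1):
    """Shannon entropy in nats along the class axis."""
    return -np.sum(p * np.log(np.clip(p, 1e-12, 1.0)), axis=axis)

def uncertainty_decomposition(probs):
    """Decompose the predictive uncertainty of an ensemble.

    probs has shape (members, classes). Returns (total, data, model) where
    model = total - data is the mutual information, i.e. the reducible part.
    """
    total = entropy(probs.mean(axis=0))    # entropy of the averaged prediction
    data = entropy(probs, axis=-1).mean()  # average per-member entropy
    model = total - data                   # disagreement between members
    return total, data, model

# Agreeing members -> model uncertainty ~ 0; disagreeing members -> > 0.
agree = np.array([[0.9, 0.1], [0.9, 0.1]])
disagree = np.array([[0.9, 0.1], [0.1, 0.9]])
total_a, data_a, model_a = uncertainty_decomposition(agree)
total_d, data_d, model_d = uncertainty_decomposition(disagree)
```

    The same decomposition applies to BNN posterior samples or test-time augmentations in place of ensemble members.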

    Uncertainties in neural networks

    No full text
    Content: German Aerospace Center (DLR); Neural Networks; Uncertainties in neural networks; Structuring Uncertainty in Stochastic Segmentation Networks; Interpretation as a Factor Model; Rotation on Flow-Probabilities; Factor-Wise Prediction Manipulation

    ESTIMATING UNCERTAINTY OF DEEP LEARNING MULTI-LABEL CLASSIFICATIONS USING LAPLACE APPROXIMATION

    No full text
    Deep learning methods have become valuable tools in remote sensing for tasks like aerial scene classification or land cover analysis. Since such data are noisy and highly variable, the need for reliable confidence statements becomes apparent. While deep learning models are known to yield overconfident predictions, quantifying the model uncertainty of those classifiers can help mitigate that effect. Although uncertainty estimation methods for multi-class classification have been published, multi-label classification - the task of labelling data with multiple class labels simultaneously - has hardly been considered yet. In this study, we use multi-label Laplace Approximation to estimate the model uncertainty of deep multi-label classifiers and show how this method can improve calibration and out-of-distribution detection in the remote sensing domain
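
    In the multi-label setting each label gets its own sigmoid, and a Laplace approximation yields a Gaussian over the logits; the calibration improvement comes from integrating the sigmoid over that Gaussian, commonly via the probit approximation. A sketch of that last step (the logit means and variances are illustrative stand-ins, not values from the study):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def calibrated_multilabel_probs(logit_mean, logit_var):
    """Per-label predictive probabilities under a Gaussian (Laplace)
    posterior over the logits, z ~ N(mu, var), using the standard probit
    approximation to the intractable integral E[sigmoid(z)]."""
    kappa = 1.0 / np.sqrt(1.0 + np.pi * logit_var / 8.0)
    return sigmoid(kappa * logit_mean)

# Zero logit variance recovers the MAP probabilities; growing variance
# pulls every label's probability towards 0.5, tempering overconfidence.
mu = np.array([2.0, -1.0])
p_map = calibrated_multilabel_probs(mu, np.zeros(2))
p_lap = calibrated_multilabel_probs(mu, np.full(2, 4.0))
```

    The logit variances themselves would come from the curvature (Hessian) of the loss at the trained weights, which is what the Laplace Approximation estimates.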