940 research outputs found

    Uncertainty-aware remaining useful life prediction for predictive maintenance using deep learning

    Reliably predicting Remaining Useful Life (RUL) is crucial for reducing asset maintenance costs. Deep learning emerges as a powerful data-driven method capable of predicting RUL based on historical operating data. However, standard deep learning tools typically do not account for the uncertainty inherent in prediction tasks. This paper presents an uncertainty-aware approach that predicts not only the RUL but also the associated confidence interval, capturing both aleatoric and epistemic uncertainty. The proposed approach is evaluated on publicly available datasets of aircraft turbofan engines, showing its ability to estimate RUL accurately and to produce well-calibrated uncertainties that are robust to out-of-distribution data.
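
    A minimal sketch of the general recipe described in the abstract, assuming a PyTorch-style regression model: a heteroscedastic head predicts the RUL mean and an aleatoric log-variance, while Monte Carlo dropout at inference supplies epistemic uncertainty. The architecture and names are illustrative, not the paper's exact model.

```python
# Illustrative sketch: heteroscedastic RUL regression + MC dropout (not the paper's exact model).
import torch
import torch.nn as nn

class RULNet(nn.Module):
    def __init__(self, n_features, hidden=64, p_drop=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
        )
        self.mean = nn.Linear(hidden, 1)      # predicted RUL
        self.log_var = nn.Linear(hidden, 1)   # aleatoric variance (log scale)

    def forward(self, x):
        h = self.body(x)
        return self.mean(h), self.log_var(h)

def gaussian_nll(mean, log_var, target):
    # Gaussian negative log-likelihood: penalises both error and over-/under-confidence.
    return (0.5 * (log_var + (target - mean) ** 2 / log_var.exp())).mean()

@torch.no_grad()
def predict_with_uncertainty(model, x, n_samples=30):
    model.train()  # keep dropout active for Monte Carlo sampling
    means, ale_vars = [], []
    for _ in range(n_samples):
        m, lv = model(x)
        means.append(m)
        ale_vars.append(lv.exp())
    means = torch.stack(means)
    epistemic = means.var(dim=0)               # spread across dropout samples
    aleatoric = torch.stack(ale_vars).mean(0)  # average predicted noise variance
    return means.mean(0), aleatoric + epistemic
```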

    Reliable Multimodal Trajectory Prediction via Error Aligned Uncertainty Optimization

    Reliable uncertainty quantification in deep neural networks is crucial in safety-critical applications such as automated driving for trustworthy and informed decision-making. Assessing the quality of uncertainty estimates is challenging because ground truth for uncertainty estimates is not available. Ideally, in a well-calibrated model, uncertainty estimates should perfectly correlate with model error. We propose a novel error-aligned uncertainty optimization method and introduce a trainable loss function that guides models to yield good-quality uncertainty estimates aligned with the model error. Our approach targets continuous structured prediction and regression tasks, and is evaluated on multiple datasets including a large-scale vehicle motion prediction task involving real-world distributional shifts. We demonstrate that our method improves average displacement error by 1.69% and 4.69%, and the uncertainty correlation with model error by 17.22% and 19.13% as quantified by the Pearson correlation coefficient on two state-of-the-art baselines. Comment: Accepted to ECCV 2022 workshop - Safe Artificial Intelligence for Automated Driving
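
    A hedged sketch of the core idea: add a trainable penalty that pulls the predicted uncertainty towards the observed error magnitude, so that uncertainty and error become aligned. This is an illustrative stand-in for the paper's error-aligned objective, not its exact formulation; all names are hypothetical.

```python
# Illustrative error-aligned loss: Gaussian NLL plus a term aligning predicted std with error.
import torch

def error_aligned_loss(pred, log_var, target, lam=0.5):
    error = (target - pred).abs()
    sigma = (0.5 * log_var).exp()              # predicted standard deviation
    nll = 0.5 * (log_var + (target - pred) ** 2 / log_var.exp())
    # Detach the error so this term only trains the uncertainty head, not the mean.
    alignment = (sigma - error.detach()) ** 2
    return (nll + lam * alignment).mean()
```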

    Using mixup as a regularizer can surprisingly improve accuracy and out-of-distribution robustness

    We show that the effectiveness of the well-celebrated Mixup can be further improved if, instead of using it as the sole learning objective, it is utilized as an additional regularizer to the standard cross-entropy loss. This simple change not only improves accuracy but also significantly improves the quality of Mixup's predictive uncertainty estimation in most cases under various forms of covariate shift and out-of-distribution detection experiments. In fact, we observe that Mixup otherwise yields much degraded performance on detecting out-of-distribution samples, possibly because, as we show empirically, it tends to learn models that exhibit high entropy throughout, making it difficult to differentiate in-distribution samples from out-of-distribution ones. To show the efficacy of our approach (RegMixup), we provide thorough analyses and experiments on vision datasets (ImageNet & CIFAR-10/100) and compare it with a suite of recent approaches for reliable uncertainty estimation.
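
    A minimal sketch of the loss structure the abstract describes: standard cross-entropy on the clean batch plus a mixup cross-entropy term used as a regularizer. Hyperparameter values and helper names here are illustrative assumptions, not the paper's settings.

```python
# Illustrative RegMixup-style objective: clean CE + weighted mixup CE (hyperparameters are assumptions).
import torch
import torch.nn.functional as F

def regmixup_loss(model, x, y, alpha=10.0, eta=1.0):
    # Standard cross-entropy on the clean batch.
    clean_loss = F.cross_entropy(model(x), y)

    # Mixup term: interpolate inputs within the batch and mix the targets accordingly.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0), device=x.device)
    x_mix = lam * x + (1.0 - lam) * x[perm]
    logits_mix = model(x_mix)
    mix_loss = lam * F.cross_entropy(logits_mix, y) + (1.0 - lam) * F.cross_entropy(logits_mix, y[perm])

    return clean_loss + eta * mix_loss
```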

    U-CE: Uncertainty-aware Cross-Entropy for Semantic Segmentation

    Deep neural networks have shown exceptional performance in various tasks, but their lack of robustness, reliability, and tendency to be overconfident pose challenges for their deployment in safety-critical applications like autonomous driving. In this regard, quantifying the uncertainty inherent to a model's prediction is a promising endeavour to address these shortcomings. In this work, we present a novel Uncertainty-aware Cross-Entropy loss (U-CE) that incorporates dynamic predictive uncertainties into the training process by pixel-wise weighting of the well-known cross-entropy loss (CE). Through extensive experimentation, we demonstrate the superiority of U-CE over regular CE training on two benchmark datasets, Cityscapes and ACDC, using two common backbone architectures, ResNet-18 and ResNet-101. With U-CE, we manage to train models that not only improve their segmentation performance but also provide meaningful uncertainties after training. Consequently, we contribute to the development of more robust and reliable segmentation models, ultimately advancing the state-of-the-art in safety-critical applications and beyond.
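
    A hedged sketch of the general mechanism described above: per-pixel cross-entropy weighted by a per-pixel uncertainty map, here taken as the predictive entropy of Monte Carlo dropout samples. The exact weighting scheme in U-CE may differ; this only illustrates pixel-wise uncertainty weighting, and all names are hypothetical.

```python
# Illustrative uncertainty-weighted cross-entropy for segmentation (not U-CE's exact weighting).
import torch
import torch.nn.functional as F

def predictive_entropy(mc_logits):
    # mc_logits: (samples, batch, classes, H, W) from stochastic forward passes.
    probs = mc_logits.softmax(dim=2).mean(dim=0)               # mean predictive distribution
    return -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)   # entropy map: (batch, H, W)

def uncertainty_weighted_ce(logits, target, uncertainty):
    # logits: (batch, classes, H, W); target: (batch, H, W); uncertainty: (batch, H, W)
    ce = F.cross_entropy(logits, target, reduction="none")     # per-pixel cross-entropy
    weights = 1.0 + uncertainty.detach()                       # up-weight uncertain pixels
    return (weights * ce).mean()
```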