
    Uncertainty quantification methods for neural networks pattern recognition

    On-line monitoring techniques have attracted increasing attention as a promising strategy for improving safety, maintaining availability and reducing the cost of operation and maintenance. In particular, pattern recognition tools such as artificial neural networks are today widely adopted for sensor validation, plant component monitoring, system control, and fault diagnostics based on the data acquired during operation. However, classic artificial neural networks do not provide an error estimate alongside the model response, whose robustness therefore remains difficult to assess. Indeed, experimental data generally exhibit a time/space-varying behaviour and are hence characterized by an intrinsic level of uncertainty that unavoidably affects the performance of the tools adopted and undermines the accuracy of the analysis. For this reason, propagating the uncertainty and quantifying the so-called margins of uncertainty in the output are crucial for making risk-informed decisions. The current study presents a comparison between two different approaches to the quantification of uncertainty in artificial neural networks. The first technique is based on error estimation by a series association scheme; the second couples a Bayesian model selection technique and model averaging into a unified framework. The efficiency of these two approaches is analysed in terms of computational cost and predictive performance, through their application to a nuclear power plant fault diagnosis system.
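    The model-averaging idea in the second approach can be illustrated with a minimal sketch: predictions from several candidate networks are combined with posterior model weights, and the spread of the members around the weighted mean gives a margin of uncertainty. All names and numbers here are illustrative assumptions, not the paper's implementation; in particular, the random logits stand in for real network outputs and the uniform weights stand in for weights derived from model evidence.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Numerically stable softmax along the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical ensemble: logits from 5 candidate networks for one input,
# standing in for the models retained by Bayesian model selection.
member_logits = rng.normal(size=(5, 3))
member_probs = softmax(member_logits)              # shape (5, 3)

# Posterior model weights (uniform here; in practice from model evidence).
weights = np.full(5, 1 / 5)

# Bayesian model averaging: weighted mean of the member predictions.
bma_mean = np.einsum("m,mc->c", weights, member_probs)

# A simple margin of uncertainty: weighted spread around the averaged prediction.
bma_std = np.sqrt(np.einsum("m,mc->c", weights, (member_probs - bma_mean) ** 2))
```

The averaged vector stays a valid probability distribution, while `bma_std` flags classes on which the candidate models disagree.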

    Outlier Detection In Bayesian Neural Networks

    Exploring different ways of describing uncertainty in neural networks is of great interest. Artificial intelligence models can be used with greater confidence when solid methods exist for identifying and quantifying uncertainty. This is especially important in high-risk areas such as medical applications, autonomous vehicles, and financial systems. This thesis explores how to detect classification outliers in Bayesian Neural Networks. A few methods exist for quantifying uncertainty in Bayesian Neural Networks, such as computing the entropy of the prediction vector. Is there a more accurate and more general way of detecting classification outliers in Bayesian Neural Networks? If a sample is detected as an outlier, is there a way of distinguishing between different types of outliers? We try to answer these questions by using the pre-activation neuron values of a Bayesian Neural Network. We compare, in total, three different methods using simulated data, the Breast Cancer Wisconsin dataset and the MNIST dataset. The first method uses the well-researched Predictive Entropy, which acts as a baseline. The second method uses the pre-activation neuron values in the output layer of a Bayesian Neural Network, comparing the pre-activation neuron values of a given data sample with those of the training data. The third method is a combination of the first two. The results show that performance may depend on the dataset type. The proposed method outperforms the baseline on the simulated data. On the Breast Cancer Wisconsin dataset, the proposed method is significantly better than the baseline. Interestingly, on the MNIST dataset the baseline outperforms the proposed method in most scenarios. Across all three datasets, the combination of the two methods performs approximately as well as the better of the two.
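    The Predictive Entropy baseline mentioned above can be sketched in a few lines: average the class-probability vectors from repeated stochastic forward passes of the Bayesian network, then take the entropy of that mean vector. The toy probability arrays below are illustrative assumptions, not thesis data.

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Entropy of the Monte Carlo mean prediction vector.

    mc_probs: array of shape (samples, classes), e.g. softmax outputs
    from repeated stochastic forward passes of a Bayesian network.
    """
    mean_p = mc_probs.mean(axis=0)
    mean_p = np.clip(mean_p, 1e-12, 1.0)   # guard against log(0)
    return float(-(mean_p * np.log(mean_p)).sum())

# A confident prediction yields low entropy ...
confident = np.array([[0.98, 0.01, 0.01],
                      [0.96, 0.02, 0.02]])
# ... while an ambiguous one (a candidate outlier) yields high entropy.
ambiguous = np.array([[0.4, 0.3, 0.3],
                      [0.3, 0.4, 0.3]])
```

Thresholding this score gives the baseline outlier detector; the thesis's proposed method replaces the probabilities with pre-activation neuron values.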

    Development of generalized feed forward network for predicting annual flood (depth) of a tropical river

    The modeling of the rainfall-runoff relationship in a watershed is very important for designing hydraulic structures, controlling floods and managing storm water. Artificial Neural Networks (ANNs) are known for their ability to model nonlinear mechanisms. This study aimed at developing a Generalized Feed Forward (GFF) network model for predicting the annual flood (depth) of the Johor River in Peninsular Malaysia. In order to avoid over-training, cross-validation was performed to optimize the model; in addition, a predictive uncertainty index was used to guard against over-parameterization. The training algorithm was back-propagation with a momentum term, and the hyperbolic tangent was used as the transfer function for the hidden and output layers. The results showed that the optimum architecture was obtained with the hyperbolic tangent transfer function for both hidden and output layers. The Nash-Sutcliffe (NS) efficiency and root mean square error (RMSE) were 0.98 and 5.92, respectively, for the test period. Cross-validation showed that 9 processing elements in the hidden layer are adequate for optimum generalization, with an acceptable predictive uncertainty index of 0.14 for the test period.
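    The two evaluation metrics quoted above (NS and RMSE) have standard definitions and are easy to compute; the sketch below shows both on hypothetical flood-depth values, which are illustrative assumptions and not the Johor River data.

```python
import numpy as np

def rmse(obs, sim):
    """Root mean square error between observed and simulated values."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model
    does no better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((obs - sim) ** 2)
                     / np.sum((obs - obs.mean()) ** 2))

# Hypothetical annual flood depths (illustrative only).
observed  = np.array([10.2, 12.5, 9.8, 14.1, 11.0])
simulated = np.array([10.0, 12.9, 9.5, 13.8, 11.4])
```

An NS value near 1 together with a small RMSE, as reported for the test period, indicates a close fit between simulated and observed depths.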

    Uncertainty-Estimation with Normalized Logits for Out-of-Distribution Detection

    Out-of-distribution (OOD) detection is critical for preventing deep learning models from making incorrect predictions and for ensuring the safety of artificial intelligence systems. Especially in safety-critical applications such as medical diagnosis and autonomous driving, the cost of incorrect decisions is usually unbearable. However, neural networks often suffer from overconfidence, assigning high confidence to OOD data that are never seen during training and may be unrelated to the training data, namely the in-distribution (ID) data. Determining the reliability of a prediction thus remains a difficult and challenging task. In this work, we propose Uncertainty-Estimation with Normalized Logits (UE-NL), a robust learning method for OOD detection with three main benefits. (1) Neural networks with UE-NL treat every ID sample equally by predicting an uncertainty score for the input data; this uncertainty is incorporated into the softmax function to adjust the learning strength of easy and hard samples during training, making the model learn robustly and accurately. (2) UE-NL enforces a constant vector norm on the logits, decoupling the effect of the growing output norm from the optimization process, which is partly responsible for the overconfidence issue. (3) UE-NL provides a new metric, the magnitude of the uncertainty score, to detect OOD data. Experiments demonstrate that UE-NL achieves top performance on common OOD benchmarks and is more robust to noisy ID data that may be misjudged as OOD data by other methods.
    Comment: 7 pages, 1 figure, 7 tables, preprint
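    Benefit (2), the constant vector norm on the logits, can be sketched in isolation: rescale each logit vector to a fixed norm before the softmax, so that confidence no longer grows with the raw output magnitude. This is only an illustration of the norm constraint under assumed values (`norm=10.0`, the toy logits), not the full UE-NL method, which additionally feeds a learned uncertainty score into the softmax.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax along the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def normalized_logit_probs(logits, norm=10.0, eps=1e-12):
    """Project logits onto a sphere of constant norm before the softmax,
    decoupling confidence from the raw output magnitude."""
    logits = np.asarray(logits, float)
    scale = norm / (np.linalg.norm(logits, axis=-1, keepdims=True) + eps)
    return softmax(logits * scale)

raw = np.array([2.0, 1.0, 0.5])

# Scaling raw logits up inflates ordinary softmax confidence ...
p_small, p_big = softmax(raw), softmax(10 * raw)

# ... but with the norm constraint, confidence is invariant to that scaling.
q_small, q_big = normalized_logit_probs(raw), normalized_logit_probs(10 * raw)
```

Because only the direction of the logit vector survives the projection, optimization can no longer reduce the loss simply by inflating the output norm.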