Equivalent Error Bars for Neural Network Classifiers Trained by Bayesian Inference

Abstract

The topic of this paper is the problem of outlier detection for neural networks trained by Bayesian inference. I will show that marginalization is not a good method to obtain moderated probabilities for classes in outlying regions. The reason why marginalization fails to indicate outliers is analysed, and an alternative measure that is a more reliable indicator of outliers is proposed. A simple artificial classification problem is used to visualize the differences. Finally, both methods are used to classify a real-world problem where outlier detection is mandatory.

1 Introduction

Neural networks are often used in safety-critical applications for regression or classification purposes. Since neural networks are unable to extrapolate into regions not covered by the training data (see [6]), one should not use their predictions in such regions. Consequently, methods for outlier detection have attracted considerable attention. Outliers may be detected by assigning a confidence measure to network decisions. ..
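The following minimal sketch (Python, not taken from the paper) illustrates the two kinds of quantities contrasted above under the usual Monte Carlo approximation of the Bayesian predictive distribution: the marginalized ("moderated") class probability obtained by averaging softmax outputs over posterior samples, and a simple error-bar-style confidence measure given by the spread of the sampled probabilities for the winning class. The array `probs`, the function names, and the toy samples are illustrative assumptions; the spread computed here is only a stand-in for the paper's equivalent error bars.

```python
import numpy as np

# `probs` is assumed to hold Monte Carlo samples of a classifier's softmax
# outputs for a single input x, one row per network drawn from the Bayesian
# posterior over weights (shape: n_samples x n_classes).

def marginalized_probability(probs):
    """Moderated output: class probabilities averaged over posterior samples."""
    return probs.mean(axis=0)

def error_bar_measure(probs):
    """Spread of the sampled probabilities for the winning class.
    A large spread means the sampled networks disagree about x, which is
    used here as a simple stand-in for an error-bar-based outlier indicator."""
    winner = probs.mean(axis=0).argmax()
    return probs[:, winner].std()

# Made-up samples: posterior agreement vs. disagreement on a 2-class problem.
agreeing = np.array([[0.95, 0.05]] * 100)                            # networks agree
disagreeing = np.vstack([[[0.99, 0.01]] * 50, [[0.01, 0.99]] * 50])  # networks disagree

print(marginalized_probability(agreeing), error_bar_measure(agreeing))        # ~[0.95 0.05], ~0.0
print(marginalized_probability(disagreeing), error_bar_measure(disagreeing))  # ~[0.5 0.5], ~0.49
```

In the disagreement case the marginalized probability is moderated towards 0.5, while the spread of the sampled probabilities grows large; the paper's argument concerns which of these two signals is the more reliable indicator of outlying inputs.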
