11 research outputs found

    Set-Valued Prediction in Multi-Class Classification

    In cases of uncertainty, a multi-class classifier preferably returns a set of candidate classes instead of predicting a single class label with little guarantee. More precisely, the classifier should strive for an optimal balance between the correctness (the true class is among the candidates) and the precision (the candidates are not too many) of its prediction. We formalize this problem within a general decision-theoretic framework that unifies most of the existing work in this area. In this framework, uncertainty is quantified in terms of conditional class probabilities, and the quality of a predicted set is measured in terms of a utility function. We then address the problem of finding the Bayes-optimal prediction, i.e., the subset of class labels with the highest expected utility. For this problem, which is computationally challenging because there are exponentially many (in the number of classes) predictions to choose from, we propose efficient algorithms that can be applied to a broad family of utility functions. Our theoretical results are complemented by experimental studies, in which we analyze the proposed algorithms in terms of predictive accuracy and runtime efficiency.
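    The central computational object in this abstract is the Bayes-optimal set-valued prediction. The sketch below shows the brute-force baseline only; the utility u(Y, y) = g(|Y|) if y is in Y (and 0 otherwise) and the discount g(k) = 1/k are illustrative assumptions rather than the paper's specific utility family, and the exhaustive enumeration is exactly the exponential search the paper's efficient algorithms are designed to avoid.

```python
# Minimal sketch (not the paper's algorithm): brute-force search for the
# Bayes-optimal set-valued prediction under an assumed precision-style utility
# u(Y, y) = g(|Y|) if y in Y else 0.
from itertools import combinations

def expected_utility(subset, class_probs, g):
    """Expected utility of predicting `subset`, given conditional class
    probabilities and a size-discount function g(|Y|)."""
    return g(len(subset)) * sum(class_probs[c] for c in subset)

def bayes_optimal_set(class_probs, g):
    """Enumerate all non-empty subsets of classes (exponential in the
    number of classes) and return the one with highest expected utility."""
    classes = range(len(class_probs))
    best, best_u = None, float("-inf")
    for k in range(1, len(class_probs) + 1):
        for subset in combinations(classes, k):
            u = expected_utility(subset, class_probs, g)
            if u > best_u:
                best, best_u = subset, u
    return best, best_u

# Example: four classes, discount g(k) = 1/k (standard discounted accuracy).
probs = [0.45, 0.35, 0.15, 0.05]
print(bayes_optimal_set(probs, lambda k: 1.0 / k))
```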

    Predictive inference for system reliability after common-cause component failures

    This paper presents nonparametric predictive inference for system reliability following common-cause failures of components. It is assumed that a single failure event may lead to the simultaneous failure of multiple components. The data consist of frequencies of such events involving particular numbers of components, and are used to predict the number of components that will fail at the next failure event. The effect of the failure of one or more components on the system reliability is taken into account through the system's survival signature. The predictive performance of the approach, in which uncertainty is quantified using lower and upper probabilities, is analysed with the use of ROC curves. While this approach is presented for a basic scenario of a system consisting of only a single type of component and without consideration of failure behaviour over time, it provides many opportunities for more general modelling and inference; these are briefly discussed together with the related research challenges.
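    As an illustration of the kind of prediction involved, the sketch below computes lower and upper probabilities for the number of components failing at the next event from hypothetical event frequencies. The bounds use a simple imprecise-Dirichlet-style form n_j/(n+s) and (n_j+s)/(n+s) purely as a stand-in; they are not the nonparametric predictive inference formulas of the paper, and the data are invented for the example.

```python
# Illustrative sketch only: lower/upper predictive probabilities for the
# number of components involved in the next common-cause failure event.
# The imprecise-Dirichlet-style bounds below are a stand-in, NOT the paper's
# NPI formulas, and the event frequencies are hypothetical.
def predictive_bounds(freqs, s=1.0):
    """Lower/upper probabilities that the next failure event involves
    exactly j components, given observed counts freqs[j]."""
    n = sum(freqs.values())
    lower = {j: c / (n + s) for j, c in freqs.items()}
    upper = {j: (c + s) / (n + s) for j, c in freqs.items()}
    return lower, upper

# Hypothetical data: 12 events hit 1 component, 5 hit 2, 2 hit 3.
events = {1: 12, 2: 5, 3: 2}
lo, up = predictive_bounds(events)
for j in sorted(events):
    print(f"next event fails {j} component(s): [{lo[j]:.3f}, {up[j]:.3f}]")
```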

    Evaluating Credal Classifiers by Utility-Discounted Predictive Accuracy

    Predictions made by imprecise-probability models are often indeterminate (that is, set-valued). Measuring the quality of an indeterminate prediction by a single number is important in order to compare different models fairly, but a principled approach to this problem is currently missing. In this paper we derive, from a set of assumptions, a metric to evaluate the predictions of credal classifiers, which are supervised learning models that issue set-valued predictions. The metric turns out to consist of an objective component and another component related to the decision-maker's degree of risk aversion to the variability of predictions. We discuss when the measure can be rendered independent of such a degree, and provide insights as to how the comparison of classifiers based on the new measure changes with the number of predictions to be made. Finally, we conduct extensive empirical tests of credal, as well as precise, classifiers using the new metric. This demonstrates the practical usefulness of the metric, while yielding a first insightful and extensive comparison of credal classifiers.
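    For concreteness, the sketch below computes a utility-discounted accuracy of the kind discussed in this abstract, assuming the commonly used quadratic utilities u65 and u80 applied to the discounted accuracy 1/|Y| when the true class lies in the predicted set Y; the exact form and constants of the derived metric should be taken from the paper itself.

```python
# Minimal sketch: utility-discounted accuracy for set-valued predictions,
# assuming the commonly used quadratic utilities u65 and u80 applied to the
# discounted accuracy 1/|Y|; constants should be checked against the paper.
def u65(x):
    return -0.6 * x**2 + 1.6 * x

def u80(x):
    return -1.2 * x**2 + 2.2 * x

def utility_discounted_accuracy(predictions, truths, u):
    """Average u(1/|Y|) over instances whose true label lies in the
    predicted set Y, counting misses as 0."""
    total = 0.0
    for Y, y in zip(predictions, truths):
        if y in Y:
            total += u(1.0 / len(Y))
    return total / len(truths)

# Example: three test instances with set-valued predictions.
preds = [{0}, {0, 1}, {2, 3}]
labels = [0, 1, 1]
print(utility_discounted_accuracy(preds, labels, u65))  # more risk-averse
print(utility_discounted_accuracy(preds, labels, u80))
```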