Calibrated Explanations for Regression
Artificial Intelligence (AI) is often an integral part of modern decision
support systems (DSSs). The best-performing predictive models used in AI-based
DSSs lack transparency. Explainable Artificial Intelligence (XAI) aims to
create AI systems that can explain their rationale to human users. Local
explanations in XAI can provide information about the causes of individual
predictions in terms of feature importance. However, a critical drawback of
existing local explanation methods is their inability to quantify the
uncertainty associated with a feature's importance. This paper introduces an
extension of a feature importance explanation method, Calibrated Explanations
(CE), previously only supporting classification, with support for standard
regression and probabilistic regression, i.e., the probability that the target
is above an arbitrary threshold. The extension for regression retains all the
benefits of CE, such as calibration of the underlying model's prediction with
confidence intervals and uncertainty quantification of feature importance, and
it allows both factual and counterfactual explanations. CE for standard
regression provides fast, reliable, stable, and robust explanations. CE for
probabilistic regression provides an entirely new way of creating probabilistic
explanations from any ordinary regression model, with a dynamic selection of
thresholds. The performance of CE for probabilistic regression regarding
stability and speed is comparable to LIME. The method is model-agnostic, with
easily understood conditional rules. An implementation in Python is freely
available on GitHub and installable via pip, making the results in this paper
easily replicable.
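To make the idea of probabilistic regression explanations concrete, the sketch below shows how an ordinary point regressor can be turned into an estimate of P(target > threshold) by shifting its prediction with residuals from a held-out calibration set. This is only an illustration of the underlying calibration idea, using assumed scikit-learn models and synthetic data; it is not the Calibrated Explanations implementation, and the helper name prob_above is hypothetical.

# A minimal, illustrative sketch (not the Calibrated Explanations code):
# estimate P(target > threshold) for a new instance by shifting the point
# prediction of an ordinary regressor with residuals from a calibration set.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=10, noise=10.0, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Residuals on the calibration set approximate the model's error distribution.
residuals = y_cal - model.predict(X_cal)

def prob_above(x, threshold):
    # Fraction of residual-shifted predictions that exceed the threshold.
    y_hat = model.predict(x.reshape(1, -1))[0]
    return float(np.mean(y_hat + residuals > threshold))

print(prob_above(X_test[0], threshold=0.0))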
Conformal Prediction: a Unified Review of Theory and New Challenges
In this work we provide a review of the basic ideas and novel developments of
Conformal Prediction -- an innovative distribution-free, non-parametric
forecasting method based on minimal assumptions -- which can yield, in a very
straightforward way, prediction sets that are statistically valid even in the
finite-sample case. The in-depth discussion provided in the paper covers the
theoretical underpinnings of Conformal Prediction and then proceeds to list
the more advanced developments and adaptations of the original idea.
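As a concrete illustration of the basic recipe reviewed in the paper, the sketch below implements split (inductive) conformal prediction for regression with absolute residuals as the nonconformity score, which gives prediction intervals with finite-sample marginal coverage under exchangeability. The use of scikit-learn, a Ridge model, and synthetic data is an assumption made for the example; the paper covers the general theory.

# A minimal sketch of split (inductive) conformal prediction for regression,
# using absolute residuals as the nonconformity score. Under exchangeability
# the resulting intervals have finite-sample marginal coverage >= 1 - alpha.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=10, noise=10.0, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = Ridge().fit(X_train, y_train)

alpha = 0.1                                    # target miscoverage rate
scores = np.abs(y_cal - model.predict(X_cal))  # nonconformity scores
n = len(scores)
q_level = np.ceil((n + 1) * (1 - alpha)) / n   # finite-sample correction
q = np.quantile(scores, q_level, method="higher")

y_pred = model.predict(X_test)
lower, upper = y_pred - q, y_pred + q
print("empirical coverage:", np.mean((y_test >= lower) & (y_test <= upper)))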
Classifier Calibration: A survey on how to assess and improve predicted class probabilities
This paper provides both an introduction to and a detailed overview of the
principles and practice of classifier calibration. A well-calibrated classifier
correctly quantifies the level of uncertainty or confidence associated with its
instance-wise predictions. This is essential for critical applications, optimal
decision making, cost-sensitive classification, and for some types of context
change. Calibration research has a rich history that predates the birth of
machine learning as an academic field by decades. However, a recent increase
in interest in calibration has led to new methods and the extension from the
binary to the multiclass setting. The space of options and issues to consider
is large, and navigating it requires the right set of concepts and tools. We
provide both introductory material and up-to-date technical details of the main
concepts and methods, including proper scoring rules and other evaluation
metrics, visualisation approaches, a comprehensive account of post-hoc
calibration methods for binary and multiclass classification, and several
advanced topics.
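As a concrete example of post-hoc calibration and its evaluation with a proper scoring rule, the sketch below compares an uncalibrated classifier with a Platt-scaled (sigmoid) version using scikit-learn's CalibratedClassifierCV and the Brier score. The choice of model, synthetic data, and calibration method is an assumption made for illustration; the survey covers these and many further methods in detail.

# A minimal sketch of post-hoc calibration and evaluation with a proper
# scoring rule: Platt scaling (method="sigmoid") via scikit-learn's
# CalibratedClassifierCV, compared with the uncalibrated classifier using
# the Brier score (lower is better).
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

raw = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Fits the classifier and a sigmoid calibrator via internal cross-validation.
calibrated = CalibratedClassifierCV(
    RandomForestClassifier(random_state=0), method="sigmoid", cv=5
).fit(X_train, y_train)

for name, clf in [("raw", raw), ("calibrated", calibrated)]:
    p = clf.predict_proba(X_test)[:, 1]  # predicted probability of class 1
    print(name, "Brier score:", brier_score_loss(y_test, p))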