LIMEtree: Interactively Customisable Explanations Based on Local Surrogate Multi-output Regression Trees
Systems based on artificial intelligence and machine learning models should
be transparent, in the sense of being capable of explaining their decisions to
gain humans' approval and trust. While there are a number of explainability
techniques that can be used to this end, many of them are only capable of
outputting a single one-size-fits-all explanation that simply cannot address
all of the explainees' diverse needs. In this work we introduce a
model-agnostic and post-hoc local explainability technique for black-box
predictions called LIMEtree, which employs surrogate multi-output regression
trees. We validate our algorithm on a deep neural network trained for object
detection in images and compare it against Local Interpretable Model-agnostic
Explanations (LIME). Our method comes with local fidelity guarantees and can
produce a range of diverse explanation types, including contrastive and
counterfactual explanations praised in the literature. Some of these
explanations can be interactively personalised to create bespoke, meaningful
and actionable insights into the model's behaviour. While other methods may
give an illusion of customisability by wrapping otherwise static explanations
in an interactive interface, our explanations are truly interactive, in the
sense of allowing the user to "interrogate" a black-box model. LIMEtree can
therefore produce consistent explanations on which an interactive exploratory
process can be built.
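The surrogate idea behind LIMEtree can be sketched with off-the-shelf tools: sample perturbations around the instance being explained, query the black box for all class probabilities, and fit a single multi-output regression tree to them. This is an illustrative reconstruction, not the authors' implementation; the stand-in black box, the sampling scale and the proximity kernel below are all assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in black box: softmax over three linear scores (3 classes).
    z = np.stack([X[:, 0], X[:, 1], -X[:, 0] - X[:, 1]], axis=1)
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

instance = np.array([0.5, -0.2])

# Sample perturbations around the instance and weight them by proximity.
X = instance + rng.normal(scale=0.3, size=(500, 2))
w = np.exp(-np.sum((X - instance) ** 2, axis=1) / 0.25)
Y = black_box(X)  # one probability column per class

# One multi-output regression tree approximates all class probabilities jointly,
# so a single surrogate can serve explanations for every class at once.
surrogate = DecisionTreeRegressor(max_depth=3).fit(X, Y, sample_weight=w)
print(surrogate.predict(instance[None, :]))  # local class-probability estimates
```

Because each leaf prediction is a weighted mean of probability vectors, the surrogate's outputs still sum to one, which is what lets a single tree stand in for the model across all classes.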
Tunable transport with broken space-time symmetries
Transport properties of particles and waves in spatially periodic structures
that are driven by external time-dependent forces manifestly depend on the
space-time symmetries of the corresponding equations of motion. A systematic
analysis of these symmetries uncovers the conditions necessary for obtaining
directed transport. In this work we give a unified introduction to the
symmetry analysis and demonstrate its application to motion in one-dimensional
potentials that are periodic in both time and space. We further generalize the
analysis to quasi-periodic drivings, higher space dimensions, and quantum
dynamics. Recent experimental results on the transport of cold and ultracold
atomic ensembles in ac-driven optical potentials are reviewed as illustrations
of theoretical considerations. (Comment: Phys. Rep., in press.)
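A standard textbook example of such a symmetry argument from the ratchet literature (illustrative, not specific to this review) runs as follows:

```latex
% Dissipationless dynamics in a driven periodic potential
\ddot{x} = -V'(x) + f(t), \qquad V(x+L) = V(x), \quad f(t+T) = f(t).
% If V(-x) = V(x) and f(t + T/2) = -f(t), then the transformation
%   (x, t) \mapsto (-x, \; t + T/2)
% maps solutions onto solutions with reversed velocity, so the
% time-averaged current must vanish. Breaking either condition
% (an asymmetric potential or a drive without shift symmetry)
% removes the constraint and permits directed transport.
```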
Non-Parametric Calibration of Probabilistic Regression
The task of calibration is to retrospectively adjust the outputs from a
machine learning model to provide better probability estimates on the target
variable. While calibration has been investigated thoroughly in classification,
it has not yet been well-established for regression tasks. This paper considers
the problem of calibrating a probabilistic regression model to improve the
estimated probability densities over the real-valued targets. We propose to
calibrate a regression model through the cumulative probability density, which
can be derived from calibrating a multi-class classifier. We provide three
non-parametric approaches to solve the problem, two of which provide empirical
estimates while the third provides smooth density estimates. The proposed
approaches are experimentally evaluated to show their ability to improve the
performance of regression models on the predictive likelihood.
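A minimal non-parametric recalibration sketch in this spirit works through the predicted cumulative distribution: evaluate each model CDF at its observed target, then learn a monotone map from these values to empirical quantile levels. The Gaussian toy model, the isotonic map and all parameters below are illustrative assumptions, not the paper's specific method.

```python
import numpy as np
from scipy.stats import norm
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(1)

# Toy probabilistic regression model: predicts N(mu, 1) but is overconfident
# (the true noise standard deviation is 2.0, the model claims 1.0).
n = 2000
mu = rng.normal(size=n)
y = mu + rng.normal(scale=2.0, size=n)
sigma_pred = 1.0

# PIT values: each predicted CDF evaluated at its observed target.
pit = norm.cdf(y, loc=mu, scale=sigma_pred)

# Calibration map: monotone regression of empirical quantile levels on the
# sorted PIT values; a calibrated model would already give uniform PITs.
order = np.argsort(pit)
emp = (np.arange(1, n + 1) - 0.5) / n
calib = IsotonicRegression(out_of_bounds="clip").fit(pit[order], emp)

# After recalibration the fraction of targets falling below each predicted
# quantile should match the nominal level.
for level in (0.25, 0.5, 0.75):
    covered = np.mean(calib.predict(pit) <= level)
    print(level, round(covered, 3))
```

The isotonic step is what makes the approach non-parametric: no distributional form is assumed for the miscalibration, only monotonicity of the correction.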
Material and Process Parameters that Affect Accuracy in Stereolithography
Experimental real-time linear shrinkage rate measurements simulating stereolithography
are used in an analysis of shrinkage during line drawing in stereolithography. While the amount of
shrinkage depends on the polymerization kinetics, shrinkage kinetics and overall degree of cure, it
also depends on the length of time to draw a line of plastic. A line drawn slowly will exhibit less
apparent shrinkage than one drawn very quickly because much of the shrinkage is compensated
for as the line is drawn. The data also indicate that a typical stereolithography resin in the green
state may shrink to only 65% of its maximum, thus retaining considerable potential for shrinkage
during post-cure. This information can be used to predict the amount of shrinkage to be expected
under certain exposure conditions and to formulate overall strategies to reduce shrinkage and
subsequent warpage that causes shape distortion.
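The draw-time effect described above can be illustrated with a toy first-order kinetics model: segments written early in a slow draw have already relaxed much of their shrinkage before the line is finished, so the finished line shows less apparent shrinkage. The kinetics, the rate constant and the assumption of uniform writing speed are all illustrative choices here, not the paper's measured data.

```python
import numpy as np

def apparent_shrinkage(draw_time, k=1.0, eps_max=1.0):
    """Residual (post-draw) shrinkage for a line whose segments cure with
    first-order kinetics eps(t) = eps_max * (1 - exp(-k t)).

    Segments are written uniformly over [0, draw_time]; averaging each
    segment's remaining shrinkage at completion gives
    eps_max * (1 - exp(-k T)) / (k T)."""
    if draw_time == 0:
        return eps_max  # instantaneous draw: all shrinkage appears afterwards
    return eps_max * (1 - np.exp(-k * draw_time)) / (k * draw_time)

# Fast draws leave nearly all shrinkage to appear after drawing;
# slow draws absorb most of it during the drawing itself.
for T in (0.01, 1.0, 10.0):
    print(T, round(float(apparent_shrinkage(T)), 3))
```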
Threshold Choice Methods: the Missing Link
Many performance metrics have been introduced for the evaluation of
classification performance, with different origins and niches of application:
accuracy, macro-accuracy, area under the ROC curve, the ROC convex hull, the
absolute error, and the Brier score (with its decomposition into refinement and
calibration). One way of understanding the relation among some of these metrics
is the use of variable operating conditions (either in the form of
misclassification costs or class proportions). Thus, a metric may correspond to
some expected loss over a range of operating conditions. One dimension for the
analysis has been precisely the distribution we take for this range of
operating conditions, leading to some important connections in the area of
proper scoring rules. However, we show that there is another dimension which
has not received attention in the analysis of performance metrics. This new
dimension is given by the decision rule, which is typically implemented as a
threshold choice method when using scoring models. In this paper, we explore
many old and new threshold choice methods: fixed, score-uniform, score-driven,
rate-driven and optimal, among others. By calculating the loss of these methods
for a uniform range of operating conditions we get the 0-1 loss, the absolute
error, the Brier score (mean squared error), the AUC and the refinement loss
respectively. This provides a comprehensive view of performance metrics as well
as a systematic approach to loss minimisation, namely: take a model, apply
several threshold choice methods consistent with the information which is (and
will be) available about the operating condition, and compare their expected
losses. In order to assist in this procedure we also derive several connections
between the aforementioned performance metrics, and we highlight the role of
calibration in choosing the threshold choice method
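The correspondence between the score-driven threshold choice method and the Brier score can be checked numerically. The sketch below uses synthetic calibrated scores; the cost-weighted loss convention and the factor-of-two normalisation follow one common formulation and are assumptions here, not a quotation of the paper's definitions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic scores in [0, 1] interpreted as P(y = 1), with labels drawn
# to match, so the scores are calibrated by construction.
n = 5000
s = rng.beta(2, 2, size=n)
y = (rng.random(n) < s).astype(int)

def loss_at(c, threshold):
    """Cost-weighted 0-1 loss at operating condition c:
    false positives cost c, false negatives cost 1 - c."""
    pred = (s >= threshold).astype(int)
    fp = np.mean((pred == 1) & (y == 0))
    fn = np.mean((pred == 0) & (y == 1))
    return c * fp + (1 - c) * fn

# Score-driven threshold choice: at operating condition c, set threshold t = c.
# Average the loss over a uniform range of operating conditions.
grid = np.linspace(0, 1, 2001)
expected_loss = np.mean([loss_at(c, c) for c in grid])

# Under this convention, twice the expected loss reproduces the
# Brier score (mean squared error) of the scores.
brier = np.mean((s - y) ** 2)
print(round(2 * expected_loss, 4), round(brier, 4))
```

The identity holds sample by sample: a negative with score s contributes s^2/2 to the integrated loss and a positive contributes (1 - s)^2/2, which sum to half the Brier score, so the agreement does not depend on the scores being calibrated.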
Computational support for academic peer review: a perspective from artificial intelligence
New tools tackle an age-old practice.