Explaining Explanations in AI
Recent work on interpretability in machine learning and AI has focused on building simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it is important to remember Box's maxim that "All models are wrong but some are useful." We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a "do it yourself kit" for explanations, allowing a practitioner to directly answer "what if" questions or generate contrastive explanations without external assistance. Although this is a valuable capability, handing over such models as explanations appears more difficult than necessary, and other forms of explanation may not involve the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.
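The "do it yourself kit" framing can be made concrete in code. The sketch below (a hypothetical illustration, not from the paper) fits a shallow decision-tree surrogate to a black-box classifier's predictions and then answers a what-if query from the surrogate alone; the dataset, the black-box model, and the what_if helper are all assumptions made for the sake of the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Hypothetical black box standing in for "the complex system".
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Simplified model: fit to the black box's *predictions*, not the true
# labels, so it approximates the criteria the system actually uses.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

def what_if(x, feature, new_value):
    # Answer a "what if" question using only the surrogate: no further
    # queries to the black box are required.
    x2 = np.array(x, dtype=float).copy()
    x2[feature] = new_value
    return surrogate.predict([x2])[0]

print(what_if(X[0], feature=2, new_value=1.5))
```

A contrastive explanation falls out of the same kit: compare the surrogate's prediction before and after the edit to see which feature change flips the decision.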
Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations
Neural networks are among the most accurate supervised learning methods in use today, but their opacity makes them difficult to trust in critical applications, especially when conditions in training differ from those in test. Recent work on explanations for black-box models has produced tools (e.g. LIME) to show the implicit rules behind predictions, which can help us identify when models are right for the wrong reasons. However, these methods do not scale to explaining entire datasets and cannot correct the problems they reveal. We introduce a method for efficiently explaining and regularizing differentiable models by examining and selectively penalizing their input gradients, which provide a normal to the decision boundary. We apply these penalties both based on expert annotation and in an unsupervised fashion that encourages diverse models with qualitatively different decision boundaries for the same classification problem. On multiple datasets, we show our approach generates faithful explanations and models that generalize much better when conditions differ between training and test.
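To make the input-gradient penalty concrete, here is a minimal PyTorch sketch of a loss in the spirit of the paper's regularizer. It is not the authors' implementation: the mask convention (1 marks features an expert annotated as irrelevant) and the weight lam are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def rrr_loss(model, x, y, mask, lam=10.0):
    # Cross-entropy plus a penalty on input gradients wherever the expert
    # annotation mask marks the input as irrelevant (mask == 1).
    # A sketch of the idea, not the authors' code.
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    # Input gradients of the summed log-probabilities; these provide a
    # normal to the decision boundary, as the abstract notes.
    grads = torch.autograd.grad(F.log_softmax(logits, dim=1).sum(),
                                x, create_graph=True)[0]
    penalty = ((mask * grads) ** 2).sum()
    return ce + lam * penalty
```

Training on this loss instead of plain cross-entropy penalizes the model for letting its decision boundary depend on masked-out inputs, which is the "right for the right reasons" constraint.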
Unfooling Perturbation-Based Post Hoc Explainers
Monumental advancements in artificial intelligence (AI) have lured the interest of doctors, lenders, judges, and other professionals. While these high-stakes decision-makers are optimistic about the technology, those familiar with AI systems are wary about the lack of transparency of its decision-making processes. Perturbation-based post hoc explainers offer a model-agnostic means of interpreting these systems while only requiring query-level access. However, recent work demonstrates that these explainers can be fooled adversarially. This discovery has adverse implications for auditors, regulators, and other sentinels. With this in mind, several natural questions arise: how can we audit these black-box systems, and how can we ascertain that the auditee is complying with the audit in good faith? In this work, we rigorously formalize this problem and devise a defense against adversarial attacks on perturbation-based explainers. We propose algorithms for the detection (CAD-Detect) and defense (CAD-Defend) of these attacks, which are aided by our novel conditional anomaly detection approach, KNN-CAD. We demonstrate that our approach successfully detects whether a black box system adversarially conceals its decision-making process and mitigates the adversarial attack on real-world data for the prevalent explainers, LIME and SHAP.
Comment: Accepted to AAAI-23. 9 pages (not including references and supplemental).
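As a rough illustration of the k-NN-based conditional anomaly detection idea behind KNN-CAD (a sketch under assumptions, not the authors' code), the snippet below scores queries by their mean distance to their k nearest in-distribution reference points; LIME/SHAP-style perturbations that a dishonest model could single out tend to receive high scores. The function name and the threshold calibration are hypothetical.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_anomaly_scores(reference, queries, k=5):
    # Score each query by its mean distance to the k nearest reference
    # points drawn from in-distribution data. Queries far from all
    # reference points -- e.g., explainer perturbations a scaffolded
    # model might treat specially -- get high scores.
    nn = NearestNeighbors(n_neighbors=k).fit(reference)
    dists, _ = nn.kneighbors(queries)
    return dists.mean(axis=1)

# Usage: flag perturbed explainer queries whose score exceeds a threshold
# calibrated on held-out in-distribution data.
scores = knn_anomaly_scores(np.random.rand(500, 4), np.random.rand(20, 4))
```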
Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems
This paper introduces reviewability as a framework for improving the accountability of automated and algorithmic decision-making (ADM) involving machine learning. We draw on an understanding of ADM as a socio-technical process involving both human and technical elements, beginning before a decision is made and extending beyond the decision itself. While explanations and other model-centric mechanisms may address some accountability concerns, they often provide insufficient information about these broader ADM processes for regulatory oversight and assessments of legal compliance. Reviewability involves breaking down the ADM process into technical and organisational elements to provide a systematic framework for determining the contextually appropriate record-keeping mechanisms that facilitate meaningful review, both of individual decisions and of the process as a whole. We argue that a reviewability framework, drawing on administrative law's approach to reviewing human decision-making, offers a practical way forward towards a more holistic and legally relevant form of accountability for ADM.