Actionable Recourse in Linear Classification
Machine learning models are increasingly used to automate decisions that
affect humans - deciding who should receive a loan, a job interview, or a
social service. In such applications, a person should have the ability to
change the decision of a model. When a person is denied a loan by a credit
scoring model, for example, they should be able to alter the model's input
variables in a way
that guarantees approval. Otherwise, they will be denied the loan as long as
the model is deployed. More importantly, they will lack the ability to
influence a decision that affects their livelihood.
In this paper, we frame these issues in terms of recourse, which we define as
the ability of a person to change the decision of a model by altering
actionable input variables (e.g., income vs. age or marital status). We present
integer programming tools to ensure recourse in linear classification problems
without interfering in model development. We demonstrate how our tools can
inform stakeholders through experiments on credit scoring problems. Our results
show that recourse can be significantly affected by standard practices in model
development, and motivate the need to evaluate recourse in practice.Comment: Extended version. ACM Conference on Fairness, Accountability and
Transparency [FAT2019
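To make the abstract's approach concrete, here is a minimal sketch of recourse as an integer program for a linear classifier: find the lowest-cost change to actionable features that flips a denied prediction. It is a toy illustration rather than the authors' released toolkit; the feature names, costs, bounds, and the use of the PuLP solver are assumptions.

```python
# Minimal recourse audit for a linear classifier, sketched as an integer
# program with PuLP. All numbers and feature names are illustrative.
import pulp

# Linear model: score(x) = w . x + b, approval when score >= 0.
w = {"income": 0.6, "n_open_accounts": -0.2, "age": 0.1}
b = -6.0
x0 = {"income": 3, "n_open_accounts": 4, "age": 42}          # denied applicant
actionable = {"income": (0, 5), "n_open_accounts": (-3, 0)}  # allowed changes
cost = {"income": 1.0, "n_open_accounts": 0.5}               # per-unit action cost

prob = pulp.LpProblem("min_cost_recourse", pulp.LpMinimize)
a = {f: pulp.LpVariable(f"a_{f}", lo, hi, cat="Integer")
     for f, (lo, hi) in actionable.items()}
# Auxiliary variables for |a_f| so the cost objective stays linear.
m = {f: pulp.LpVariable(f"abs_{f}", 0) for f in a}
for f in a:
    prob += m[f] >= a[f]
    prob += m[f] >= -a[f]
prob += pulp.lpSum(cost[f] * m[f] for f in a)                # minimize total cost
# Flip the decision: w . (x0 + a) + b >= 0; immutable features stay fixed.
prob += pulp.lpSum(w[f] * (x0[f] + a.get(f, 0)) for f in w) + b >= 0

prob.solve(pulp.PULP_CBC_CMD(msg=0))
if pulp.LpStatus[prob.status] == "Optimal":
    print({f: int(a[f].value()) for f in a})                 # actions that guarantee approval
else:
    print("No recourse: the prediction is fixed for this applicant.")
```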
On the Trade-Off between Actionable Explanations and the Right to be Forgotten
As machine learning (ML) models are increasingly being deployed in
high-stakes applications, policymakers have suggested tighter data protection
regulations (e.g., GDPR, CCPA). One key principle is the "right to be
forgotten" which gives users the right to have their data deleted. Another key
principle is the right to an actionable explanation, also known as algorithmic
recourse, allowing users to reverse unfavorable decisions. To date, it is
unknown whether these two principles can be operationalized simultaneously.
Therefore, we introduce and study the problem of recourse invalidation in the
context of data deletion requests. More specifically, we theoretically and
empirically analyze the behavior of popular state-of-the-art algorithms and
demonstrate that the recourses generated by these algorithms are likely to be
invalidated if a small number of data deletion requests (e.g., 1 or 2) warrant
updates of the predictive model. For the setting of linear models and
overparameterized neural networks -- studied through the lens of neural tangent
kernels (NTKs) -- we suggest a framework to identify a minimal subset of
critical training points which, when removed, maximizes the fraction of
invalidated recourses. Using our framework, we empirically show that the
removal of as few as 2 data instances from the training set can invalidate
up to 95 percent of all recourses output by popular state-of-the-art
algorithms. Thus, our work raises fundamental questions about the compatibility
of the "right to an actionable explanation" with the "right to be forgotten",
while also providing constructive insights into the determining factors of
recourse robustness.
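The invalidation effect described above can be sketched in a few lines for a linear model: compute a recourse under the original model, delete a couple of training points, retrain, and check whether the recourse still holds. The sketch below is a simplified illustration with scikit-learn, not the paper's NTK-based framework; the dataset, the deletion heuristic, and the recourse search are assumptions.

```python
# Toy demonstration: a recourse valid under the original model may be
# invalidated after deleting two training points and retraining.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
clf = LogisticRegression().fit(X, y)

# Pick a denied point and push it just across the decision boundary along
# the model's weight vector (a crude recourse).
x0 = X[clf.predict(X) == 0][0]
w, b = clf.coef_[0], clf.intercept_[0]
step = -(w @ x0 + b) / (w @ w)
x_recourse = x0 + (step + 1e-2) * w           # now predicted positive
assert clf.predict([x_recourse])[0] == 1

# Delete the two training points closest to the boundary and retrain.
margins = np.abs(X @ w + b)
keep = np.argsort(margins)[2:]
clf_after = LogisticRegression().fit(X[keep], y[keep])

# The same recourse may no longer be honored by the updated model.
print("recourse still valid after deletion:",
      clf_after.predict([x_recourse])[0] == 1)
```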
Leveraging Contextual Counterfactuals Toward Belief Calibration
Beliefs and values are increasingly being incorporated into our AI systems
through alignment processes, such as carefully curating data collection
principles or regularizing the loss function used for training. However, the
meta-alignment problem is that these human beliefs are diverse and not aligned
across populations; furthermore, the implicit strength of each belief may not
be well calibrated even among humans, especially when trying to generalize
across contexts. Specifically, in high regret situations, we observe that
contextual counterfactuals and recourse costs are particularly important in
updating a decision maker's beliefs and the strengths to which such beliefs are
held. Therefore, we argue that including counterfactuals is key to an accurate
calibration of beliefs during alignment. To do this, we first segment belief
diversity into two categories: subjectivity (across individuals within a
population) and epistemic uncertainty (within an individual across different
contexts). By leveraging our notion of epistemic uncertainty, we introduce the
"belief calibration cycle" framework to more holistically calibrate this
diversity of beliefs with context-driven counterfactual reasoning, using
multi-objective optimization. We empirically apply our framework to find a
Pareto frontier of clustered optimal belief strengths that generalize across
different contexts, demonstrating its efficacy on a toy dataset for credit
decisions.
Comment: ICML (International Conference on Machine Learning) Workshop on
Counterfactuals in Minds and Machines, 2023.
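As a rough illustration of the multi-objective step, the sketch below traces an approximate Pareto frontier of a single belief-strength parameter across two hypothetical context losses using weighted-sum scalarization. The losses and the scalarization are assumptions for illustration and are far simpler than the paper's belief calibration cycle.

```python
# Sweep a scalarization weight to collect belief strengths that are optimal
# for some trade-off between two assumed context-specific losses.
import numpy as np

def context_losses(strength):
    # Two hypothetical contexts that disagree on how strongly a belief
    # should be enforced (e.g., strict vs. lenient credit decisions).
    loss_ctx_a = (strength - 0.8) ** 2        # context A prefers a strong belief
    loss_ctx_b = (strength - 0.2) ** 2        # context B prefers a weak belief
    return loss_ctx_a, loss_ctx_b

strengths = np.linspace(0.0, 1.0, 101)
frontier = []
for w in np.linspace(0.0, 1.0, 11):           # weighted-sum scalarization
    scores = [w * context_losses(s)[0] + (1 - w) * context_losses(s)[1]
              for s in strengths]
    best = float(strengths[int(np.argmin(scores))])
    frontier.append((best, context_losses(best)))

# Each entry is a belief strength that is optimal for some trade-off weight,
# i.e., a point on the (approximate) Pareto frontier across contexts.
for s, (la, lb) in frontier:
    print(f"strength={s:.2f}  loss_A={la:.3f}  loss_B={lb:.3f}")
```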
Prediction without Preclusion: Recourse Verification with Reachable Sets
Machine learning models are often used to decide who will receive a loan, a
job interview, or a public benefit. Standard techniques to build these models
use features about people but overlook their actionability. In turn, models can
assign predictions that are fixed, meaning that consumers who are denied loans,
interviews, or benefits may be permanently locked out from access to credit,
employment, or assistance. In this work, we introduce a formal testing
procedure, which we call recourse verification, to flag models that assign
fixed predictions. We develop machinery to reliably determine whether a given
model can provide recourse to its decision subjects under a set of
user-specified actionability constraints. We demonstrate how our tools can ensure recourse and
adversarial robustness in real-world datasets and use them to study the
infeasibility of recourse in real-world lending datasets. Our results highlight
how models can inadvertently assign fixed predictions that permanently bar
access, and we provide tools to design algorithms that account for
actionability when developing models.
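A minimal sketch of the verification idea: enumerate a small, discrete reachable set of feature vectors under user-specified actionability constraints and test whether any reachable point receives a favorable prediction. The model, features, and constraints below are assumptions; the paper's machinery handles reachable sets far more generally.

```python
# Brute-force recourse verification over a small discrete reachable set.
from itertools import product

def predict(x):
    # Toy linear scorer: approve when score >= 0.
    w = {"income": 0.6, "n_open_accounts": -0.2, "age": 0.1}
    return sum(w[f] * v for f, v in x.items()) - 6.0 >= 0

x0 = {"income": 3, "n_open_accounts": 4, "age": 42}    # denied applicant
# User-specified actionability: income can only go up, open accounts can
# only go down, and age cannot be acted on at all.
actions = {
    "income": range(0, 3),             # +0, +1, +2
    "n_open_accounts": range(-2, 1),   # -2, -1, 0
}

def reachable_set(x, actions):
    feats, deltas = zip(*actions.items())
    for combo in product(*deltas):
        yield {**x, **{f: x[f] + d for f, d in zip(feats, combo)}}

has_recourse = any(predict(x) for x in reachable_set(x0, actions))
print("recourse exists" if has_recourse else
      "fixed prediction: no reachable point is approved")
```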