Data-driven tools are increasingly used to make consequential decisions. They
have begun to advise employers on which job applicants to interview, judges on
which defendants to grant bail, lenders on which homeowners to give loans, and
more. In such settings, different data-driven rules result in different
decisions. The problem is: to every data-driven rule, there are exceptions.
While a data-driven rule may be appropriate for some, it may not be appropriate
for all. As data-driven decisions become more common, so do the cases in which
it is necessary to protect the individuals who, through no fault of their own,
are the data-driven exceptions. At the same time, it is impossible to
scrutinize every one of the growing number of data-driven decisions, raising
the question: When and how should data-driven exceptions be protected?

In this piece, we argue that individuals have the right to be an exception to
a data-driven rule. That is, the presumption should not be that a data-driven
rule--even one with high accuracy--is suitable for an arbitrary
decision-subject of interest. Rather, a decision-maker should apply the rule
only if they have exercised due care and due diligence (relative to the risk of
harm) in excluding the possibility that the decision-subject is an exception to
the data-driven rule. In some cases, the risk of harm may be so low that only
cursory consideration is required. Although due care and due diligence are
meaningful notions in human-driven decision contexts, it is unclear what
exercising them means when the decision is made by a data-driven rule. We
propose that determining whether a
data-driven rule is suitable for a given decision-subject requires the
consideration of three factors: individualization, uncertainty, and harm. We
unpack this right in detail, providing a framework for assessing data-driven
rules and describing what it would mean to invoke the right in practice.