When determining which machine learning model best performs a high-impact
risk assessment task, practitioners commonly use the Area Under the Curve (AUC)
to defend and validate their model choices. In this paper, we argue that the
way AUC is currently used and understood as a model performance metric
departs from how the metric was intended to be used. To this end, we
characterize the misuse of AUC and illustrate how this misuse manifests
harmfully in the real world across several risk assessment domains. We locate
this disconnect in the way the original interpretation of AUC has shifted over
time: issues pertaining to decision thresholds, class balance, statistical
uncertainty, and protected groups remain unaddressed by AUC-based model
comparisons, and model choices that should be the purview of policymakers
are hidden behind a veil of mathematical rigor. We conclude
that current model validation practices involving AUC are not robust, and are
often invalid.
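
As a minimal illustration of the decision-threshold point (this sketch is not drawn from the paper; it assumes NumPy and scikit-learn, and the synthetic data and 0.5 cutoff are invented for exposition), the code below constructs two models with identical AUC whose behavior at a fixed policy cutoff differs sharply, since AUC depends only on how scores rank cases, not on the score scale a threshold acts on:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical risk-assessment data: 10% positive (high-risk) class.
n = 10_000
y = (rng.random(n) < 0.10).astype(int)

# Model A: noisy but informative risk scores in [0, 1].
scores_a = np.clip(0.5 * y + rng.normal(0.3, 0.15, n), 0.0, 1.0)

# Model B: a monotonic transform of A's scores. The ranking is unchanged,
# so AUC is identical, but a fixed decision threshold now flags a very
# different set of people.
scores_b = scores_a ** 4

for name, scores in [("A", scores_a), ("B", scores_b)]:
    flagged = scores >= 0.5  # an illustrative policy-style cutoff
    precision = y[flagged].mean() if flagged.any() else float("nan")
    print(f"model {name}: AUC={roc_auc_score(y, scores):.3f} "
          f"flagged={int(flagged.sum())} precision={precision:.3f}")
```

Under these assumptions both models report the same AUC, yet model B flags far fewer people at the same cutoff with much higher precision, which is exactly the kind of deployment-relevant difference an AUC-based comparison cannot surface.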