It is curious that AI increasingly outperforms human decision-makers, yet
much of the public distrusts AI to make decisions affecting their lives. In
this paper we explore a novel theory that may explain one source of this
distrust. We
propose that public distrust of AI is a moral consequence of designing systems
that prioritize reducing the tangible costs of false positives over the less
tangible costs of false negatives. We show that such systems, which we
characterize as
'distrustful', are more likely to miscategorize trustworthy individuals, with
cascading consequences for both those individuals and the overall human-AI trust
relationship. Ultimately, we argue that public distrust of AI stems from
well-founded concern about the risk of being miscategorized. We propose
that restoring public trust in AI will require that systems are designed to
embody a stance of 'humble trust', whereby the moral costs of the misplaced
distrust associated with false negatives are weighted appropriately during
development and use.
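
To make the cost asymmetry concrete, the following is a minimal sketch of how
asymmetric misclassification costs translate into a decision threshold for
granting trust. It is our illustration, not the paper's formalism; the function
name trust_threshold and the specific cost values are assumptions chosen for
exposition.

```python
def trust_threshold(cost_fp: float, cost_fn: float) -> float:
    """Probability of trustworthiness above which the system grants trust.

    Derived by minimizing expected cost: granting trust risks a false
    positive with probability (1 - p), withholding it risks a false
    negative with probability p. Trust is the cheaper action when
    (1 - p) * cost_fp < p * cost_fn, i.e. when
    p > cost_fp / (cost_fp + cost_fn).
    """
    return cost_fp / (cost_fp + cost_fn)

# A 'distrustful' design: false negatives (wrongly withholding trust from
# a trustworthy individual) carry almost no modeled cost, so near-certain
# trustworthiness is required before trust is granted.
print(trust_threshold(cost_fp=10.0, cost_fn=0.1))   # ~0.99

# A 'humble trust' design: the moral cost of misplaced distrust is
# weighted comparably, lowering the bar for granting trust.
print(trust_threshold(cost_fp=10.0, cost_fn=10.0))  # 0.5
```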