Algorithms, from simple automation to machine learning, have been introduced
into judicial contexts, ostensibly to increase the consistency and efficiency
of legal decision-making. In this paper, we describe four types of inconsistencies
introduced by risk prediction algorithms. These inconsistencies threaten to
violate the principle of treating similar cases similarly and often arise from
the need to operationalize legal concepts and human behavior into specific
measures that enable the building and evaluation of predictive algorithms.
These inconsistencies, however, are likely to remain hidden from the algorithms'
end-users: judges, parole officers, lawyers, and other decision-makers. We
describe the inconsistencies and their sources, and we propose possible
indicators and solutions. We also consider the inconsistencies that arise from the use of
algorithms in light of current trends towards more autonomous algorithms and
less human-understandable behavioral big data. We conclude by discussing judges'
and lawyers' duties of technological ("algorithmic") competence and call for
greater alignment between the evaluation of predictive algorithms and
corresponding judicial goals.