The law is often engaged in prediction. When calculating tort damages, for example, a judge must estimate what the victim’s likely future earnings would have been but for the injury. Similarly, when considering injunctive relief, a judge will assess whether the plaintiff is likely to suffer irreparable harm if a preliminary injunction is not granted. And in a child custody evaluation, a judge must predict which parent will provide an environment that serves the best interests of the child.
Relative to other areas of law, criminal law is oversaturated with prediction. Almost every decision node in the criminal justice system demands a prediction of individual behavior: does the accused present a flight risk or a danger to the public (pre-trial detention); is the defendant likely to recidivate (sentencing); and will the defendant successfully reenter society (parole)? Increasingly, these predictions are made by algorithms, many of which display racial bias and are hidden from public view. Existing scholarship has focused on de-biasing and disclosing algorithmic models, but this Article argues that even a transparent and unbiased algorithm may undermine the epistemic legitimacy of a judicial decision.
Law has historically generated truth claims through discursive and dialogic practices, using shared linguistic tools, in an environment characterized by proximity and reciprocity. The truth claims of data science, by contrast, are generated through data processing of such scale and complexity that it is neither commensurable with, nor reversible into, human reasoning. Data science excludes the individual from the production of knowledge about themselves, on the basis that “unmediated” behavioral data (data not self-reported or otherwise subject to conscious manipulation by the data subject) offers unrivaled predictive accuracy. Accordingly, data science discounts the first-person view of reality that has traditionally underwritten legal processes of truth-making, such as individual testimony.
As judges turn to algorithms to guide their decision making, knowledge about the legal subject is increasingly algorithmically produced. Statistical predictions about the legal subject displace qualitative knowledge of their intentions, motivations, and moral capabilities. The reasons why a particular defendant might refrain from reoffending, for example, matter less than the statistical features they share with historical recidivists. This displacement of individual knowledge by algorithmic prediction diminishes the legal subject’s participation in the epistemic processes that determine their fundamental liberties. The result is the death of the legal subject: the emergence of new, algorithmic practices of signification that no longer require the input of the underlying individual.