The Overstated Cost of AI Fairness in Criminal Justice
The dominant critique of algorithmic fairness in AI decision-making, particularly in criminal justice, is that increasing fairness reduces the accuracy of predictions, thereby imposing a cost on society. This Article challenges that assumption by empirically analyzing the COMPAS algorithm, a widely used and widely discussed risk assessment tool in the U.S. criminal justice system.
This Article makes two contributions. First, it demonstrates that widely used AI models do more than replicate existing biases—they exacerbate them. Using causal inference methods, we show that racial bias is not only present in the COMPAS dataset but also worsened by AI models such as COMPAS. This finding has implications for legal scholarship and policymaking, as it (a) challenges the assumption that AI can offer an objective or neutral improvement over human decision-making and (b) provides counterevidence to the idea that AI merely mirrors preexisting human biases.
Second, this Article reframes the debate over the cost of fairness in algorithmic decision-making for criminal justice. It shows that applying fairness constraints does not necessarily lead to a cost in terms of loss in predictive accuracy regarding recidivism. AI systems operationalize concepts such as risk by making implicit and often flawed normative choices about what to predict and how to predict it. The claim that fair AI models decrease accuracy assumes that the model’s prediction is an optimal baseline. Fairness constraints, in fact, can correct distortions introduced by biased outcome variables—which magnify systemic racial disparities in rearrest data rather than reflect actual risk. In some cases, interventions can introduce algorithmic fairness without imposing the cost often presumed in policy discussions.
These findings are consequential beyond criminal justice. Similar dynamics exist in AI-driven decision-making in lending, hiring, and housing, where biased outcome variables reinforce systemic inequalities beyond the choice of proxies. By providing empirical evidence that fairness constraints can improve rather than undermine decision-making, this Article advances the conversation on how law and policy should approach AI bias, particularly when algorithmic decisions affect fundamental rights.
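To make the second contribution concrete, the following minimal Python sketch uses synthetic data to illustrate the mechanism the abstract describes; it is an illustration under stated assumptions, not a reproduction of the Article's causal-inference analysis or of the COMPAS data. The variable names, the 0.8 group offset, and the per-group threshold adjustment are all hypothetical choices made for the example: a model is trained on a biased proxy label (standing in for rearrest records), and a simple fairness intervention that equalizes selection rates across groups is then scored against the underlying outcome of interest.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic setup: two groups with identical underlying risk, but a proxy
# label (e.g., rearrest) that over-detects group 1 at the same level of risk.
rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)                     # two demographic groups
risk = rng.normal(size=n)                         # latent "true" risk
features = risk + rng.normal(size=n)              # noisy observed features
true_label = (risk > 0).astype(int)               # outcome we actually care about
proxy_label = ((risk + 0.8 * group) > 0.4).astype(int)  # biased recorded label

X = np.column_stack([features, group])

# Baseline: train on the biased proxy and score against the true outcome.
base = LogisticRegression().fit(X, proxy_label)
base_pred = base.predict(X)

# Fairness intervention (post-processing sketch): pick per-group thresholds so
# both groups are selected at the model's overall rate, removing the group skew.
scores = base.predict_proba(X)[:, 1]
target_rate = base_pred.mean()
fair_pred = np.zeros(n, dtype=int)
for g in (0, 1):
    mask = group == g
    cut = np.quantile(scores[mask], 1 - target_rate)
    fair_pred[mask] = (scores[mask] >= cut).astype(int)

def report(name, pred):
    acc = (pred == true_label).mean()
    gap = pred[group == 1].mean() - pred[group == 0].mean()
    print(f"{name:>8}: accuracy vs. true outcome = {acc:.3f}, "
          f"selection-rate gap = {gap:+.3f}")

report("baseline", base_pred)
report("fair", fair_pred)

In this toy setup the constrained predictor typically matches or beats the baseline when accuracy is measured against the true outcome, because the intervention undoes the skew the proxy label injected; that is the narrow sense in which a fairness constraint can come without the cost presumed in policy debates.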
Guggenheim, MacArthur Fellow to address Class of 2025
Reginald Dwayne Betts, an internationally recognized poet, legal scholar, educator, and prison reform advocate, will serve as the 2025 Commencement speaker for the graduating classes of the Indiana University Maurer School of Law.
The Law School will recognize its graduating students on Saturday, May 10, from 4 to 6 p.m. in the Indiana University Auditorium.
“Dwayne Betts has a remarkable story that resonates with audiences around the world,” said Dean Christiana Ochoa. “His journey from incarceration to inspiration is an example of how we can all make positive changes in our lives and make an impact on others. I’m grateful to Aristotle Jones for introducing us to Dwayne and look forward to recognizing Aristotle and the rest of our graduates in May.”
Dark Patterns as Disloyal Design
Lawmakers have started to regulate “dark patterns,” understood to be design practices meant to influence technology users’ decisions through manipulative or deceptive means. Most agree that dark patterns are undesirable, but open questions remain as to which design choices should be subject to scrutiny, let alone how best to regulate them.
In this Article, we propose adapting the concept of dark patterns to better fit legal frameworks. Critics allege that the legal conceptualizations of dark patterns are overbroad, impractical, and counterproductive. We argue that law and policy conceptualizations of dark patterns suffer from three deficiencies: First, dark patterns lack a clear value anchor for cases to build upon. Second, legal definitions of dark patterns overfocus on individuals and atomistic choices, ignoring harms that are de minimis individually but significant in the aggregate, as well as the societal implications of manipulation at scale. Finally, the law has struggled to articulate workable legal thresholds for wrongful dark patterns. To better regulate the designs called dark patterns, lawmakers need a better conceptual framing that bridges the gap between design theory and the law’s need for clarity, flexibility, and compatibility with existing frameworks.
We argue that wrongful self-dealing is at the heart of what most consider to be “dark” about certain design patterns. Taking advantage of design affordances to the detriment of a vulnerable party is disloyal. To that end, we propose disloyal design as a regulatory framing for dark patterns. In drawing from established frameworks that prohibit wrongful self-dealing, we hope to provide more clarity and consistency for regulators, industry, and users. Disloyal design will fit better into legal frameworks and better rally public support for ensuring that the most popular tools in society are built to prioritize human values.
The Case for Contingent Regulatory Sunsets
Cost-benefit analysis is at the core of regulatory impact analysis for every proposed rule or regulation and is designed to be a structural constraint on the administrative state. The challenge is that ex ante cost-benefit analysis necessarily rests on many assumptions, and much more information is available about a regulation’s impact after it has been implemented. But ex post cost-benefit analysis is ad hoc and infrequent in spite of efforts by numerous presidential administrations to promote regulatory lookbacks.
I propose institutionalizing “contingent regulatory sunsets” to ensure that rules and regulations have the positive impact in practice that administrative agencies intended. I show how Congress can consider a spectrum of approaches for independent actors to conduct regulatory lookbacks of economically significant regulations at regular intervals. I explore the merits of centralized legislative branch review (Government Accountability Office), strengthened executive branch review (Office of Information and Regulatory Affairs), review by the agencies themselves, and the creation of a new “Regulatory Lookback” agency to take on this role. While each approach has virtues, I conclude that each agency’s Office of Inspector General (OIG) may be best positioned to build on existing oversight functions to provide periodic review of the impact of regulations.
If the OIG’s cost-benefit analysis shows that the regulation’s real-world impact is actually negative, then the agency that issued the rule would face the burden of rescinding, modifying, or providing updated justifications and cost-benefit analysis. The goal is not to cripple the workings of the vast administrative state, but rather to provide systematic, internal accountability. The hope is that overly optimistic assumptions about costs and benefits will be tempered by routine ex post scrutiny and the sunlight of empirical reality. I then lay out quantitative and qualitative limiting principles to show how periodic cost-benefit review of economically significant regulations could be economically and politically feasible. I conclude by proposing a pilot study to measure the efficacy of OIG ex post review of regulations to provide evidence to justify expanding this initiative on an executive branch-wide basis.
Moving Slow and Fixing Things
Silicon Valley, and the U.S. tech sector more broadly, have changed the world in part by embracing a “move fast and break things” mentality popularized by Mark Zuckerberg. While it is true that the tech sector has attempted to break with such a reactive and flippant response to security concerns, including at Microsoft itself through its Security Development Lifecycle, cyberattacks continue at an alarming rate. As a result, there are growing calls from regulators around the world to change the risk equation. An example is the 2023 U.S. National Cybersecurity Strategy, which argues that “[w]e must hold the stewards of our data accountable for the protection of personal data; drive the development of more secure connected devices; and reshape laws that govern liability for data losses and harm caused by cybersecurity errors, software vulnerabilities, and other risks created by software and digital technologies.” What exact form such liability should take is up for debate. The defect model of products liability law is one clear option, and courts across the United States have already been applying it using both strict liability and risk utility framings in a variety of cases. This Article delves into the debates by considering how other cyber powers around the world—including the European Union—are extending products liability law to cover software, and it examines the lessons these efforts hold for U.S. policymakers with case studies focusing on liability for AI-generated content and Internet-connected critical infrastructure.
Can AI, as Such, Invade Your Privacy? An Experimental Study of the Social Element of Surveillance
The increasing use of AI rather than human surveillance puts pressure on two long-used cultural and (sometimes) legal distinctions: as between human and machine observers and as between content and metadata. Machines do more and more watching through advancing technology, rendering AI a plausible replacement for humans in surveillance tasks. Further, machines can commit to surveil only certain forms of information in a way that humans cannot, rendering the distinction between content and metadata increasingly relevant too for crafting privacy law and policy. Yet despite the increasing importance of these distinctions, their legal import remains unsettled in four key domains of privacy law: Fourth Amendment law, wiretap law, consumer privacy law, and the privacy torts. Given the failure of privacy law to settle conclusively the import of the human/AI and content/metadata distinctions, this Article proposes looking to empirical measures of the judgments of ordinary people to better understand whether and how such distinctions should be made if law is to be responsive to reasonable expectations of privacy.
There is incomplete empirical evidence as to whether the AI/human surveillance and content/metadata distinctions hold weight for ordinary people, and if so, how. To address this empirical gap, this Article presents the results of a vignette study carried out on a large (N = 1000), demographically representative sample of Americans to elicit their judgments of a state surveillance program that collected either content or metadata and in which potential surveillants could be either human or AI. Unsurprisingly, AI surveillance was judged to be more privacy-preserving than human surveillance, empirically buttressing the importance of a human/AI distinction. However, the perceived privacy advantage for an AI surveillant was not a dispositive factor in stated preferences regarding technology use. Accuracy—a factor rarely discussed in defenses of state surveillance—was more influential than privacy in determining participants’ preferences for a human or AI surveillant. Further, the scope of information surveilled (content or metadata) strongly influenced accuracy judgments in comparing human and AI systems and shifted surveillance policy preferences as between human and AI surveillants. The empirical data therefore show that the distinction between content and metadata is important to ordinary people, and that this distinction can lead to unexpected outcomes, such as a preference for human rather than AI surveillance when contents of communications are collected.
Prescribing a Balance: Sustaining Environmental Health with Pharmaceutical Interest in Puerto Rico
Puerto Rico, often referred to as the “Medical Cabinet of the U.S.A.,” is a hub for pharmaceutical manufacturing, contributing significantly to the American medical supply chain and Puerto Rico’s economy. However, decades of industrial activity, compounded by climate events like Hurricane Maria, have led to severe environmental damage, particularly through groundwater contamination and damaged Superfund sites. This Note examines the historical intersection of economic incentives and environmental neglect in Puerto Rico, focusing on the pharmaceutical industry’s impact. By critically analyzing the Superfund program and proposing reforms, this Note advocates for a balanced approach: introducing proactive environmental protections and financial incentives for compliant pharmaceutical manufacturers. Such measures aim to sustain pharmaceutical investments while safeguarding Puerto Rico’s fragile environment, public health, and long-term economic resilience.
Unconcerned and Undertrained: The Indiana Jail Death Epidemic and the Need for Expanded Jail Officer Training
On October 4, 2018, Jerod Draper lost his life after two hours of torture by Harrison County jail officers. While in the custody of the jail and suffering from an overdose, Jerod Draper was placed in a restraint chair for two hours and tased seven times in fifteen minutes. Jerod Draper’s story is one of many demonstrating how a jail death epidemic is occurring throughout Indiana. In this Note, I discuss the history of incarceration in the United States, the statutes under which families of jail death victims can sue, and Indiana’s jail death problem. I then highlight Indiana’s problematic jail officer training requirements, as well as the composition of the Indiana Law Enforcement Training Board, the entity responsible for designing and implementing jail officer training. I also present an analysis of various states’ statutory training requirements for jail officers and correctional officers. Lastly, in an effort to combat the Indiana jail death epidemic, I propose two changes. First, I propose changes to Indiana’s statutory training requirements for jail officers that would require continuing education and require jail officers to complete at least part of their training before beginning in their roles. Second, I propose the addition of new voices to the Indiana Law Enforcement Training Board, specifically the voices of medical professionals and former inmates.
Free taxpayer assistance offered at Maurer School of Law through March
Qualifying local taxpayers will have a helping hand navigating federal and state tax returns this spring, as the Volunteer Income Tax Assistance (VITA) program will once again offer services at the Indiana University Maurer School of Law.
Both U.S. and certain international taxpayers are eligible to utilize the services, which will run on Mondays and Tuesdays from 6:30-9:30 p.m. beginning January 27 and continuing through March 25. Services will be available on a first-come, first-served basis in Room 121 on the first floor of the Law School (211 South Indiana Avenue).
VITA services will not be available the week of spring break (March 17-18).