
Indiana University Bloomington Maurer School of Law
    13,187 research outputs found

    The Overstated Cost of AI Fairness in Criminal Justice

    The dominant critique of algorithmic fairness in AI decision-making, particularly in criminal justice, is that increasing fairness reduces the accuracy of predictions, thereby imposing a cost on society. This Article challenges that assumption by empirically analyzing the COMPAS algorithm, a widely used and widely discussed risk assessment tool in the U.S. criminal justice system. This Article makes two contributions. First, it demonstrates that widely used AI models do more than replicate existing biases—they exacerbate them. Using causal inference methods, we show that racial bias is not only present in the COMPAS dataset but is also worsened by AI models such as COMPAS. This finding has implications for legal scholarship and policymaking, as it (a) challenges the assumption that AI can offer an objective or neutral improvement over human decision-making and (b) provides counterevidence to the idea that AI merely mirrors preexisting human biases. Second, this Article reframes the debate over the cost of fairness in algorithmic decision-making for criminal justice. It shows that applying fairness constraints does not necessarily reduce the accuracy of recidivism predictions. AI systems operationalize concepts such as risk by making implicit and often flawed normative choices about what to predict and how to predict it. The claim that fair AI models decrease accuracy assumes that the model’s prediction is an optimal baseline. Fairness constraints, in fact, can correct distortions introduced by biased outcome variables, which magnify systemic racial disparities in rearrest data rather than reflect actual risk. In some cases, interventions can introduce algorithmic fairness without imposing the cost often presumed in policy discussions. These findings are consequential beyond criminal justice. Similar dynamics exist in AI-driven decision-making in lending, hiring, and housing, where biased outcome variables, and not merely poorly chosen proxies, reinforce systemic inequalities. By providing empirical evidence that fairness constraints can improve rather than undermine decision-making, this Article advances the conversation on how law and policy should approach AI bias, particularly when algorithmic decisions affect fundamental rights.
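
    A minimal sketch of the abstract’s central claim may help technical readers. The snippet below is not the Article’s analysis: it uses synthetic stand-in data, the open-source fairlearn library, and an assumed bias pattern (rearrests recorded more often for one group at equal underlying risk) chosen to mirror the “biased outcome variable” the Article describes. Scored against the biased labels, the fairness-constrained model looks costly; scored against the construction’s unbiased ground truth, it need not be.

        # Illustrative sketch only: synthetic data, not the Article's COMPAS analysis.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score
        from fairlearn.reductions import ExponentiatedGradient, DemographicParity

        rng = np.random.default_rng(0)
        n = 5000
        group = rng.integers(0, 2, n)          # hypothetical protected attribute (0/1)
        risk = rng.normal(size=n)              # latent risk, group-neutral by design
        y_true = (risk > 0.3).astype(int)      # "actual" recidivism in this toy world
        biased = (group == 1) & (rng.random(n) < 0.25)
        y_obs = np.where(biased, 1, y_true)    # extra recorded rearrests for group 1
        X = np.column_stack([risk, group])

        baseline = LogisticRegression().fit(X, y_obs)
        fair = ExponentiatedGradient(LogisticRegression(),
                                     constraints=DemographicParity())
        fair.fit(X, y_obs, sensitive_features=group)

        for name, model in [("baseline", baseline), ("constrained", fair)]:
            pred = model.predict(X)
            print(name,
                  "| acc vs biased labels:", round(accuracy_score(y_obs, pred), 3),
                  "| acc vs true risk:", round(accuracy_score(y_true, pred), 3))

    The design choice worth noticing is the evaluation target: whether a fairness constraint “costs” accuracy depends on whether the benchmark is the biased recorded outcome or the underlying risk.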

    Vol. 68, No. 01 (January 13, 2025)


    “Change is Inevitable”: How the First Amendment Safety-Valve Theory Can Expand Protections for Student Expression

    This Article challenges the traditional notion that the regulation and protection of student expression in public schools should be based primarily or exclusively on the marketplace theory, which often reinforces the status quo. The safety-valve theory is more appropriate and should be applied, especially in the current political and social climate, to inspire an expansion of student speech and press rights that would support expressive activities seeking to change the public discourse around important issues. Students who can speak freely will be more willing to accept decisions that go against them, and a school environment in which passionate or alienated students can let off steam will be more stable and less likely to erupt into violence. At the heart of this analysis is the critical distinction between positively and adversely disruptive student expression.

    The Case for Contingent Regulatory Sunsets

    Cost-benefit analysis is at the core of regulatory impact analysis for every proposed rule or regulation and is designed to be a structural constraint on the administrative state. The challenge is that ex ante cost-benefit analysis necessarily rests on many assumptions, while much more information is available about a regulation’s impact after it has been implemented. But ex post cost-benefit analysis is ad hoc and infrequent in spite of efforts by numerous presidential administrations to promote regulatory lookbacks. I propose institutionalizing “contingent regulatory sunsets” to ensure that rules and regulations have the positive impact in practice that administrative agencies intended. I show how Congress can consider a spectrum of approaches for independent actors to conduct regulatory lookbacks of economically significant regulations at regular intervals. I explore the merits of centralized legislative branch review (Government Accountability Office), strengthened executive branch review (Office of Information and Regulatory Affairs), review by the agencies themselves, and the creation of a new “Regulatory Lookback” agency to take on this role. While each approach has virtues, I conclude that each agency’s Office of Inspector General (OIG) may be best positioned to build on existing oversight functions to provide periodic review of the impact of regulations. If the OIG’s cost-benefit analysis shows that the regulation’s real-world impact is actually negative, then the agency that issued the rule would face the burden of rescinding it, modifying it, or providing updated justifications and cost-benefit analysis. The goal is not to cripple the workings of the vast administrative state, but rather to provide systematic, internal accountability. The hope is that overly optimistic assumptions about costs and benefits will be tempered by routine ex post scrutiny and the sunlight of empirical reality. I then lay out quantitative and qualitative limiting principles to show how periodic cost-benefit review of economically significant regulations could be economically and politically feasible. I conclude by proposing a pilot study to measure the efficacy of OIG ex post review of regulations, providing evidence to justify expanding this initiative across the executive branch.
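
    To make the proposed trigger concrete, here is a minimal sketch of the contingent-sunset logic. The net-present-value framing and all figures are illustrative assumptions, not numbers from the Article: the burden shifts to the issuing agency only if ex post review finds realized net benefits negative.

        # Hedged sketch of a contingent-sunset trigger; all figures illustrative.
        def npv(net_flows, rate=0.03):
            """Present value of annual (benefits - costs) flows, in $ millions."""
            return sum(f / (1 + rate) ** t for t, f in enumerate(net_flows))

        ex_ante_net = [120 - 100] * 10    # projected: $120M benefits, $100M costs/yr
        realized_net = [85 - 110] * 10    # observed after implementation

        print("ex ante NPV: ", round(npv(ex_ante_net), 1))
        print("ex post NPV: ", round(npv(realized_net), 1))
        if npv(realized_net) < 0:
            print("sunset contingency triggered: the issuing agency must rescind,")
            print("modify, or provide updated justification and cost-benefit analysis")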

    Dark Patterns as Disloyal Design

    Lawmakers have started to regulate “dark patterns,” understood to be design practices meant to influence technology users’ decisions through manipulative or deceptive means. Most agree that dark patterns are undesirable, but open questions remain as to which design choices warrant scrutiny, let alone how best to regulate them. In this Article, we propose adapting the concept of dark patterns to better fit legal frameworks. Critics allege that the legal conceptualizations of dark patterns are overbroad, impractical, and counterproductive. We argue that law and policy conceptualizations of dark patterns suffer from three deficiencies: First, dark patterns lack a clear value anchor for cases to build upon. Second, legal definitions of dark patterns overfocus on individuals and atomistic choices, ignoring harms that are de minimis individually but significant in the aggregate, as well as the societal implications of manipulation at scale. Finally, the law has struggled to articulate workable legal thresholds for wrongful dark patterns. To better regulate the designs called dark patterns, lawmakers need a better conceptual framing that bridges the gap between design theory and the law’s need for clarity, flexibility, and compatibility with existing frameworks. We argue that wrongful self-dealing is at the heart of what most consider to be “dark” about certain design patterns. Taking advantage of design affordances to the detriment of a vulnerable party is disloyal. To that end, we propose disloyal design as a regulatory framing for dark patterns. By drawing from established frameworks that prohibit wrongful self-dealing, we hope to provide more clarity and consistency for regulators, industry, and users. Disloyal design will fit better into legal frameworks and better rally public support for ensuring that the most popular tools in society are built to prioritize human values.

    Guggenheim, MacArthur Fellow to address Class of 2025

    Reginald Dwayne Betts, an internationally recognized poet, legal scholar, educator, and prison reform advocate, will serve as the 2025 Commencement speaker for the graduating classes of the Indiana University Maurer School of Law. The Law School will recognize its graduating students on Saturday, May 10, from 4 to 6 p.m. in the Indiana University Auditorium. “Dwayne Betts has a remarkable story that resonates with audiences around the world,” said Dean Christiana Ochoa. “His journey from incarceration to inspiration is an example of how we can all make positive changes in our lives and make an impact on others. I’m grateful to Aristotle Jones for introducing us to Dwayne and look forward to recognizing Aristotle and the rest of our graduates in May.”

    The Chemical Straitjacket: Institutional Over-Use of Psychotropic Drugs on Children in Lieu of Therapeutic Community Mental Health Services

    This Article argues that the abysmal state of children’s mental health in America is in part due to an overreliance on and overprescription of psychotropic drugs inside psychiatric residential institutions in lieu of community-based mental health services. This overreliance on residential institutions and psychotropic drugs has allowed a new form of chemical restraint to flourish—the chemical straitjacket. This Article uses the medication lists of twelve children in seven different North Carolina psychiatric residential treatment facilities to demonstrate how the chemical straitjacket operates: drugs are prescribed that are not approved for pediatric populations, counter to evidence-based practices for particular diagnoses, and in combination with a multitude of other drugs for non-therapeutic purposes. This Article then discusses how these chemical straitjackets result from state oversight failures and continued state commitments to institutionalization. Ultimately, this Article argues that the chemical straitjacket and its cause, institutionalization, are a breach of children’s rights under the Americans with Disabilities Act, and it proposes policy and legal solutions to help curb these concerning practices.

    Can AI, as Such, Invade Your Privacy? An Experimental Study of the Social Element of Surveillance

    The increasing use of AI rather than human surveillance puts pressure on two long-standing cultural and (sometimes) legal distinctions: between human and machine observers, and between content and metadata. Machines do more and more of the watching as technology advances, rendering AI a plausible replacement for humans in surveillance tasks. Further, machines can commit to surveilling only certain forms of information in a way that humans cannot, rendering the distinction between content and metadata increasingly relevant to crafting privacy law and policy as well. Yet despite the increasing importance of these distinctions, their legal significance remains unsettled in four key domains of privacy law: Fourth Amendment law, wiretap law, consumer privacy law, and the privacy torts. Given the failure of privacy law to settle conclusively the import of the human/AI and content/metadata distinctions, this Article proposes looking to empirical measures of the judgments of ordinary people to better understand whether and how such distinctions should be made if law is to be responsive to reasonable expectations of privacy. There is incomplete empirical evidence as to whether the AI/human surveillance and content/metadata distinctions hold weight for ordinary people, and if so, how. To address this empirical gap, this Article presents the results of a vignette study carried out on a large (N = 1000), demographically representative sample of Americans to elicit their judgments of a state surveillance program that collected either content or metadata and in which the potential surveillant could be either human or AI. Unsurprisingly, AI surveillance was judged to be more privacy preserving than human surveillance, empirically buttressing the importance of a human/AI distinction. However, the perceived privacy advantage of an AI surveillant was not a dispositive factor in stated preferences regarding technology use. Accuracy—a factor rarely discussed in defenses of state surveillance—was more influential than privacy in determining participants’ preferences for a human or AI surveillant. Further, the scope of information surveilled (content or metadata) strongly influenced accuracy judgments in comparing human and AI systems and shifted surveillance policy preferences between human and AI surveillants. The empirical data therefore show that the distinction between content and metadata is important to ordinary people, and that this distinction can lead to unexpected outcomes, such as a preference for human rather than AI surveillance when the contents of communications are collected.
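
    For readers unfamiliar with factorial vignette designs, the sketch below shows how a 2x2 study of this shape (surveillant: human vs. AI; scope: content vs. metadata) is conventionally analyzed. The data are synthetic, and the rating scale, effect sizes, and model are assumptions for illustration; the study’s actual measures and estimators may differ.

        # Illustrative analysis of a synthetic 2x2 vignette experiment.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 1000                          # matches the reported sample size
        df = pd.DataFrame({
            "surveillant": rng.choice(["human", "ai"], n),
            "scope": rng.choice(["content", "metadata"], n),
        })
        # Assumed pattern echoing the abstract: AI and metadata-only collection
        # are each rated more privacy-preserving (1-7 scale).
        df["privacy_rating"] = (
            3.5
            + 0.6 * (df["surveillant"] == "ai")
            + 0.8 * (df["scope"] == "metadata")
            + rng.normal(0, 1, n)
        ).clip(1, 7)

        # OLS with an interaction term recovers each factor's main effect and
        # tests whether the human/AI gap depends on what is collected.
        model = smf.ols("privacy_rating ~ C(surveillant) * C(scope)", data=df).fit()
        print(model.summary().tables[1])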

    Prescribing a Balance: Sustaining Environmental Health with Pharmaceutical Interest in Puerto Rico

    Puerto Rico, often referred to as the “Medicine Cabinet of the U.S.A.,” is a hub for pharmaceutical manufacturing, contributing significantly to the American medical supply chain and to Puerto Rico’s economy. However, decades of industrial activity, compounded by climate events like Hurricane Maria, have led to severe environmental damage, particularly through groundwater contamination and damaged Superfund sites. This Note examines the historical intersection of economic incentives and environmental neglect in Puerto Rico, focusing on the pharmaceutical industry’s impact. By critically analyzing the Superfund program and proposing reforms, this Note advocates for a balanced approach: introducing proactive environmental protections and financial incentives for compliant pharmaceutical manufacturers. Such measures aim to sustain pharmaceutical investment while safeguarding Puerto Rico’s fragile environment, public health, and long-term economic resilience.

    12,257 full texts
    13,187 metadata records
    Updated in the last 30 days.
    Indiana University Bloomington Maurer School of Law is based in the United States.