
    Is Algorithmic Affirmative Action Legal?

    This Article is the first to comprehensively explore whether algorithmic affirmative action is lawful. It concludes that both statutory and constitutional antidiscrimination law leave room for race-aware affirmative action in the design of fair algorithms. Along the way, the Article recommends some clarifications of current doctrine and proposes the pursuit of formally race-neutral methods to achieve the admittedly race-conscious goals of algorithmic affirmative action. The Article proceeds as follows. Part I introduces algorithmic affirmative action. It begins with a brief review of the bias problem in machine learning and then identifies multiple design options for algorithmic fairness. These designs are presented at a theoretical level, rather than in formal mathematical detail. Part I also highlights some difficult truths that stakeholders, jurists, and legal scholars must understand about the accuracy and fairness trade-offs inherent in fairness solutions. Part II turns to the legality of algorithmic affirmative action, beginning with the statutory challenge under Title VII of the Civil Rights Act. Part II argues that voluntary algorithmic affirmative action ought to survive a disparate treatment challenge under Ricci and under the anti-race-norming provision of Title VII. Finally, Part III considers the constitutional challenge to algorithmic affirmative action by state actors. It concludes that at least some forms of algorithmic affirmative action, to the extent they are racial classifications at all, ought to survive strict scrutiny as narrowly tailored solutions designed to mitigate the effects of past discrimination.
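
    A minimal sketch of the accuracy and fairness trade-off that Part I highlights (the synthetic data, the single score threshold, and the demographic-parity style constraint below are illustrative assumptions, not the Article's): forcing both groups to be selected at the same rate can cost overall accuracy relative to an unconstrained threshold.

```python
# Illustrative sketch: parity-constrained selection vs. an unconstrained threshold.
# All data and the 0.5 decision threshold are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                      # two demographic groups
# Assume group 1's observed scores are shifted down by bias in the data.
score = rng.normal(np.where(group == 0, 0.6, 0.4), 0.2, n)
label = (score + rng.normal(0, 0.1, n) > 0.5).astype(int)   # noisy ground truth

unconstrained = (score > 0.5).astype(int)          # one threshold for everyone

# Parity-constrained rule: per-group thresholds chosen so that both groups are
# selected at the same overall rate.
target_rate = unconstrained.mean()
fair = np.zeros(n, dtype=int)
for g in (0, 1):
    m = group == g
    cutoff = np.quantile(score[m], 1 - target_rate)
    fair[m] = (score[m] > cutoff).astype(int)

print("accuracy, unconstrained      :", round((unconstrained == label).mean(), 3))
print("accuracy, parity-constrained :", round((fair == label).mean(), 3))
print("selection rate by group (parity rule):",
      [round(fair[group == g].mean(), 3) for g in (0, 1)])
```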

    Big Data Affirmative Action

    As a vast and ever-growing body of social-scientific research shows, discrimination remains pervasive in the United States. In education, work, consumer markets, healthcare, criminal justice, and more, Black people fare worse than whites, women worse than men, and so on. Moreover, the evidence now convincingly demonstrates that this inequality is driven by discrimination. Yet solutions are scarce. The best empirical studies find that popular interventions—like diversity seminars and antibias trainings—have little or no effect. And more muscular solutions—like hiring quotas or school busing—are now regularly struck down as illegal. Indeed, in the last thirty years, the Supreme Court has invalidated every such ambitious affirmative action plan that it has reviewed. This Article proposes a novel solution: Big Data Affirmative Action. Like old-fashioned affirmative action, Big Data Affirmative Action would award benefits to individuals because of their membership in protected groups. Since Black defendants are discriminatorily incarcerated for longer than whites, Big Data Affirmative Action would intervene to reduce their sentences. Since women are paid less than men, it would step in to raise their salaries. But unlike old-fashioned affirmative action, Big Data Affirmative Action would be automated, algorithmic, and precise. Circa 2021, data scientists are already analyzing rich datasets to identify and quantify discriminatory harm. Armed with such quantitative measures, Big Data Affirmative Action algorithms would intervene to automatically adjust flawed human decisions—correcting discriminatory harm but going no further. Big Data Affirmative Action has two advantages over the alternatives. First, it would actually work. Unlike, say, antibias trainings, Big Data Affirmative Action would operate directly on unfair outcomes, immediately remedying discriminatory harm. Second, Big Data Affirmative Action would be legal, notwithstanding the Supreme Court’s recent case law. As argued here, the Court has not, in fact, recently turned against affirmative action. Rather, it has consistently demanded that affirmative action policies both stand on solid empirical ground and be well tailored to remedying only particularized instances of actual discrimination. The policies that the Court recently rejected have failed to do either. Big Data Affirmative Action can easily do both.
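
    The following hypothetical sketch illustrates the kind of adjustment the Article envisions: estimate the group-level disparity that remains after controlling for legitimate decision factors, then correct each affected decision by exactly that amount and no more. The linear model, the synthetic sentencing data, and all variable names are assumptions for illustration, not the Article's method.

```python
# Hypothetical sketch: quantify the disparity left after controlling for
# legitimate case features, then remove exactly that amount from each affected
# decision. Synthetic data and a simple OLS estimate are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
severity = rng.uniform(0, 10, n)               # legitimate sentencing factor
prior_record = rng.poisson(1.5, n)             # legitimate sentencing factor
black = rng.integers(0, 2, n)                  # protected-group indicator
# Simulated human sentences: legitimate factors plus a discriminatory add-on.
sentence = 6 + 2.0 * severity + 1.5 * prior_record + 4.0 * black + rng.normal(0, 2, n)

# Estimate the disparity attributable to group membership via least squares.
X = np.column_stack([np.ones(n), severity, prior_record, black])
coef, *_ = np.linalg.lstsq(X, sentence, rcond=None)
estimated_gap = coef[3]                        # months of estimated discriminatory harm

# Intervention: remove the estimated harm, and only the harm.
adjusted = sentence - estimated_gap * black

print(f"estimated discriminatory gap: {estimated_gap:.2f} months")
print(f"mean sentence (Black) before: {sentence[black == 1].mean():.2f}")
print(f"mean sentence (Black) after:  {adjusted[black == 1].mean():.2f}")
```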

    Race-Aware Algorithms: Fairness, Nondiscrimination and Affirmative Action

    The growing use of predictive algorithms is raising concerns that they may discriminate, but mitigating or removing bias requires designers to be aware of protected characteristics and take them into account. If they do so, however, will those efforts be considered a form of discrimination? Put concretely, if model-builders take race into account to prevent racial bias against Black people, have they then engaged in discrimination against white people? Some scholars assume so and seek to justify those practices under existing affirmative action doctrine. By invoking the Court’s affirmative action jurisprudence, however, they implicitly assume that these practices entail discrimination against white people and require special justification. This Article argues that these scholars have started the analysis in the wrong place. Rather than starting from that assumption, we should first ask whether particular race-aware strategies constitute discrimination at all. Despite rhetoric about colorblindness, some forms of race consciousness are widely accepted as lawful. Because creating an algorithm is a complex, multi-step process involving many choices, tradeoffs and judgment calls, there are many different ways a designer might take race into account, and not all of these strategies entail discrimination against white people. Only if a particular strategy is found to discriminate is it necessary to scrutinize it under affirmative action doctrine. Framing the analysis in this way matters, because affirmative action doctrine imposes a heavy legal burden of justification. In addition, treating all race-aware algorithms as a form of discrimination reinforces the false notion that leveling the playing field for disadvantaged groups somehow disrupts the entitlements of a previously advantaged group. It also mistakenly suggests that prior to considering race, algorithms are neutral processes that uncover some objective truth about merit or desert, rather than properly understanding them as human constructs that reflect the choices of their creators.
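
    As a hypothetical illustration of one such race-aware design choice, the sketch below uses race only during training, to reweight examples so that the smaller group is not swamped when the model is fit, while the deployed model never receives race as an input. The data, the model, and the reweighting scheme are assumptions, not drawn from the Article.

```python
# Hypothetical sketch: race informs a training-time reweighting choice but is
# never a feature of the deployed model. Data and model are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 8_000
race = (rng.random(n) < 0.2).astype(int)           # minority group is ~20% of the data
x1 = rng.normal(0, 1, n)
x2 = rng.normal(0, 1, n) - 0.3 * race              # feature correlated with race
y = (x1 + 0.5 * x2 + rng.normal(0, 1, n) > 0).astype(int)

X = np.column_stack([x1, x2])                      # race itself is never a feature

# Race-aware training step: weight each example inversely to its group's share
# so the minority group contributes comparably to the fit.
share = np.where(race == 1, race.mean(), 1 - race.mean())
weights = 1.0 / share

model = LogisticRegression().fit(X, y, sample_weight=weights)
pred = model.predict(X)
for g, name in [(0, "majority"), (1, "minority")]:
    acc = (pred[race == g] == y[race == g]).mean()
    print(f"accuracy, {name} group: {acc:.3f}")
```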

    Effective affirmative action in school choice

    The prevalent affirmative action policy in school choice limits the number of admitted majority students to give minority students higher chances to attend their desired schools. There have been numerous efforts to reconcile affirmative action policies with celebrated matching mechanisms such as the deferred acceptance and top trading cycles algorithms. Nevertheless, it has been shown theoretically that under these algorithms, the policy based on majority quotas may be detrimental to minorities. Using simulations, we find that this is a common phenomenon rather than a peculiarity. To circumvent the inefficiency caused by majority quotas, we offer a different interpretation of the affirmative action policies based on minority reserves. With minority reserves, schools give higher priority to minority students up to the point that the minorities fill the reserves. We compare the welfare effects of these policies. The deferred acceptance algorithm with minority reserves Pareto dominates the one with majority quotas. Our simulations, which allow for correlations between student preferences and school priorities, indicate that minorities are, on average, better off with minority reserves while adverse effects on majorities are mitigated. © 2013 Isa E. Hafalir, M. Bumin Yenmez, and Muhammed A. Yildirim.
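
    A minimal sketch of student-proposing deferred acceptance with minority reserves as described in the abstract: each school first fills its reserved seats with minority applicants in priority order, then fills the rest of its capacity from all remaining applicants. The tiny instance at the bottom is illustrative and not taken from the paper.

```python
# Student-proposing deferred acceptance with minority reserves (illustrative sketch).
from typing import Dict, List, Set

def choose(applicants: Set[str], priority: List[str], capacity: int,
           reserve: int, minority: Set[str]) -> Set[str]:
    """School's choice rule under minority reserves."""
    ranked = [s for s in priority if s in applicants]
    chosen: List[str] = []
    # Fill reserved seats with the highest-priority minority applicants first.
    for s in ranked:
        if len(chosen) >= reserve:
            break
        if s in minority:
            chosen.append(s)
    # Fill the remaining capacity from everyone not yet chosen, by priority.
    for s in ranked:
        if len(chosen) >= capacity:
            break
        if s not in chosen:
            chosen.append(s)
    return set(chosen)

def deferred_acceptance(prefs: Dict[str, List[str]], priority: Dict[str, List[str]],
                        capacity: Dict[str, int], reserve: Dict[str, int],
                        minority: Set[str]) -> Dict[str, str]:
    held: Dict[str, Set[str]] = {c: set() for c in capacity}
    nxt = {s: 0 for s in prefs}            # index of the next school each student proposes to
    free = set(prefs)
    while free:
        s = free.pop()
        if nxt[s] >= len(prefs[s]):
            continue                       # student has exhausted their list; stays unmatched
        c = prefs[s][nxt[s]]
        nxt[s] += 1
        pool = held[c] | {s}
        kept = choose(pool, priority[c], capacity[c], reserve[c], minority)
        held[c] = kept
        free |= pool - kept                # rejected students propose again later
    return {s: c for c, students in held.items() for s in students}

# Tiny illustrative instance: school c1 has two seats, one reserved for minorities.
prefs = {"m1": ["c1", "c2"], "M1": ["c1", "c2"], "M2": ["c1", "c2"]}
priority = {"c1": ["M1", "M2", "m1"], "c2": ["M1", "M2", "m1"]}
matching = deferred_acceptance(prefs, priority,
                               capacity={"c1": 2, "c2": 1}, reserve={"c1": 1, "c2": 0},
                               minority={"m1"})
print(matching)   # m1 takes the reserved seat at c1 despite having the lowest priority
```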

    Matching Code and Law: Achieving Algorithmic Fairness with Optimal Transport

    Increasingly, discrimination by algorithms is perceived as a societal and legal problem. As a response, a number of criteria for implementing algorithmic fairness in machine learning have been developed in the literature. This paper proposes the Continuous Fairness Algorithm (CFAθ), which enables a continuous interpolation between different fairness definitions. More specifically, we make three main contributions to the existing literature. First, our approach allows the decision maker to continuously vary between specific concepts of individual and group fairness. As a consequence, the algorithm enables the decision maker to adopt intermediate "worldviews" on the degree of discrimination encoded in algorithmic processes, adding nuance to the extreme cases of "we're all equal" (WAE) and "what you see is what you get" (WYSIWYG) proposed so far in the literature. Second, we use optimal transport theory, and specifically the concept of the barycenter, to maximize decision maker utility under the chosen fairness constraints. Third, the algorithm is able to handle cases of intersectionality, i.e., of multi-dimensional discrimination of certain groups on grounds of several criteria. We discuss three main examples (credit applications; college admissions; insurance contracts) and map out the legal and policy implications of our approach. The explicit formalization of the trade-off between individual and group fairness allows this post-processing approach to be tailored to different situational contexts in which one or the other fairness criterion may take precedence. Finally, we evaluate our model experimentally.
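
    In one dimension, the Wasserstein barycenter of the group score distributions has a quantile function equal to the average of the groups' quantile functions, which suggests the following hypothetical sketch of the θ-interpolation idea: θ = 0 leaves scores untouched (WYSIWYG), θ = 1 maps every group onto the common barycenter (WAE), and intermediate values trade off between the two. The data and implementation details are assumptions, not the paper's code.

```python
# Hypothetical sketch of theta-interpolation toward a 1-D score barycenter.
import numpy as np

def repair(scores: np.ndarray, groups: np.ndarray, theta: float) -> np.ndarray:
    """Partially move each group's scores toward the common barycenter distribution."""
    qs = np.linspace(0, 1, 101)
    labels = np.unique(groups)
    # Barycenter quantile function: average of the per-group quantile functions.
    bary_q = np.mean([np.quantile(scores[groups == g], qs) for g in labels], axis=0)
    repaired = scores.astype(float).copy()
    for g in labels:
        mask = groups == g
        # Each individual's rank (quantile) within their own group...
        ranks = scores[mask].argsort().argsort() / (mask.sum() - 1)
        # ...mapped to the barycenter's value at that rank.
        target = np.interp(ranks, qs, bary_q)
        repaired[mask] = (1 - theta) * scores[mask] + theta * target
    return repaired

rng = np.random.default_rng(3)
g = rng.integers(0, 2, 2_000)
raw = rng.normal(np.where(g == 0, 55, 45), 10, 2_000)    # group 0 scores higher on average
for theta in (0.0, 0.5, 1.0):
    rep = repair(raw, g, theta)
    gap = rep[g == 0].mean() - rep[g == 1].mean()
    print(f"theta={theta:.1f}  mean score gap between groups: {gap:5.2f}")
```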

    Punishing Artificial Intelligence: Legal Fiction or Science Fiction

    Whether causing flash crashes in financial markets, purchasing illegal drugs, or running over pedestrians, AI is increasingly engaging in activity that would be criminal for a natural person, or even an artificial person like a corporation. We argue that criminal law falls short in cases where an AI causes certain types of harm and there are no practically or legally identifiable upstream criminal actors. This Article explores potential solutions to this problem, focusing on holding AI directly criminally liable where it is acting autonomously and irreducibly. Conventional wisdom holds that punishing AI is incongruous with basic criminal law principles such as the capacity for culpability and the requirement of a guilty mind. Drawing on analogies to corporate and strict criminal liability, as well as familiar imputation principles, we show how a coherent theoretical case can be constructed for AI punishment. AI punishment could result in general deterrence and expressive benefits, and it need not run afoul of negative limitations such as punishing in excess of culpability. Ultimately, however, punishing AI is not justified, because it might entail significant costs and it would certainly require radical legal changes. Modest changes to existing criminal laws that target persons, together with potentially expanded civil liability, are a better solution to AI crime.

    How Do Fairness Definitions Fare? Examining Public Attitudes Towards Algorithmic Definitions of Fairness

    What is the best way to define algorithmic fairness? While many definitions of fairness have been proposed in the computer science literature, there is no clear agreement over a particular definition. In this work, we investigate ordinary people's perceptions of three of these fairness definitions. Across two online experiments, we test which definitions people perceive to be the fairest in the context of loan decisions, and whether fairness perceptions change with the addition of sensitive information (i.e., race of the loan applicants). Overall, one definition (calibrated fairness) tends to be preferred over the others, and the results also provide support for the principle of affirmative action.
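
    A minimal sketch (not the study's materials) of how fairness definitions of the kind compared here can be computed for a synthetic loan-decision setting: approval rates (demographic parity), true positive rates (equal opportunity), and repayment rates among approved applicants (a calibration-style check).

```python
# Illustrative sketch: three group-fairness checks on synthetic loan decisions.
import numpy as np

rng = np.random.default_rng(4)
n = 20_000
race = rng.integers(0, 2, n)
repaid = rng.random(n) < np.where(race == 0, 0.75, 0.65)     # ground-truth repayment
score = 0.5 * repaid + rng.random(n) * 0.5                   # lender's risk score
approved = score > 0.55

def by_group(metric):
    """Evaluate a metric separately for each racial group."""
    return [round(metric(race == g), 3) for g in (0, 1)]

print("approval rate (demographic parity) :", by_group(lambda m: approved[m].mean()))
print("true positive rate (equal opp.)    :", by_group(lambda m: approved[m & repaid].mean()))
print("repayment | approved (calibration) :", by_group(lambda m: repaid[m & approved].mean()))
```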