
    Privacy Rights, Public Policy, and the Employment Relationship


    Data-Driven Discrimination at Work

    A data revolution is transforming the workplace. Employers are increasingly relying on algorithms to decide who gets interviewed, hired, or promoted. Although data algorithms can help to avoid biased human decision-making, they also risk introducing new sources of bias. Algorithms built on inaccurate, biased, or unrepresentative data can produce outcomes biased along lines of race, sex, or other protected characteristics. Data mining techniques may cause employment decisions to be based on correlations rather than causal relationships; they may obscure the basis on which employment decisions are made; and they may further exacerbate inequality because error detection is limited and feedback effects compound the bias. Given these risks, I argue for a legal response to classification bias — a term that describes the use of classification schemes, like data algorithms, to sort or score workers in ways that worsen inequality or disadvantage along the lines of race, sex, or other protected characteristics. Addressing classification bias requires fundamentally rethinking antidiscrimination doctrine. When decision-making algorithms produce biased outcomes, they may seem to resemble familiar disparate impact cases; however, mechanical application of existing doctrine will fail to address the real sources of bias when discrimination is data-driven. A close reading of the statutory text suggests that Title VII directly prohibits classification bias. Framing the problem in terms of classification bias leads to some quite different conclusions about how to apply the antidiscrimination norm to algorithms, suggesting both the possibilities and limits of Title VII’s liability-focused model.

    Introduction-The Family and Medical Leave Act of 1993: Ten Years of Experience

    Which of the FMLA’s articulated purposes one views as central will critically affect one’s assessment of the statute. The contributions to this Symposium differ from one another not only in the methodologies they employ to assess the impact of the FMLA, but also in their assumptions about the primary goal of the FMLA. From these varying perspectives, they offer differing responses to such questions as: Has the law met expectations? In what ways has it fallen short and why? And what further needs to be done to meet the goals the FMLA was intended to advance? Law professors Joanna Grossman and Michael Selmi each evaluate the FMLA in terms of its contribution—or lack thereof—to gender equality in the workplace. From this perspective, the law appears mostly a symbolic gesture, one short on substance and of little practical utility in promoting actual equality in the workplace. Prior to passage of the FMLA, a patchwork of state laws and voluntary employer initiatives determined the availability and terms of leave for the individual worker. In this world, women typically took leave when they had a child; men rarely did so. Although the FMLA formally includes men in its leave protections, the basic pattern of leave-taking for family care has hardly changed in ten years. To the extent, then, that the FMLA was intended to combat gender stereotypes and reduce discrimination against women, it has not accomplished these goals.

    Race-Aware Algorithms: Fairness, Nondiscrimination and Affirmative Action

    The growing use of predictive algorithms is increasing concerns that they may discriminate, but mitigating or removing bias requires designers to be aware of protected characteristics and take them into account. If they do so, however, will those efforts be considered a form of discrimination? Put concretely, if model-builders take race into account to prevent racial bias against Black people, have they then engaged in discrimination against white people? Some scholars assume so and seek to justify those practices under existing affirmative action doctrine. By invoking the Court’s affirmative action jurisprudence, however, they implicitly assume that these practices entail discrimination against white people and require special justification. This Article argues that these scholars have started the analysis in the wrong place. Rather than assuming, we should first ask whether particular race-aware strategies constitute discrimination at all. Despite rhetoric about colorblindness, some forms of race consciousness are widely accepted as lawful. Because creating an algorithm is a complex, multi-step process involving many choices, tradeoffs, and judgment calls, there are many different ways a designer might take race into account, and not all of these strategies entail discrimination against white people. Only if a particular strategy is found to discriminate is it necessary to scrutinize it under affirmative action doctrine. Framing the analysis in this way matters, because affirmative action doctrine imposes a heavy legal burden of justification. In addition, treating all race-aware algorithms as a form of discrimination reinforces the false notion that leveling the playing field for disadvantaged groups somehow disrupts the entitlements of a previously advantaged group. It also mistakenly suggests that, prior to considering race, algorithms are neutral processes that uncover some objective truth about merit or desert, rather than properly understanding them as human constructs that reflect the choices of their creators.

    Beyond Principal-Agent Theories: Law and the Judicial Hierarchy


    Cynicism, Reconsidered

    Occasionally I encounter among students the suspicion that law is nothing but politics. In the field in which I teach, employment law, this attitude amounts to the cynical belief that if judges (or at least those judges hostile to workers) can possibly find a way for the employee to lose and the employer to win, they will do so. While I must admit my own misgivings about the extent to which doctrine and precedent actually decide cases, I push these students to try to understand case outcomes as something other than pure politics. Believing that it is important for law students, as future practitioners, to learn the law as articulated by courts and legislatures, and to master the style of argument they employ, I point out underlying doctrinal structures and highlight the ways in which judicial decision-making, though perhaps not fully determined, is at least constrained. Sometimes, however, it's hard to argue with the cynics. Let me explain.

    Market Norms and Constitutional Values in the Government Workplace



    Electronic Privacy and Employee Speech

    The boundary between work and private life is blurring as a result of changes in the organization of work and advances in technology. Current privacy law is ill-equipped to address these changes, and as a result, employees' privacy in their electronic communications is only weakly protected from employer scrutiny. At the same time, the law increasingly protects certain socially valued forms of employee speech. In particular, collective speech, speech that enforces workplace regulations, and speech that deters or reports employer wrongdoing are explicitly protected by law from employer reprisals. These two developments—weak protection of employee privacy and increased protection for some socially valued forms of employee speech—are at odds, because privacy and speech are closely connected. As privacy scholars have emphasized, protecting privacy promotes speech values by granting individuals space to explore and test new ideas, and to associate with like-minded others, activities that are often important precursors to public speech. Similarly, in the workplace context, some measure of privacy to explore ideas and communicate with others may be necessary to ensure that employees actually speak out in socially valued ways. Ironically, then, the law is simultaneously expecting more from employee speech and protecting employee privacy less, even though the latter may be necessary to produce the former.