
    Discrimination in the Age of Algorithms


    Governing by Algorithm? No Noise and (Potentially) Less Bias

    As intuitive statisticians, human beings suffer from identifiable biases—cognitive and otherwise. Human beings can also be “noisy” in the sense that their judgments show unwanted variability. As a result, public institutions, including those that consist of administrative prosecutors and adjudicators, can be biased, noisy, or both. Both bias and noise produce errors. Algorithms eliminate noise, and that is important; to the extent that they do so, they prevent unequal treatment and reduce errors. In addition, algorithms do not use mental shortcuts; they rely on statistical predictors, which means that they can counteract or even eliminate cognitive biases. At the same time, the use of algorithms by administrative agencies raises many legitimate questions and doubts. Among other things, algorithms can encode or perpetuate discrimination, perhaps because their inputs are based on discrimination, or perhaps because what they are asked to predict is infected by discrimination. But if the goal is to eliminate discrimination, properly constructed algorithms nonetheless have a great deal of promise for administrative agencies.

    First Steps Towards an Ethics of Robots and Artificial Intelligence

    This article offers an overview of the main first-order ethical questions raised by robots and Artificial Intelligence (RAIs) under five broad rubrics: functionality, inherent significance, rights and responsibilities, side-effects, and threats. The first letter of each rubric taken together conveniently generates the acronym FIRST. Special attention is given to the rubrics of functionality and inherent significance, given the centrality of the former and the tendency to neglect the latter in virtue of its somewhat nebulous and contested character. In addition to exploring some illustrative issues arising under each rubric, the article also emphasizes a number of more general themes. These include: the multiplicity of interacting levels on which ethical questions about RAIs arise, the need to recognise that RAIs potentially implicate the full gamut of human values (rather than exclusively or primarily some readily identifiable sub-set of ethical or legal principles), and the need for practically salient ethical reflection on RAIs to be informed by a realistic appreciation of their existing and foreseeable capacities.

    FAIR2: A framework for addressing discrimination bias in social data science

    Building upon the FAIR principles of (meta)data (Findable, Accessible, Interoperable and Reusable) and drawing from research in the social, health, and data sciences, we propose a framework - FAIR2 (Frame, Articulate, Identify, Report) - for identifying and addressing discrimination bias in social data science. We illustrate how FAIR2 enriches data science with experiential knowledge, clarifies assumptions about discrimination with causal graphs, and systematically analyzes sources of bias in the data, leading to a more ethical use of data and analytics for the public interest. FAIR2 can be applied in the classroom to prepare a new and diverse generation of data scientists. In this era of big data and advanced analytics, we argue that without an explicit framework to identify and address discrimination bias, data science will not realize its potential of advancing social justice.

    This work was generously funded by grant #015865 from the Public Interest Technology University Network - New America Foundation.

    Richter, F.; Nelson, E.; Coury, N.; Bruckman, L.; Knighton, S. (2023). FAIR2: A framework for addressing discrimination bias in social data science. Editorial Universitat Politècnica de València. 327-335. https://doi.org/10.4995/CARMA2023.2023.1640032733

    Algorithmic Discrimination in Europe: Challenges and Opportunities for Gender Equality and Non-Discrimination Law

    This report investigates how algorithmic discrimination challenges the set of legal guarantees put in place in Europe to combat discrimination and ensure equal treatment. More specifically, it examines whether and how the current gender equality and non-discrimination legislative framework in place in the EU can adequately capture and redress algorithmic discrimination. It explores the gaps and weaknesses that emerge at both the EU and national levels from the interaction between, on the one hand, the specific types of discrimination that arise when algorithms are used in decision-making systems and, on the other, the particular material and personal scope of the existing legislative framework. This report also maps out the existing legal solutions, accompanying policy measures and good practices to address and redress algorithmic discrimination at both EU and national levels. Moreover, this report proposes its own integrated set of legal, knowledge-based and technological solutions to the problem of algorithmic discrimination.

    Beyond More Accurate Algorithms: Takeaways from McCleskey Revisited

    A Review of McCleskey v. Kemp. By Mario Barnes, in Critical Race Judgments: Rewritten U.S. Court Opinions on Race and the Law 557, 581. Edited by Bennett Capers, Devon W. Carbado, R.A. Lenhardt and Angela Onwuachi-Willig.

    Blessing or Curse: Impact of Algorithmic Trading Bots Invasion of the Cryptocurrency Market

    In this paper, we investigate the impact of the absence of trading bots on human traders’ investment returns. Using a comprehensive data set obtained from a large cryptocurrency exchange platform, we find that trading bots play a market-making role and that they boost human traders’ investment returns. For our empirical design, we use a natural experiment that transforms a heterogeneous market co-created by trading bots and human traders into a human-only financial market. This paper extends the traditional analysis of investment decisions under uncertainty by considering human attitudes toward algorithms, and it contributes to the work of policymakers and regulators by providing empirical evidence on trading bots.