    The Invisible Power of Fairness. How Machine Learning Shapes Democracy

    Many machine learning systems make extensive use of large amounts of data regarding human behaviors. Several researchers have found various discriminatory practices related to the use of human-related machine learning systems, for example in the fields of criminal justice, credit scoring, and advertising. Fair machine learning is therefore emerging as a new field of study to mitigate biases that are inadvertently incorporated into algorithms. Data scientists and computer engineers are making various efforts to provide definitions of fairness. In this paper, we provide an overview of the most widespread definitions of fairness in the field of machine learning, arguing that the ideas underlying each formalization are closely related to different ideas of justice and to different interpretations of democracy embedded in our culture. This work analyzes the definitions of fairness that have been proposed to date, interprets their underlying criteria, and relates them to different ideas of democracy.
    Comment: 12 pages, 1 figure, preprint version, submitted to the 32nd Canadian Conference on Artificial Intelligence, Kingston, Ontario, May 28 to May 31, 2019.
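
    As a concrete illustration of two of the widely cited formalizations this abstract surveys, the minimal sketch below computes demographic parity and equal opportunity gaps. The definitions are standard (equal opportunity follows Hardt et al., 2016); the function names and toy data are illustrative and not taken from the paper.

    ```python
    # Minimal sketch (not from the paper) of two common fairness criteria:
    # demographic parity and equal opportunity. Names and data are illustrative.
    import numpy as np

    def demographic_parity_gap(y_pred, group):
        """|P(Yhat=1 | A=0) - P(Yhat=1 | A=1)|: positive-prediction rates
        should not depend on the protected attribute A."""
        rate0 = y_pred[group == 0].mean()
        rate1 = y_pred[group == 1].mean()
        return abs(rate0 - rate1)

    def equal_opportunity_gap(y_true, y_pred, group):
        """|P(Yhat=1 | A=0, Y=1) - P(Yhat=1 | A=1, Y=1)|: true-positive
        rates should be equal across groups (Hardt et al., 2016)."""
        tpr0 = y_pred[(group == 0) & (y_true == 1)].mean()
        tpr1 = y_pred[(group == 1) & (y_true == 1)].mean()
        return abs(tpr0 - tpr1)

    # Toy example: binary predictions for applicants from two groups.
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
    y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

    print(demographic_parity_gap(y_pred, group))         # 0.25
    print(equal_opportunity_gap(y_true, y_pred, group))  # ~0.33
    ```

    Note that the two gaps need not agree: a classifier can satisfy one criterion while violating the other, which is one reason the choice among formalizations carries normative weight.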

    Gendered AI: German news media discourse on the future of work

    In recent years, there has been a growing public discourse regarding the influence AI will have on the future of work. Simultaneously, considerable critical attention has been given to the implications of AI for gender equality. Far from making precise predictions about the future, this discourse demonstrates that new technologies are occasions for renegotiating the relation of gender and work. This paper examines how gender is addressed in news media discourse on AI and the future of work, focusing on Germany. We approach this question from the perspective of feminist technology studies and discourse analysis, exploring a corpus of 178 articles from 2015 to 2021 from German newspapers and newsmagazines. The findings indicate that critical AI and gender knowledge circulates in public discourse in the form of specific discursive frames, thematizing algorithmic bias, automatization and enhancement, and gender stereotypes. As a result, we show that, first, the discourse takes up feminist and scholarly discourse on gender and discusses AI in a way that is informed by social constructivism and standpoint theories. Second, gender appears as a (to some extent intersectional) diversity category which is critical to AI, while important perspectives are nonetheless omitted. Third, a renegotiation of the ideal worker norm is taking place, and finally, we argue that the gendered frame of the powerful male developer responsible for AI's risks is a concept to be challenged.

    Oppressive Things

    In analyzing oppressive systems like racism, social theorists have articulated accounts of the dynamic interaction and mutual dependence between psychological components, such as individuals’ patterns of thought and action, and social components, such as formal institutions and informal interactions. We argue for the further inclusion of physical components, such as material artifacts and spatial environments. Drawing on socially situated and ecologically embedded approaches in the cognitive sciences, we argue that physical components of racism are not only shaped by, but also shape, psychological and social components of racism. Indeed, while our initial focus is on racism and racist things, we contend that our framework is also applicable to other oppressive systems, including sexism, classism, and ableism. This is because racist things are part of a broader class of oppressive things, which are material artifacts and spatial environments that are in congruence with an oppressive system.

    Framing TRUST in Artificial Intelligence (AI) Ethics Communication: Analysis of AI Ethics Guiding Principles through the Lens of Framing Theory

    With the fast proliferation of Artificial Intelligence (AI) technologies in our society, several corporations, governments, research institutions, and NGOs have produced and published AI ethics guiding documents. These include principles, guidelines, frameworks, assessment lists, training modules, blogs, and principle-to-practice strategies. The priorities, focus, and articulation of these innumerable documents vary to different extents. Though they all aim and claim to ensure AI usage for the common good, the actual outcomes of AI systems in various social applications have invigorated ethical dilemmas and scholarly debates. This study analyzes the AI ethics principles and guidelines published by three pioneers from three different sectors (Microsoft Corporation, the National Institute of Standards and Technology (NIST), and the AI HLEG set up by the European Commission) through the lens of framing theory from media and communication studies. The TRUST framings extracted from recent academic AI literature are used as a standard construct to study the ethics framings in the selected texts. The institutional framing of AI principles and guidelines shapes an institution’s AI ethics in a way that is soft (there is no legal binding) but strong (it incorporates the priorities of the institution’s position and societal role). The framing approach of AI principles directly relates to the AI actor’s ethics, which enjoins risk mitigation and problem resolution across the AI development and deployment cycle. It has therefore become important to examine institutional AI ethics communication. This paper brings forth a Comm-Tech perspective on the ethics of the evolving technologies known under the umbrella term Artificial Intelligence, and the human moralities governing them.

    Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning

    This paper uses frame analysis to examine recent high-profile values statements endorsing ethical design for artificial intelligence and machine learning (AI/ML). Guided by insights from values in design and the sociology of business ethics, we uncover the grounding assumptions and terms of debate that make some conversations about ethical design possible while forestalling alternative visions. Vision statements for ethical AI/ML co-opt the language of some critics, folding it into a limited, technologically deterministic, expert-driven view of what ethical AI/ML means and how it might work.