    Whose Side are Ethics Codes On? Power, Responsibility and the Social Good

    The moral authority of ethics codes stems from an assumption that they serve a unified society, yet this ignores the political aspects of any shared resource. The sociologist Howard S. Becker challenged researchers to clarify their power and responsibility in his classic essay "Whose Side Are We On?". Building on Becker's hierarchy of credibility, we report on a critical discourse analysis of data ethics codes and emerging conceptualizations of beneficence, or the "social good", of data technology. The analysis revealed that ethics codes from corporations and professional associations conflated consumers with society and were largely silent on agency. Interviews with community organizers about social change in the digital era supplement the analysis, surfacing the limits of technical solutions to the concerns of marginalized communities. Given evidence of the gulf between these documents and lived experiences, we argue that ethics codes that elevate consumers may simultaneously subordinate the needs of vulnerable populations. Understanding contested digital resources is central to the emerging field of public interest technology. We introduce the concept of digital differential vulnerability to explain disproportionate exposures to harm within data technology, and we suggest recommendations for future ethics codes.
    Comment: Conference on Fairness, Accountability, and Transparency (FAT* '20), January 27-30, 2020, Barcelona, Spain.

    Assessing the use of critical literacies in mis/disinformation literacy instruction

    In keeping with Freire's Pedagogy of the Oppressed and the theoretical perspicacity of Critical Race Theory, Lenoir and Anderson (2023) posit that “technical solutions to political problems are bound to fail. Historical, structural, and political inequality—and especially race, ethnicity, and social difference—needs to be at the forefront of our understanding of politics and, indeed, disinformation”. Approaches to mis/disinformation in libraries and information studies have largely been grounded in two forms of literacy education: media literacy and digital literacy. Both offer a limited, generic framing for engaging with digital information and myriad technologies, and both fall short of providing an acute awareness of the systemic relationship that media and digital information platforms have with interlocking systems of oppression. This paper identifies current applications of critical approaches in mis/disinformation literacy instruction in order to promote their adoption as a pedagogical practice in libraries and information studies.

    Future imperfect: How AI developers imagine the future

    This study asks how AI developers consider the potential consequences of their work. It proposes an imagined futures perspective to understand how AI developers imagine the futures associated with AI. Examining qualitatively the cases of several AI developers and their work, it finds that they consider the future consequences of the AI they participate in developing as either tangential (i.e., loosely connected to what they do) or integral (i.e., closely associated with what they do) to their work. These imaginations of the future are in tension, prompting some AI developers to work at connecting them as they adjust how they view the future and their work. This study reveals how AI development relies upon distinctive imaginations of the future, illuminates how practitioners engage speculatively with the future, and explains why developers' answers to what their work may do in the future matter for IT development.

    Reframing data ethics in research methods education: a pathway to critical data literacy

    This paper presents an ethical framework designed to support the development of critical data literacy in research methods courses and data training programmes in higher education. The framework draws upon our reviews of the literature, course syllabi and existing frameworks on data ethics. For this research we reviewed 250 research methods syllabi from across the disciplines, as well as 80 syllabi from data science programmes, to understand whether and how data ethics was taught. We also reviewed 12 data ethics frameworks drawn from different sectors. Finally, we reviewed an extensive and diverse body of literature on data practices, research ethics, data ethics and critical data literacy in order to develop a transversal model that can be adopted across higher education. To promote and support ethical approaches to the collection and use of data, ethics training must go beyond securing informed consent to enable a critical understanding of the techno-centric environment and the intersecting hierarchies of power embedded in technology and data. By fostering ethics as a method, educators can enable research that protects vulnerable groups and empowers communities.

    Public Values and Technological Change: Mapping how Municipalities Grapple with Data Ethics

    Local governments in the Netherlands are increasingly undertaking data projects for public management. While the emergence of data practices and the application of algorithms to decision making in public management have attracted growing critical commentary, little empirical research has actually been conducted. Over the past few years, we have developed a research method that enables researchers to enter organisations not merely as researchers but also as experts on data ethics. Through participatory and ethnographic observation, the Data Ethics Decision Aid (DEDA) gives us particular insight into ethics in local government. Where most research has focused on the theoretical aspects of data ethics, our approach offers a new perspective on data practices by looking at how data ethics is done in public management. Our research provides insight into the state of data awareness within organisations that critical data studies mostly portrays as homogeneous and monolithic entities. The distinct method developed at Utrecht Data School allows researchers to immerse themselves within organisations and closely observe data practices, discourses on ethics, and how organisations address the challenges that arise as a consequence of datafication. For this chapter, we analyse our field work with the DEDA through the lens of Mark Moore's strategic triangle of public value, showing how the three angles of the triangle are shaped in practice. From this analysis we draw three conclusions. First, ethical awareness of data projects is often low because data literacy among civil servants is limited. Second, when civil servants do not recognise the choices they have to make as ethical or political choices, they can make decisions that exceed their mandate. Third, there is a dangerous tendency for ethical deliberation to be seen as an obnoxious bureaucratic box-ticking exercise rather than as a vital part of the design and build-up of a data project.

    Algorithmic pollution: making the invisible visible

    In this paper, we focus on the growing evidence of unintended harmful societal effects of automated algorithmic decision-making (AADM) in transformative services (e.g., social welfare, healthcare, education, policing and criminal justice) for individuals, communities and society at large. Drawing on the long-established research on social pollution, in particular its contemporary ‘pollution-as-harm’ notion, we put forward a claim, and provide evidence, that these harmful effects constitute a new type of digital social pollution, which we name ‘algorithmic pollution’. Words do matter, and by using the term ‘pollution’ not as a metaphor or an analogy but as a transformative redefinition of the digital harm performed by AADM, we seek to make that harm visible and recognized. By adopting a critical performative perspective, we explain how the execution of AADM produces harm and thus performs algorithmic pollution. Recognition of the potential for unintended harmful effects of algorithmic pollution, and their examination as such, leads us to articulate the need for transformative actions to prevent, detect, redress, mitigate, and educate about algorithmic harm. These actions, in turn, open up new research challenges for the information systems community.
