The Moral-IT Deck: A tool for ethics by design
This paper presents the design process and empirical evaluation of a new tool
for enabling ethics by design: The Moral-IT Cards. Better tools are needed to
support the role of technologists in addressing ethical issues during system
design. These physical cards support reflection by technologists on normative
aspects of technology development, specifically on emerging risks, appropriate
safeguards and challenges of implementing these in the system. We discuss how
the cards were developed and tested within 5 workshops with 20 participants
from both research and commercial settings. We consider the role of
technologists in ethics from different EU/UK policymaking initiatives and
disciplinary perspectives (i.e. Science and Technology Studies (STS), IT Law,
Human Computer Interaction (HCI), Computer/Engineering Ethics). We then examine
existing ethics-by-design tools and other card-based tools before arguing why
cards can be a useful medium for addressing complex ethical issues. We present
the development process for the Moral-IT cards, document key features of our
card design, background on the content, the impact assessment board process for
using them and how this was formulated. We discuss our study design and
methodology before examining key findings which are clustered around three
overarching themes. These are: the value of our cards as a tool, their impact
on the technology design process and how they structure ethical reflection
practices. We conclude with key lessons and concepts, such as how the cards level the
playing field for debate; enable ethical clustering, sorting and comparison;
provide appropriate anchors for discussion; and highlight the intertwined
nature of ethics.
Keywords: Governance and Regulation; Design Tools; Responsible Research and
Innovation; Ethics by Design; Games; Human Computer Interaction; Card-Based
Tools
The Artificial Conscience of Lethal Autonomous Weapons: Marketing Ruse or Reality?
There are two interwoven trends in cyber-counterterrorism. On the one hand, countries such as Israel and Russia announce the deployment of lethal autonomous weapons. Such weapons constitute the third revolution in warfare, after gunpowder and nuclear arms. On the other hand, researchers try to embed ethics into the design of these weapons (so-called artificial conscience or "ethics by design"). The contention of this paper is that artificial conscience is a mere marketing ruse aimed at making the deployment of lethal autonomous weapons and other autonomous robots acceptable in society. While there are strong reasons to object to this trend, some solutions to the pitfalls of ethics by design have been presented. However, they do not seem viable in a military context. In particular, the so-called customised-ethics approach is applicable only to commercial and civil machines. When deciding whether to kill 600 civilians in order to hit 14 al-Qaeda leaders, which set of values should be implemented? This is a compelling argument for banning lethal autonomous weapons altogether.
Ethical User Stories: An Industrial Study
Publisher Copyright: © 2022 Copyright for this paper by its authors.
In port terminals, a progressive change is underway in digitalizing traditional systems into SMART systems with the aid of AI. This study follows one such progression, the SMARTER project. SMARTER is a research and development sub-project of the Sea for Value program of DIMECC, Finland, which aims to create replicable digitalization models for future terminals involving AI-enabled tools. AI and Autonomous Systems (AS) are the direction that software systems are taking today, but given the ethical challenges involved in the use of AI systems and the increased emphasis on ethical practices in their use and design, our study provides an ethical angle: Ethical User Stories (EUS). We use an ethically aligned design tool, the ECCOLA method, to transfer ethical requirements into EUS for non-functional requirements for one aspect of the logistics system, passenger flow. Over a span of six months, 125 EUS were collected using the ECCOLA method through a series of workshops for the passenger flow use case, and the findings are reported in this paper. The project is in the maritime industry and concentrates on the digitalization of port terminals; this particular paper focuses on passenger flow. The results are positive towards the practice of Ethical User Stories. Peer reviewed.
Exploring tensions in Responsible AI in practice: An interview study on AI practices in and for Swedish public organizations
The increasing use of Artificial Intelligence (AI) systems has sparked discussions about developing ethically responsible technology. Consequently, various organizations have released high-level AI ethics frameworks to assist in AI design. However, we still know too little about how AI ethics principles are perceived and work in practice, especially in public organizations. This study examines how AI practitioners perceive ethical issues in their work concerning AI design, and how they interpret these issues and put them into practice. We conducted an empirical study consisting of semi-structured qualitative interviews with AI practitioners working in or for public organizations. Taking the lens provided by the In-Action Ethics framework and previous studies on ethical tensions, we analyzed practitioners' interpretations of AI ethics principles and their application in practice. We found tensions between practitioners' interpretations of ethical principles in their work, as well as ethos tensions. In this vein, we argue that understanding the different tensions that can occur in practice, and how they are tackled, is key to studying ethics in practice. Understanding how AI practitioners perceive and apply ethical principles is necessary for practical ethics to contribute toward an empirically grounded, Responsible AI.
Towards an understanding of global brain data governance: ethical positions that underpin global brain data governance discourse
Introduction: The study of the brain continues to generate substantial volumes of data, commonly referred to as "big brain data," which serves various purposes such as the treatment of brain-related diseases, the development of neurotechnological devices, and the training of algorithms. This big brain data, generated in different jurisdictions, is subject to distinct ethical and legal principles, giving rise to various ethical and legal concerns during collaborative efforts. Understanding these ethical and legal principles and concerns is crucial, as it catalyzes the development of a global governance framework, currently lacking in this field. While prior research has advocated for a contextual examination of brain data governance, such studies have been limited. Additionally, numerous challenges, issues, and concerns surround the development of a contextually informed brain data governance framework. Therefore, this study aims to bridge these gaps by exploring the ethical foundations that underlie contextual stakeholder discussions on brain data governance. Method: In this study we conducted a secondary analysis of interviews with 21 neuroscientists, drawn from the International Brain Initiative (IBI), the LATBrain Initiative and the Society of Neuroscientists of Africa (SONA), who are involved in various brain projects globally, employing ethical theories in the analysis. Ethical theories provide the philosophical frameworks and principles that inform the development and implementation of data governance policies and practices. Results: The results of the study revealed various contextual ethical positions that underscore the ethical perspectives of neuroscientists engaged in brain data research globally. Discussion: This research highlights the multitude of challenges and deliberations inherent in the pursuit of a globally informed framework for governing brain data.
Furthermore, it sheds light on several critical considerations that require thorough examination in advancing global brain data governance.
Ethically governing artificial intelligence in the field of scientific research and innovation
Artificial Intelligence (AI) has become a double-edged sword for scientific research. On the one hand, the incredible potential of AI and the different techniques and technologies for using it make it a product coveted by all scientific research centres and organisations and science funding agencies. On the other, the highly negative impacts that its irresponsible and self-interested use is causing, or could cause, make it a controversial tool, attracting strong criticism from those involved in the different sectors of research. This study aims to delve into the current and potential uses of AI in scientific research and innovation in order to provide guidelines for developing and implementing a governance system that promotes ethical and responsible research and innovation in the field of AI.
Digital Ethics Canvas: A Guide For Ethical Risk Assessment And Mitigation In The Digital Domain
Ethical concerns in the digital domain are growing with the extremely fast evolution of technology and the increasing scale at which software is deployed, potentially affecting our societies globally. It is crucial that engineers evaluate more systematically the impacts their solutions can have on individuals, groups, societies and the environment. Ethical risk analysis is one of the approaches that can help reduce "ethical debt", the unpaid cost generated by ethically problematic technical solutions. However, previous research has identified that novices struggle with the identification of risks and their mitigation. Our contribution is a visual tool, the Digital Ethics Canvas, specifically designed to help engineers scan digital solutions for a range of ethical risks with six "lenses": beneficence, non-maleficence, privacy, fairness, sustainability and empowerment. In this paper, we present the literature background behind the design of this tool. We also report on preliminary evaluations of the canvas with novices (N=26) and experts (N=16), showing that the tool is perceived as practical and useful, with positive utility judgements from participants.
- …