    AI for the Common Good?! Pitfalls, challenges, and Ethics Pen-Testing

    Recently, many AI researchers and practitioners have embarked on research visions that involve doing AI for "Good". This is part of a general drive towards infusing AI research and practice with ethical thinking. One frequent theme in current ethical guidelines is the requirement that AI be good for all, or contribute to the Common Good. But what is the Common Good, and is it enough to want to be good? Via four lead questions, I will illustrate challenges and pitfalls when determining, from an AI point of view, what the Common Good is and how it can be enhanced by AI. The questions are: What is the problem / What is a problem? Who defines the problem? What is the role of knowledge? And what are important side effects and dynamics? The illustration will use an example from the domain of "AI for Social Good", more specifically "Data Science for Social Good". Even if the importance of these questions is known at an abstract level, they are not asked sufficiently in practice, as shown by an exploratory study of 99 contributions to recent conferences in the field. To turn these challenges and pitfalls into a positive recommendation, I conclude by drawing on another characteristic of computer-science thinking and practice to make these impediments visible and attenuate them: "attacks" as a method for improving design. This results in the proposal of ethics pen-testing as a method for helping AI designs better contribute to the Common Good.
    Comment: to appear in Paladyn, Journal of Behavioral Robotics; accepted on 27-10-201

    Principles alone cannot guarantee ethical AI

    AI Ethics is now a global topic of discussion in academic and policy circles. At least 84 public-private initiatives have produced statements describing high-level principles, values, and other tenets to guide the ethical development, deployment, and governance of AI. According to recent meta-analyses, AI Ethics has seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite the initial credibility granted to a principled approach to AI Ethics by the connection to principles in medical ethics, there are reasons to be concerned about its future impact on AI development and governance. Significant differences exist between medicine and AI development that suggest a principled approach in the latter may not enjoy success comparable to the former. Compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. These differences suggest we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement.
    Comment: A previous, pre-print version of this paper was entitled 'AI Ethics - Too Principled to Fail?'

    Artificial Intelligence and Patient-Centered Decision-Making

    Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, calculations, and procedures cannot be meaningfully understood by human practitioners. When AI systems reach this level of complexity, we can also speak of black-box medicine. In this paper, we argue that black-box medicine conflicts with core ideals of patient-centered medicine. In particular, we claim that black-box medicine is not conducive to informed decision-making based on shared information, shared deliberation, and shared mind between practitioner and patient.

    "Revolution? What Revolution?" Successes and limits of computing technologies in philosophy and religion

    Computing technologies, like other technological innovations in the modern West, are inevitably introduced with the rhetoric of "revolution". Especially during the 1980s (the PC revolution) and 1990s (the Internet and Web revolutions), enthusiasts insistently celebrated radical changes: changes ostensibly inevitable and certainly as radical as those brought about by the invention of the printing press, if not the discovery of fire. These enthusiasms now seem very "1990s", in part because the revolution stumbled with the dot-com failures and the devastating impacts of 9/11. Moreover, as I will sketch out below, the patterns of diffusion and impact in philosophy and religion show both tremendous successes, as certain revolutionary promises are indeed kept, and (sometimes spectacular) failures. Perhaps we use revolutionary rhetoric less frequently because the revolution has indeed succeeded: computing technologies, and many of the powers and potentials they bring us as scholars and religionists, have become so ubiquitous and normal that they no longer seem "revolutionary" at all. At the same time, many of the early hopes and promises instantiated in such specific projects as Artificial Intelligence and anticipations of virtual religious communities have been dashed against the apparently intractable limits of even these most remarkable technologies. While these failures are usually forgotten, they leave in their wake a clearer sense of what these new technologies can, and cannot, do.

    Ethical Perspectives in AI: A Two-folded Exploratory Study From Literature and Active Development Projects

    Background: Interest in Artificial Intelligence (AI) based systems has been gaining traction at a fast pace, both for software development teams and for society as a whole. This increased interest has led to the employment of AI techniques such as Machine Learning and Deep Learning for diverse purposes, such as medicine and surveillance systems, and such uses have raised awareness of the ethical implications of using AI systems. Aims: With this work we aim to obtain an overview of the current state of the literature and of software projects on the tools, methods, and techniques used in practical AI ethics. Method: We conducted an exploratory study of both a scientific database and a software project repository in order to understand their current state on techniques, methods, and tools used for implementing AI ethics. Results: A total of 182 abstracts were retrieved from Scopus, and five classes were devised from the analysis: 1) AI in Agile and Business for Requirements Engineering (RE) (22.8%), 2) RE in a Theoretical Context (14.8%), 3) Quality Requirements (22.6%), 4) Proceedings and Conferences (22%), 5) AI in Requirements Engineering (17.8%). Furthermore, out of 589 projects from GitHub, we found 21 tools for implementing AI ethics. Publicly available tools found to assist the implementation of AI ethics include InterpretML, Deon, and TransparentAI. Conclusions: The combined findings from both sources foster an enhanced debate and stimulate progress towards AI ethics in practice.
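    Of the tools this study highlights, InterpretML is a publicly available Python library for interpretable machine learning. The snippet below is a minimal sketch of its documented usage, not an artifact of the paper itself; it assumes the interpret and scikit-learn packages are installed, and the dataset choice is purely illustrative.

        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import train_test_split
        from interpret.glassbox import ExplainableBoostingClassifier
        from interpret import show

        # Illustrative data; any tabular classification task would do.
        X, y = load_breast_cancer(return_X_y=True, as_frame=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        # Glass-box model: each feature's contribution to a prediction can be
        # read off directly, unlike a typical black-box ensemble.
        ebm = ExplainableBoostingClassifier()
        ebm.fit(X_train, y_train)

        show(ebm.explain_global())                       # overall feature importances
        show(ebm.explain_local(X_test[:5], y_test[:5]))  # per-prediction explanations

    Deon, by contrast, is a command-line tool that generates an ethics checklist for a data science project (e.g. deon -o ETHICS.md), illustrating the checklist-style end of the tooling spectrum.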

    Time for AI (Ethics) maturity model is now

    There appears to be common agreement that ethical concerns are of high importance when it comes to systems equipped with some form of Artificial Intelligence (AI). Demands for ethical AI are declared from all directions. In response, in recent years, public bodies, governments, and universities have rushed in to provide sets of principles to be considered when AI-based systems are designed and used. We have learned, however, that high-level principles do not turn easily into actionable advice for practitioners. Hence, companies are also publishing their own ethical guidelines to guide their AI development. This paper argues that AI software is still software and needs to be approached from a software development perspective. The software engineering paradigm has introduced maturity-model thinking, which provides companies a roadmap for improving their performance along selected viewpoints known as key capabilities. We voice a call to action for the development of a maturity model for AI software, and wish to discuss whether such a model should focus on AI ethics or, more broadly, on the quality of AI systems.