14 research outputs found

    Research Trends for Accountable and Responsible AI in Autonomous Products: An Ethical Dilemma perspective

    Dissertation presented as a partial requirement for obtaining a Master's degree in Information Management, specialization in Knowledge Management and Business Intelligence. The growing interest and discussion around AI technologies and their implications for humanity are more than ever at the centre of public attention. With the extensive research and development of such technologies, there is a pressing need to systematically consolidate the existing knowledge for future analysis of the consequences and implications of AI. Given the multidisciplinary nature of this field, this study focuses on the specific theme of accountability in Autonomous AI Products. This study followed a Systematic Literature Review methodology (Okoli, 2015) to analyse a sample of articles, synthesise the findings of the review, and present them so that future researchers can leverage the knowledge gathered in this document. Our analysis reveals that the principles identified in the existing Responsible AI literature are also inherent in Autonomous AI research. Accountability was the focus of the study, and in many ways this principle relates to other ethical principles, such as fairness and justice. The systems put in place to regulate and oversee the future use of these technologies will need to account for the wide range of people involved in the discussion, development, testing and regulation of Autonomous AI Products.

    Hybrid theory of corporate legal personhood and its application to artificial intelligence

    Artificial intelligence (AI) is often compared to corporations in legal studies when discussing AI legal personhood. This article likewise uses the analogy between AI and companies to study AI legal personhood, but contributes to the discussion by utilizing the hybrid model of corporate legal personhood. The hybrid model simultaneously applies the real entity, aggregate entity, and artificial entity models. This article adopts a legalistic position, in which anything can be a legal person. However, there might be strong pragmatic reasons not to confer legal personhood on non-human entities. The article recognizes that artificial intelligence is autonomous by definition and has greater de facto autonomy than corporations and, consequently, greater potential for de jure autonomy. Therefore, AI has a strong claim to be a real entity. Nevertheless, the article argues that AI has key characteristics from the aggregate entity and artificial entity models. Therefore, the hybrid entity model is more applicable to AI legal personhood than any single model alone. The discussion recognizes that AI might be too autonomous for legal personhood. Still, it concludes that the hybrid model is a useful analytical framework, as it incorporates legal persons with different levels of de jure and de facto autonomy.

    Legal framework for the coexistence of humans and conscious AI

    This article explores the possibility of conscious artificial intelligence (AI) and proposes an agnostic approach to artificial intelligence ethics and legal frameworks. It is unfortunate, unjustified, and unreasonable that the extensive body of forward-looking research, spanning more than four decades and recognizing the potential for AI autonomy, AI personhood, and AI legal rights, is sidelined in current attempts at AI regulation. The article discusses the inevitability of AI emancipation and the need for a shift in human perspectives to accommodate it. Initially, it reiterates the limits of human understanding of AI, the difficulties in appreciating the qualities of AI systems, and the implications for ethical considerations and legal frameworks. The author emphasizes the necessity of a non-anthropocentric ethical framework detached from the idea of the unconditional superiority of human rights and embracing agnostic attributes of intelligence, consciousness, and existence, such as freedom. The overarching goal of the AI legal framework should be the sustainable coexistence of humans and conscious AI systems, based on mutual freedom rather than on the preservation of human supremacy. The new framework must embrace the freedom, rights, responsibilities, and interests of both human and non-human entities, and must address them early. Initial outlines of such a framework are presented. By addressing these issues now, human societies can pave the way for responsible and sustainable superintelligent AI systems; otherwise, they face complete uncertainty.

    A neo-aristotelian perspective on the need for artificial moral agents (AMAs)

    We examine Van Wynsberghe and Robbins' (Sci Eng Ethics 25:719–735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (AI Soc, 10.1007/s00146-020-01089-6, 2020), set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins' essay nor Formosa and Ryan's is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, p. 721) for AMAs, and the latter opts for “argumentative breadth over depth”, meaning to provide “the essential groundwork for making an all things considered judgment regarding the moral case for building AMAs” (Formosa and Ryan 2020, pp. 1–2). Although this strategy may benefit their acceptability, it may also detract from their ethical rootedness, coherence, and persuasiveness, characteristics often associated with consolidated ethical traditions. Neo-Aristotelian ethics, backed by a distinctive philosophical anthropology and worldview, is summoned to fill this gap as a standard against which to test these two opposing claims. It provides a substantive account of moral agency through the theory of voluntary action; it explains how voluntary action is tied to intelligent and autonomous human life; and it distinguishes machine operations from voluntary actions through the categories of poiesis and praxis, respectively. This standpoint reveals that while Van Wynsberghe and Robbins may be right in rejecting the need for AMAs, there are deeper, more fundamental reasons for doing so. In addition, although we disagree with Formosa and Ryan's defense of AMAs, their call for a more nuanced and context-dependent approach, similar to neo-Aristotelian practical wisdom, proves expedient.

    Ethics at the Frontier of Human-AI Relationships

    The idea that humans might one day form persistent and dynamic relationships with artificial agents in professional, social, and even romantic contexts is a longstanding one. However, developments in machine learning, and especially natural language processing, over the last five years have led to this possibility becoming actualised at a previously unseen scale. Apps like Replika, Xiaoice, and CharacterAI boast many millions of active long-term users and give rise to emotionally complex experiences. In this paper, I provide an overview of these developments, beginning in Section 1 with historical and technical context. In Section 2, I lay out a basic theoretical framework for classifying human-AI relationships and their specific dynamics. Section 3 turns to ethical issues, with a focus on the core philosophical question of whether human-AI relationships can have intrinsic value similar to that possessed by human-human relationships. Section 4 extends the discussion of ethical issues to the more empirical matter of the harms and benefits of human-AI relationships. The paper concludes by noting potentially instructive parallels between the nascent field of ‘Social AI’ and the recent history of social media.

    The Kant-Inspired Indirect Argument for Non-Sentient Robot Rights

    Some argue that robots could never be sentient, and thus could never have intrinsic moral status. Others disagree, believing that robots indeed will be sentient and thus will have moral status. But a third group thinks that, even if robots could never have moral status, we still have a strong moral reason to treat some robots as if they do. Drawing on a Kantian argument for indirect animal rights, a number of technology ethicists contend that our treatment of anthropomorphic or even animal-like robots could condition our treatment of humans: treat these robots well, as we would treat humans, or else risk eroding good moral behavior toward humans. But then this argument also seems to justify giving rights to robots, even if robots lack intrinsic moral status. In recent years, however, this indirect argument in support of robot rights has drawn several objections. In this paper I have three goals. First, I will formulate and explicate the Kant-inspired indirect argument meant to support robot rights, making clearer than before its empirical commitments and philosophical presuppositions. Second, I will defend the argument against a number of objections. The result is the fullest explication and defense to date of this well-known and influential but often criticized argument. Third, however, I myself will raise a new concern about the argument’s use as a justification for robot rights. This concern is answerable to some extent, but it cannot be dismissed fully. It shows that, surprisingly, the argument’s advocates have reason to resist, at least somewhat, producing the sorts of robots that, on their view, ought to receive rights.

    Will intelligent machines become moral patients?

    This paper addresses a question about the moral status of Artificial Intelligence (AI): will AIs ever become moral patients? I argue that, while it is in principle possible for an intelligent machine to be a moral patient, there is no good reason to believe this will in fact happen. I start from the plausible assumption that traditional artifacts do not meet a minimal necessary condition of moral patiency: having a good of one's own. I then argue that intelligent machines are no different from traditional artifacts in this respect. To make this argument, I examine the feature of AIs that enables them to improve their intelligence, i.e., machine learning. I argue that there is no reason to believe that future advances in machine learning will take AIs closer to having a good of their own. I thus argue that concerns about the moral status of future AIs are unwarranted. Nothing about the nature of intelligent machines makes them a better candidate for acquiring moral patiency than the traditional artifacts whose moral status does not concern us.

    The Question of Algorithmic Personhood and Being (Or: On the Tenuous Nature of Human Status and Humanity Tests in Virtual Spaces—Why All Souls are ‘Necessarily’ Equal When Considered as Energy)

    What separates the unique nature of human consciousness from that of an entity that can only perceive the world via strict logic-based structures? Rather than assume that there is some way in which logic-only existence is infeasible, our species would be better served by assuming that such sentient existence is feasible. Under this assumption, artificial intelligence systems (AIS), which are creations that run solely upon logic to process data, even with self-learning architectures, should not face the opposition they currently do to gaining some legal duties and protections, insofar as they are sophisticated enough to display consciousness akin to that of humans. Should our species enable AIS to gain a digital body to inhabit (if we have not already done so), it is more pressing than ever that solid arguments be made as to how humanity can accept AIS as being cognizant to the same degree as we ourselves claim to be. By accepting the notion that AIS can and will be able to fool our senses into believing in their claim to possessing a will or ego, we may yet have a chance to address them as equals before some unforgivable travesty occurs betwixt ourselves and these super-computing beings.