18 research outputs found

    The ethics of robot servitude


    Designing People to Serve

    I argue that, contrary to intuition, it would be both possible and permissible to design people, whether artificial or organic, who by their nature desire to do tasks we find unpleasant.

    How to Include Artificial Bodies as Citizens

    This essay ponders the thorny issue of including artificial beings under the category of "citizen." The increasing humanization of the artificial being, it suggests, prevents us from seeing and treating the machine as a mere machine. But if the humanoid robot performs all the functions of a human being, and acquires cultural traits such as emotional intelligence, rational thinking, or altruism, then on what grounds do we deny it the same status as a human person? Conversely, as more and more humans are cyborged, through transplants, implants, and prostheses, resulting in an erasure of their "core" humanity, then what is the difference between such a cyborged human with human rights and an artificial being?

    An Ethical Inquiry to Personhood as the Standard for Sexbot Ownership: A Response to S. Petersen

    In the field of robot ethics, debates about sexbots, their personhood, and their moral status continue. To stake out our stance in this debate, we ask: Is it unethical for sexbots to be owned? This paper responds to the claims of Steve Petersen’s (2016) paper “Is it good for them too? Ethical concerns for the sexbots”, in which he argues that sexbots are not wronged by performing the functions they are designed for. We respond to this claim by drawing on John Danaher’s theory of ethical behaviorism (2020). If ethical behaviorism is correct in claiming that behavior is a sufficient ground for the ascription of moral status, we see sexbot ownership as unethical. We argue for our claim and show that the moral considerability of the sexbot could be established under the standards given in our framework for ascribing moral status.

    Revealing the ‘face’ of the robot: introducing the ethics of Levinas to the field of robo-ethics

    This paper explores the possibility of a new philosophical turn in robot ethics, considering whether the concepts of Emmanuel Levinas, particularly his conception of the ‘face of the other’, can be used to understand how non-expert users interact with robots. The term ‘robot’ comes from fiction, and for non-experts and experts alike, interaction with robots may be coloured by this history. This paper explores an ethics of robots (and of the use of the term robot) based on the user seeing the robot as infinitely complex.

    Do we really care about artificial intelligence? A review on social transformations and ethical challenges of AI for the 21st century

    Although Artificial Intelligence (AI) is based on research from the 20th century, only recently have computation and new algorithms allowed AI to gain momentum and practical applications in society. Examples of such uses include self-driving cars and autonomous robots that are changing society and how we interact. Despite these advances, discussion of the social transformations and ethical implications of this new reality remains scarce. This chapter reviews the current state of discussions on the ethics and social transformations of AI and presents a framework for future developments. The main contributions of the chapter allow researchers to understand the major gaps in research that may be explored further in this topic, and allow practitioners to gain a better picture of how AI may change society in the near future and how society should prepare for those changes.

    Conscious machines: memory, melody and muscular imagination

    A great deal of effort has been, and continues to be, devoted to developing consciousness artificially (a small selection of the many authors writing in this area includes: Cotterill (J Conscious Stud 2:290–311, 1995, 1998), Haikonen (2003), Aleksander and Dunmall (J Conscious Stud 10:7–18, 2003), Sloman (2004, 2005), Aleksander (2005), Holland and Knight (2006), and Chella and Manzotti (2007)), and yet a similar amount of effort has gone into demonstrating the infeasibility of the whole enterprise (most notably: Dreyfus (1972/1979, 1992, 1998), Searle (1980), Harnad (J Conscious Stud 10:67–75, 2003), and Sternberg (2007), but there are a great many others). My concern in this paper is to steer some navigable channel between the two positions, laying out the necessary preconditions for consciousness in an artificial system, and concentrating on what needs to hold for the system to perform as a human being or other phenomenally conscious agent would in an intersubjectively demanding social and moral environment. By adopting a thick notion of embodiment, one bound up with the concepts of the lived body and autopoiesis (Maturana and Varela 1980; Varela et al. 2003; Ziemke 2003, 2007a, J Conscious Stud 14(7):167–179, 2007b), I will argue that machine phenomenology is only possible within an embodied distributed system that possesses a richly affective musculature and a nervous system such that it can, through action and repetition, develop its tactile-kinaesthetic memory, individual kinaesthetic melodies pertaining to habitual practices, and an anticipatory, enactive kinaesthetic imagination. Without these capacities the system would remain unconscious, unaware of itself as embodied within a world.
    Finally, following Damasio’s (1991, 1994, 1999, 2003) claims for the necessity of pre-reflective conscious, emotional, bodily responses for the development of an organism’s core and extended consciousness, I will argue that without these capacities any agent would be incapable of developing the sorts of somatic markers or saliency tags that enable affective reactions, and which are indispensable for effective decision-making and subsequent survival. My position, as presented here, remains agnostic about whether or not the creation of artificial consciousness is an attainable goal.

    Human Supremacy as Posthuman Risk

    Human supremacy is the widely held view that human interests ought to be privileged over other interests as a matter of public policy. Posthumanism is a historical and cultural situation characterized by a critical reevaluation of anthropocentrist theory and practice. This paper draws on Rosi Braidotti’s critical posthumanism and the critique of ideal theory in Charles Mills and Serene Khader to address the use of human supremacist rhetoric in AI ethics and policy discussions, particularly in the work of Joanna Bryson. This analysis leads to identifying a set of risks posed by human supremacist policy in a posthuman context, specifically involving the classification of agents by type.

    Mechanisms of Techno-Moral Change: A Taxonomy and Overview

    The idea that technologies can change moral beliefs and practices is an old one. But how, exactly, does this happen? This paper builds on an emerging field of inquiry by developing a synoptic taxonomy of the mechanisms of techno-moral change. It argues that technology affects moral beliefs and practices in three main domains: decisional (how we make morally loaded decisions), relational (how we relate to others) and perceptual (how we perceive situations). It argues that across these three domains there are six primary mechanisms of techno-moral change: (i) adding options; (ii) changing decision-making costs; (iii) enabling new relationships; (iv) changing the burdens and expectations within relationships; (v) changing the balance of power in relationships; and (vi) changing perception (information, mental models and metaphors). The paper also discusses the layered, interactive and second-order effects of these mechanisms.