
    Friends, Robots, Citizens?

    This paper asks whether and how an artefact, such as a robot, could be considered a citizen. In doing so, it approaches questions of political freedom and artefacts. Three key notions emerge in the discussion: discursivity, embodiment and recognition. Overall, discussion of robot citizenship raises technical, political and philosophical problems. Whereas machine intelligence is hotly debated, machine citizenship is less so. However, much research and activity is underway that seeks to create robot companions capable of meaningful and intimate relationships with humans. The EU flagship "Robot Companions for Citizens" project aims for "...an ecology of sentient machines that will help and assist humans in the broadest possible sense to support and sustain our welfare." This is a broad and ambitious aim, with a goal of making artefacts that can have genuine relationships with humans. This being so, in order to avoid merely creating highly interactive automata, the status of the robot must be carefully considered. Without significant public freedoms, for instance, the notion of a robot 'friend' would be a dubious one -- as dubious as the notion of a 'willing slave'. In a broad sense, these issues relate to the politics of robot kinship and sociality, perhaps specifically to civic epistemology. With a technological ideal of genuine human-artefactual kinship in the future, these political questions cannot be ignored. One approach to this problematic involves accounting for the robot citizen

    Human Supremacy as Posthuman Risk

    Human supremacy is the widely held view that human interests ought to be privileged over other interests as a matter of public policy. Posthumanism is an historical and cultural situation characterized by a critical reevaluation of anthropocentrist theory and practice. This paper draws on Rosi Braidotti’s critical posthumanism and the critique of ideal theory in Charles Mills and Serene Khader to address the use of human supremacist rhetoric in AI ethics and policy discussions, particularly in the work of Joanna Bryson. This analysis leads to identifying a set of risks posed by human supremacist policy in a posthuman context, specifically involving the classification of agents by type.

    Human supremacy as posthuman risk

    Human supremacy is the widely held view that human interests ought to be privileged over other interests as a matter of ethics and public policy. Posthumanism is the historical situation characterized by a critical reevaluation of anthropocentrist theory and practice. This paper draws on animal studies, critical posthumanism, and the critique of ideal theory in Charles Mills and Serene Khader to address the appeal to human supremacist rhetoric in AI ethics and policy discussions, particularly in the work of Joanna Bryson. This analysis identifies a specific risk posed by human supremacist policy in a posthuman context, namely the classification of agents by type.

    Dolls Who Speak: Sex Robots, Cyborgs and the Image of Woman

    This thesis examines the emerging phenomenon of sex robots from a feminist materialist perspective. I explore the current scholarly and popular debates on sex robots, and suggest a reading of sex robots in their machinic, literary and cinematic expressions to move beyond the moral-ethical impasse that seems to dominate sex robot discussions. Employing Donna Haraway’s “Cyborg Myth” on a methodological and theoretical level, I argue for an interdisciplinary approach to studying sex robots, one which proceeds carefully so as to avoid contributing to sex panic, and which thinks critically about what it might mean to assess sex robots from a feminist point of view that resorts neither to gender-essentialism nor to the protection of heterosexuality. First, I argue for thinking about sex robots as an “always already new” medium and proceed by situating sex robots historically. Second, I identify tropes in the configuration of sex robots, juxtapose them with the image of woman as painted by Walter Benjamin in the Arcades Project, and suggest that these sex dolls/bots embody, in an ideal fashion, the characteristics that have been assigned to and made synonymous with heterosexual femininity for centuries: artificiality, availability, variability, animatability, passivity, and submission. Third, I analyze a community of sex doll users, because these users are often left out of the scholarly literature on sex dolls and bots. Then, through a reading of HBO’s TV series Westworld (2016), I propose a framework for thinking about sex robots that is rooted in the understanding of sexuality as a program, which I develop from Sara Ahmed’s notion of “compulsory heterosexuality as intentional functionality.” Finally, I argue that sex robots, in their representation as an ideal woman companion, point towards, and are a product of, heteronormativity. Eliding this leads to an incomplete analysis of sex robots, and including it might lead to pleasurable deviant surprises.

    Sexbots: sex slaves, vulnerable others or perfect partners?

    This article describes how sexbots, sentient, self-aware, feeling artificial moral agents soon to be created as customised potential sexual/intimate partners, provoke crucial questions for technoethics. Coeckelbergh's model of human/robotic relations as co-evolving to their mutual benefit through mutual vulnerability is applied to sexbots. As sexbots have a sustainable claim to moral standing, the benefits and vulnerabilities inherent in human/sexbot relations must be identified and addressed for both parties. Humans' and sexbots' vulnerabilities are explored, drawing on the philosophy and social science of dehumanisation and inclusion/exclusion. This article argues that humans, as creators, owe a duty of care to the sentient beings they create. Responsible innovation practices, in which stakeholders debate the ethicolegal conundrums pertaining to human duties to sexbots and to sexbots' putative interests, rights and responsibilities, are essential. These validate the legal recognition of sexbots, the protection of their interests through regulatory oversight, and the ethical limitations on customisation which must be put in place.

    Animating the Ethical Demand: Exploring user dispositions in industry innovation cases through animation-based sketching

    This paper addresses the challenge of attaining ethical user stances during the design process of products and services, and proposes animation-based sketching as a design method which supports elaborating and examining different ethical stances towards the user. The discussion is qualified by an empirical study of Responsible Research and Innovation (RRI) in a Triple Helix constellation. Using a three-week long innovation workshop, UCrAc, involving 16 Danish companies and organisations and 142 students as empirical data, we discuss how animation-based sketching can explore not yet existing user dispositions, as well as create an incentive for ethical conduct in development and innovation processes. The ethical fulcrum revolves around Løgstrup's Ethical Demand and his notion of spontaneous life manifestations. From this, three ethical stances are developed: apathy, sympathy and empathy. Exploring both apathetic and sympathetic views makes the ethical reflections more nuanced, as a result of actually seeing the user experience simulated through different user dispositions. Exploring the three ethical stances by visualising real use cases, with the technologies simulated as already being implemented, makes the life manifestations of the users in context visible. We present and discuss how animation-based sketching can support the elaboration and examination of different ethical stances towards the user in the product and service development process. Finally, we present a framework for creating narrative representations of emerging technology use cases, which invite reflection upon the ethics of the user experience.

    Care robots in residential homes for elderly people: an ethical examination of deception, care, and consent

    We are facing a dire social problem: although life expectancy is increasing, time spent living independently is not, meaning that the eldercare sector is experiencing a worrying shortfall of nursing staff – a problem which is only getting worse. Robots designed for caring purposes – carebots – present a possible solution: they can perform some of the work which has hitherto been undertaken by human nurses. But their introduction is not without problems. This thesis examines some pertinent questions relating to the introduction of carebots into residential homes for elderly people. Chapter 1 examines what robots are, and provides a way in which we can differentiate between robots of different types, helping us to understand what ethical issues are at stake for different types of robot. Chapter 2 focuses on what deception consists of, and discusses why deception and lying are often seen as impermissible. Chapter 3 discusses different types of robo-deception, and analyses both the likelihood and the normative significance of their occurring. Chapter 4 is a study of a particular form of robo-deception, which I call fake compassion: when robots appear to care for patients when in fact they do not. I examine the extent to which this is morally problematic. Chapter 5 examines dignity: what it is, and why it is important. Chapter 6 focuses on consent: its importance in different spheres, and how consent-seeking can promote autonomy, bodily integrity, dignity, and trust. Chapter 7 builds on the previous two chapters, and demonstrates that it is ethically essential that carebots (and human nurses) obtain patients' consent prior to providing care, because failing to do so can reduce their dignity, and these reductions can be cumulative and devastating. This thesis is not merely an interesting thought experiment or a work of science fiction; rather, it is a real-world necessity that carebots take appropriate actions which promote the dignity and best interests of patients: our grandparents, parents, and in time, us and our descendants.
