    Sexual Robots: The Social-Relational Approach and the Concept of Subjective Reference

    In this paper we propose the notion of “subjective reference” as a conceptual tool that explains how and why human-robot sexual interactions could reframe users’ approach to human-human sexual interactions. First, we introduce the current debate about Sexual Robotics, situated within the wider discussion of Social Robots, and argue for the urgency of a regulative framework. We underline the importance of a social-relational approach, concerned chiefly with the impact of Social Robots on human social structures. We then point out the absence of a precise framework conceptualizing why Social Robots, and Sexual Robots in particular, may modify users’ sociality and relationality. Within a psychological framework, we propose to consider Sexual Robots as “subjective references”, namely objects symbolically referring to human subjects: we claim that, in the user’s experience, every action performed upon a Sexual Robot is symbolically directed toward a human subject, including degrading and violent practices. This shifting mechanism may transfer the user’s relational setting from human-robot interactions to human-human interactions.

    Drones, Morality, and Vulnerability: Two Arguments Against Automated Killing

    This chapter articulates and discusses several arguments against the lethal use of unmanned aerial vehicles, often called drones. A distinction is made between targeted killing, killing at a distance, and automated killing, which is used to map the arguments against lethal drones. After considering issues concerning the justification of war, the argument that targeted killing makes it easier to start a war, and the argument that killing at a distance is problematic, this chapter focuses on two arguments against automated killing, which are relevant to all kinds of “machine killing”. The first argument (from moral agency) questions whether machines can ever be moral agents and is based on differences in the capacities for moral decision-making between humans and machines. The second argument (from moral patiency), which has received far less attention in the literature on machine ethics and the ethics of drones, focuses on the question of whether machines can ever be “moral patients”. It is argued that there is a morally significant qualitative difference in vulnerability and way of being between drones and humans, and that because of this asymmetry, fully automated killing with little or no human involvement is not justified.