7 research outputs found

    “How could you even ask that?”: Moral considerability, uncertainty and vulnerability in social robotics

    When it comes to social robotics (robots that engage human social responses via “eyes” and other facial features, voice-based natural-language interactions, and even evocative movements), ethicists, particularly in European and North American traditions, are divided over whether and why such robots might be morally considerable. Some argue that moral considerability is based on internal psychological states like consciousness and sentience, and debate about thresholds of such features sufficient for ethical consideration, a move sometimes criticized for being overly dualistic in its framing of mind versus body. Others, meanwhile, focus on the effects of these robots on human beings, arguing that psychological impact alone can qualify an entity for moral status. What both sides overlook is the importance for ordinary moral reasoning of integrating questions about an entity’s “inner life” with questions about its psychological effect on us. Turning to accounts of relationships in virtue ethics, especially those of the Confucian tradition, we find a more nuanced theory that can provide complex guidance on the moral considerability of social robots, including ethical considerations about whether and how to question this to begin with.

    Human Supremacy as Posthuman Risk

    Human supremacy is the widely held view that human interests ought to be privileged over other interests as a matter of public policy. Posthumanism is a historical and cultural situation characterized by a critical reevaluation of anthropocentrist theory and practice. This paper draws on Rosi Braidotti’s critical posthumanism and the critique of ideal theory in Charles Mills and Serene Khader to address the use of human supremacist rhetoric in AI ethics and policy discussions, particularly in the work of Joanna Bryson. This analysis leads to identifying a set of risks posed by human supremacist policy in a posthuman context, specifically involving the classification of agents by type.
