
    Should Robots Be Like Humans? A Pragmatic Approach to Social Robotics

    This paper describes the instrumentalizing aspects of social robots, which motivate the term pragmatic social robot. In contrast to humanoid robots, pragmatic social robots (PSRs) are defined by their instrumentalizing aspects, which consist of language, skill, and artificial intelligence. These technical aspects of social robots have led to a tendency to attribute selfhood to them, that is, anthropomorphism. Anthropomorphism can raise problems of responsibility and ontological problems in human-technology relations. As a result, there is an antinomy in the research and development of pragmatic social robotics, considering that such robots are expected to achieve similarity with humans in completing tasks. How can we avoid anthropomorphism in the research and development of PSRs while ensuring their flexibility? In response to this issue, I suggest that intuition should be instrumentalized to advance PSRs’ social skills. Intuition, as theorized by Henri Bergson and Efraim Fischbein, exceeds the capacity of logical analysis in solving problems. Robots should be like humans in the sense that their instrumentalizing aspects meet the criteria for the value of human social skills.

    Human Supremacy as Posthuman Risk

    Human supremacy is the widely held view that human interests ought to be privileged over other interests as a matter of public policy. Posthumanism is a historical and cultural situation characterized by a critical reevaluation of anthropocentrist theory and practice. This paper draws on Rosi Braidotti’s critical posthumanism and the critique of ideal theory in Charles Mills and Serene Khader to address the use of human supremacist rhetoric in AI ethics and policy discussions, particularly in the work of Joanna Bryson. This analysis leads to identifying a set of risks posed by human supremacist policy in a posthuman context, specifically involving the classification of agents by type.

    Ethics of Socially Disruptive Technologies: An Introduction

    Technologies shape who we are, how we organize our societies and how we relate to nature. For example, social media challenges democracy; artificial intelligence raises the question of what is unique to humans; and the possibility to create artificial wombs may affect notions of motherhood and birth. Some have suggested that we address global warming by engineering the climate, but how does this impact our responsibility to future generations and our relation to nature? This book shows how technologies can be socially and conceptually disruptive and investigates how to come to terms with this disruptive potential. Four technologies are studied: social media, social robots, climate engineering and artificial wombs. The authors highlight the disruptive potential of these technologies, and the new questions this raises. The book also discusses responses to conceptual disruption, like conceptual engineering, the deliberate revision of concepts.

    Online Dispute Resolution: Stinky, Repugnant, or Drab?


    ARTIFICIAL INTELLIGENCE, LLC: CORPORATE PERSONHOOD AS TORT REFORM

    Our legal system has long tried to fit the square peg of artificial intelligence (AI) technologies into the round hole of the current tort regime, overlooking the inability of traditional liability schemes to address the nuances of how AI technology creates harms. The current tort regime deals out rough justice, using strict liability for some AI products and the negligence rule for other AI services, both of which are insufficiently tailored to achieve public policy objectives. Under a strict liability regime, where manufacturers are always held liable for the faults of their technology regardless of knowledge or precautionary measures, firms are incentivized to play it safe and stifle innovation. But even with this cautionary stance, the goals of strict liability cannot be met due to the unique nature of AI technology: its mistakes are merely “efficient errors”: they appropriately surpass the human baseline, they are game theory problems intended for a jury, they are necessary to train a robust system, or they are harmless but misclassified. Under a negligence liability regime, where the onus falls entirely on consumers to prove the element of causation, victimized consumers must surmount the difficult hurdle of tracing the vectors of causation through the “black box” of algorithms. Unable to do so, many are left without sufficient recourse or compensation.

    The ethics of pet robots in dementia care settings: Care professionals’ and organisational leaders’ ethical intuitions

    Background: Pet robots are gaining momentum as a technology-based intervention to support the psychosocial wellbeing of people with dementia. Current research suggests that they can reduce agitation and improve mood and social engagement. The implementation of pet robots in care for persons with dementia raises several ethical debates. However, there is a paucity of empirical evidence to uncover care providers’ ethical intuitions, defined as individuals’ fundamental moral knowledge that is not underpinned by any specific propositions. Objectives: To explore care professionals’ and organisational leaders’ ethical intuitions before and while implementing pet robots in nursing homes for routine dementia care. Materials and methods: We undertook a secondary qualitative analysis of data generated from in-depth, semi-structured interviews with 22 care professionals and organisational leaders from eight nursing homes in Ireland. Data were analysed using reflexive thematic analysis. Ethical constructs derived from a comprehensive review of argument-based ethics literature were used to guide the deductive coding of concepts. An inductive approach was used to generate open codes that did not fall within the pre-existing concepts. Findings: Ethical intuitions for implementing pet robots manifested at three levels: (1) individual-relational, (2) organisational and (3) societal. At the individual-relational level, ethical intuitions involved supporting the autonomy of residents and care providers, using the robots to alleviate residents’ social isolation, and the physical and psychosocial impacts associated with their use. Some care providers had differing sentiments about anthropomorphizing pet robots. At the organisational level, intuitions related to the use of pet robots to relieve care provision, changes to the organisational workflow, and varying extents of openness amongst care providers to using technological innovations. At the societal level, intuitions pertained to conceptions of dementia care in nursing homes and to social justice relating to the affordability and availability of pet robots. Discrepancies between participants’ ethical intuitions and existing philosophical arguments were uncovered. Conclusion: Care professionals and organisational leaders had different opinions on how pet robots are or should be implemented for residents with dementia. Future research should consider involving care practitioners, people with dementia, and their family members in the ethics dialogue to support the sustainable, ethical use of pet robots in practice.

    Social Robotics and the Good Life: The Normative Side of Forming Emotional Bonds With Robots

    Robots as social companions in close proximity to humans have a strong potential to become increasingly prevalent in the coming years, especially in the realms of elder day care, child rearing, and education. As human beings, we have the fascinating ability to emotionally bond with various counterparts, not exclusively with other human beings, but also with animals, plants, and sometimes even objects. Therefore, we need to answer the fundamental ethical questions that concern human-robot interactions per se, and we need to address how we conceive of "good lives", as more and more aspects of our daily lives will be interwoven with social robots.

    Robotics in Germany and Japan

    This book presents an intercultural and interdisciplinary framework covering current research fields such as Roboethics, Hermeneutics of Technologies, Technology Assessment, Robotics in Japanese Popular Culture and Music Robots. Contributions on cultural interrelations, technical visions and essays round out the content of the book.

    AI and sex robots: an examination of the technologization of sexuality

    116 leaves; 29 cm. Includes abstract. Includes bibliographical references (leaves 102-116). The emergence of sex robots with rudimentary but slowly advancing AI is a relatively new phenomenon, and there is little understanding of what the possible implications of these automatons could be, particularly regarding their impact on human sexuality. Drawing on a number of media sources, this paper looks at AI and sex robots, specifically how they function as a means by which a certain kind of sexuality is constructed and maintained. The specific sources are the movies A.I. Artificial Intelligence (2001), Ex Machina (2014), and Blade Runner (2007), and the television adaptation of Westworld (2016). I look at this relationship of sexuality and technology and the ways it has shaped not only how each of these categories is viewed in relation to the other, but also the ways they have had a direct impact on the development of social practices. Specifically, I am interested in the ways sex robots could be used to reinforce harmful gender stereotypes about women and lead to sexual violence. This project aims to fill this gap in the literature on sex robots, with a theoretical approach due to the lack of empirical data on the subject.