13,307 research outputs found

    Robot Betrayal: a guide to the ethics of robotic deception

    Get PDF
If a robot sends a deceptive signal to a human user, is this always and everywhere an unethical act, or might it sometimes be ethically desirable? Building upon previous work in robot ethics, this article tries to clarify and refine our understanding of the ethics of robotic deception. It does so by making three arguments. First, it argues that we need to distinguish between three main forms of robotic deception (external state deception, superficial state deception, and hidden state deception) in order to think clearly about its ethics. Second, it argues that the second type, superficial state deception, is not best thought of as a form of deception, even though it is frequently criticised as such. And third, it argues that the third type, hidden state deception, is best understood as a form of betrayal, because doing so captures the unique ethical harm to which it gives rise and justifies special ethical protections against its use.

Affect and believability in game characters: a review of the use of affective computing in games

    Get PDF
Virtual agents are important in many digital environments. Designing a character that highly engages users in terms of interaction is an intricate task constrained by many requirements. One aspect that has gained more attention recently is the affective dimension of the agent. Several studies have addressed the possibility of developing an affect-aware system for a better user experience. Particularly in games, including emotional and social features in NPCs adds depth to the characters, enriches interaction possibilities, and, combined with a basic level of competence, creates a more appealing game. Design requirements for emotionally intelligent NPCs differ from those for general autonomous agents, with the main goal being a stronger player-agent relationship as opposed to problem solving and goal assessment. Nevertheless, deploying an affective module into NPCs adds to the complexity of the architecture and its constraints. In addition, using such composite NPCs in games seems beyond current technology, despite some brave attempts. However, a MARPO-type modular architecture would seem a useful starting point for adding emotions.

    Assistive robotics: research challenges and ethics education initiatives

    Get PDF
Assistive robotics is a fast-growing field aimed at helping caregivers in hospitals, rehabilitation centers and nursing homes, as well as empowering people with reduced mobility at home, so that they can autonomously fulfill their daily living activities. The need to function in dynamic human-centered environments poses new research challenges: robotic assistants need to have friendly interfaces, be highly adaptable and customizable, be very compliant and intrinsically safe to people, and be able to handle deformable materials. Besides technical challenges, assistive robotics also raises ethical challenges, which have led to the emergence of a new discipline: Roboethics. Several institutions are developing regulations and standards, and many ethics education initiatives include content on human-robot interaction and human dignity in assistive situations. In this paper, the state of the art in assistive robotics is briefly reviewed, and educational materials from a university course on Ethics in Social Robotics and AI focusing on the assistive context are presented.

    Artificial Companions with Personality and Social Role

    No full text
    Subtitle: "Expectations from Users on the Design of Groups of Companions"International audienceRobots and virtual characters are becoming increasingly used in our everyday life. Yet, they are still far from being able to maintain long-term social relationships with users. It also remains unclear what future users will expect from these so-called "artificial companions" in terms of social roles and personality. These questions are of importance because users will be surrounded with multiple artificial companions. These issues of social roles and personality among a group of companions are sledom tackled in user studies. In this paper, we describe a study in which 94 participants reported that social roles and personalities they would expect from groups of companions. We explain how the resulsts give insights for the design of future groups of companions endowed with social intelligence

    The Philosophical Case for Robot Friendship

    Get PDF
Friendship is an important part of the good life. While many roboticists are eager to create friend-like robots, many philosophers and ethicists are concerned. They argue that robots cannot really be our friends. Robots can only fake the emotional and behavioural cues we associate with friendship. Consequently, we should resist the drive to create robot friends. In this article, I argue that the philosophical critics are wrong. Using the classic virtue-ideal of friendship, I argue that robots can plausibly be considered our virtue friends, and that to do so is philosophically reasonable. Furthermore, I argue that even if you do not think that robots can be our virtue friends, they can fulfil other important friendship roles, and can complement and enhance the virtue friendships between human beings.

    The Incorporation of Moral-Development Language for Machine-Learning Companion Robots

    Get PDF
Among the ongoing debates over the ethical implications of artificial-intelligence development and applications, AI morality, and the nature of autonomous agency for robots, the question of how to think about the moral assumptions implicit in machine-learning capacities for so-called companion robots is arguably an urgent one. This project links the development of machine-learning algorithmic design with the language of moral-development theory. It argues that robotic algorithmic responses should incorporate language linked to higher-order moral reasoning, reflecting notions of universal respect, community obligation and justice, to encourage similar deliberation among human subjects.