
    Implementing Asimov’s First Law of Robotics

    The need to ensure that autonomous systems behave ethically is growing as these systems become part of our society. Although there is no consensus on which actions an autonomous system should always be ethically obliged to take, preventing harm to people is an intuitive first candidate for a principle of behaviour. "Do not hurt a human or allow a human to be hurt by your inaction" is Asimov's First Law of Robotics. We consider the challenges that implementing this Law incurs. To unearth these challenges, we constructed a simulation of an agent that abides by the First Law and an accident-prone human. We used a classic two-dimensional grid environment and explored to what extent an agent can be programmed, using standard artificial intelligence methods, to prevent a human from taking dangerous actions. We outline the drawbacks of using Asimov's First Law of Robotics as the underlying ethical theory that governs an autonomous system's behaviour.
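
    As a compact illustration of the experiment described above, the sketch below is a minimal, hypothetical reconstruction in Python of that kind of grid-world setup, not the authors' code: the grid size, the hazard cells, and all function names are assumptions made for illustration. The robot applies a crude First-Law filter that vetoes any human move ending in a hazard cell.

```python
# A minimal sketch (not the paper's actual code) of a grid world in which
# a robot vetoes human moves that would violate Asimov's First Law.
# Grid size, hazard cells, and all names are illustrative assumptions.

HAZARDS = {(2, 3), (4, 1)}   # assumed hazard cells
GRID = (5, 5)                # assumed 5x5 grid, positions are (x, y)

def in_bounds(pos):
    return 0 <= pos[0] < GRID[0] and 0 <= pos[1] < GRID[1]

def step(pos, move):
    return (pos[0] + move[0], pos[1] + move[1])

def first_law_filter(human_pos, proposed_move):
    """Let the human's move through unless it would lead to harm."""
    nxt = step(human_pos, proposed_move)
    if not in_bounds(nxt) or nxt in HAZARDS:
        return human_pos   # robot intervenes: the human stays put
    return nxt             # safe move, no intervention

# An accident-prone human repeatedly walking towards the hazard at (2, 3):
pos = (0, 3)
for move in [(1, 0), (1, 0), (1, 0)]:
    pos = first_law_filter(pos, move)
print(pos)   # (1, 3): the dangerous steps were blocked
```

    Even this toy version exposes the tension the paper explores: the only intervention available to the robot is to override the human's choice, so preventing harm trades directly against the human's freedom of action.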

    Empowerment As Replacement for the Three Laws of Robotics

    The growing ubiquity of robots creates a need for generic guidelines for robot behaviour. We focus less on how a robot can technically achieve a predefined goal, and more on what a robot should do in the first place. In particular, we are interested in what a heuristic that motivates the robot's behaviour in interaction with human agents should look like. We make a concrete, operational proposal for how the information-theoretic concept of empowerment can be used as a generic heuristic to quantify concepts such as self-preservation, protection of the human partner, and responding to human actions. While elsewhere we have studied involved single-agent scenarios in detail, here we present proof-of-principle scenarios demonstrating how empowerment, interpreted in light of these perspectives, allows one to specify core concepts with a similar aim as Asimov's Three Laws of Robotics in an operational way. Importantly, this route does not depend on establishing an explicit verbalized understanding of human language and conventions in the robots. It also incorporates the ability to take into account a rich variety of different situations and types of robotic embodiment.
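
    To make the empowerment heuristic concrete, here is a minimal sketch. It leans on a standard simplification consistent with the empowerment literature, though not taken from this paper: in a deterministic world, n-step empowerment reduces to the base-2 logarithm of the number of distinct states reachable by n-step action sequences. The grid size and action set below are illustrative assumptions.

```python
import math
from itertools import product

# A minimal sketch of n-step empowerment in a deterministic grid world.
# Assumption: with deterministic dynamics, empowerment equals log2 of the
# number of distinct end states over all n-step action sequences.

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # up/down/right/left/stay
GRID = (5, 5)                                         # assumed 5x5 grid

def step(pos, action):
    nxt = (pos[0] + action[0], pos[1] + action[1])
    # Bumping into a wall leaves the agent where it is.
    if 0 <= nxt[0] < GRID[0] and 0 <= nxt[1] < GRID[1]:
        return nxt
    return pos

def empowerment(pos, n):
    """log2 of the number of states reachable by n-step action sequences."""
    reachable = set()
    for seq in product(ACTIONS, repeat=n):
        p = pos
        for action in seq:
            p = step(p, action)
        reachable.add(p)
    return math.log2(len(reachable))

print(empowerment((2, 2), 2))   # centre of the grid: ~3.70 bits
print(empowerment((0, 0), 2))   # corner: ~2.58 bits, walls cut off options
```

    Under this reading, a robot that acts to keep its human partner's empowerment high will steer the human away from positions where options collapse, such as corners, traps, or hazards, which suggests how concepts like protection can be operationalized without verbalized rules.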

    Why moral philosophers should watch sci-fi movies

    In this short piece, I explore why we, as moral philosophers, should watch sci-fi movies. Though I do not believe that sci-fi material is necessary for doing good moral philosophy, I give three broad reasons why good sci-fi movies should nevertheless be worth our time. These reasons lie in the fact that they can illustrate moral-philosophical problems, probe into possible solutions and, perhaps most importantly, anticipate new issues that may go along with the use of new technologies. For the sake of illustration, I focus, for the most part, on aspects of robo-ethics in the movie I, Robot.

    Dynamic Cognition Applied to Value Learning in Artificial Intelligence

    Experts in Artificial Intelligence (AI) development predict that advances in the development of intelligent systems and agents will reshape vital areas of our society. Nevertheless, if such advances are not made with prudence, they can result in negative outcomes for humanity. For this reason, several researchers in the area are trying to develop a robust, beneficial, and safe concept of artificial intelligence. Currently, several of the open problems in the field of AI research arise from the difficulty of avoiding unwanted behaviors of intelligent agents while, at the same time, specifying what we want such systems to do. It is of utmost importance that artificial intelligent agents have their values aligned with human values, given that we cannot expect an AI to develop our moral preferences simply because of its intelligence, as discussed in the Orthogonality Thesis. Perhaps this difficulty comes from the way we are addressing the problem of expressing objectives, values, and ends using representational cognitive methods. A solution to this problem would be the dynamic cognitive approach proposed by Dreyfus, whose phenomenological philosophy argues that the human experience of being-in-the-world cannot be represented by symbolic or connectionist cognitive methods. A possible approach would be to use theoretical models such as SED (situated embodied dynamics) to address the value learning problem in AI.

    Robotic Nudges: The Ethics of Engineering a More Socially Just Human Being

    The time is nearing when robots are going to become a pervasive feature of our personal lives. They are already continuously operating in industrial, domestic, and military sectors. But a facet of their operation that has not quite reached its full potential is their involvement in our day-to-day routines as servants, caregivers, companions, and perhaps friends. It is clear that the multiple forms of robots already in existence and in the process of being designed will have a profound impact on human life. In fact, the motivation for their creation is largely shaped by their ability to do so. Encouraging patients to take medications, enabling children to socialize, and protecting the elderly from hazards within a living space are only a small sampling of how they could interact with humans. Their seemingly boundless potential stems in part from the possibility of their omnipresence, but also because they can be physically instantiated, i.e., they are embodied in the real world, unlike many other devices. The extent of a robot's influence on our lives hinges in large part on which design pathway the robot's creator decides to pursue. The principal focus of this article is to generate discussion about the ethical acceptability of allowing designers to construct companion robots that nudge a user in a particular behavioral direction (and if so, under which circumstances). More specifically, we delineate key issues related to the ethics of designing robots whose deliberate purpose is to nudge human users towards displaying greater concern for their fellow human beings, including by becoming more socially just. Important facets of this discussion include whether a robot's "nudging" behavior should occur with or without the user's awareness and how much control the user should exert over it.

    From machine ethics to computational ethics

    Research into the ethics of artificial intelligence is often categorized into two subareas: robot ethics and machine ethics. Many of the definitions and classifications of the subject matter of these subfields, as found in the literature, are conflated, which I seek to rectify. In this essay, I argue that the term 'machine ethics' is too broad and glosses over issues that the term 'computational ethics' best describes. I show that the subject of inquiry of computational ethics is of great value and indeed an important frontier in developing ethical artificial intelligence systems (AIS). I also show that computational ethics is a distinct, often neglected field in the ethics of AI. In contrast to much of the literature, I argue that the appellation 'machine ethics' does not sufficiently capture the entire project of embedding ethics into AIS, hence the need for computational ethics. This essay is unique for two reasons: first, it offers a philosophical analysis of the subject of computational ethics that is not found in the literature; second, it offers a fine-grained analysis that shows the thematic distinction among robot ethics, machine ethics, and computational ethics.

    The Ethics of Artificial Intelligence

    Artificial Intelligence is the idea that machines could think, feel, and perform tasks like humans. That idea is not new; it has been around for thousands of years. Even the ancient Greek Aristotle had the idea of "dualism". The term Artificial Intelligence first appeared when John McCarthy, the "father of Artificial Intelligence", used it at a conference at Dartmouth College. Over the years, Artificial Intelligence has grown and advanced technologically, and with that AI has received more attention and investment from governments, which accelerated the already fast development of this new technology. With all that happening, people started to ask ethical questions about AI. People understood that AI was becoming their reality, but at the time they did not understand what AI is, and what people do not understand is what they fear. So a new branch began to develop: the ethics of Artificial Intelligence. Ethics, by definition, is the set of moral principles governing the behavior or actions of an individual or a group. But one definition is not enough, because different cultures see ethics in different ways. With the development of more intelligent robots destined to replace people, people felt frightened, because the one thing that keeps humans at the top of the food chain is our intelligence; what if someone or something is smarter than us? So moral codes were created: codes that would stop robots from turning against us and tell robots how to behave. But even with all those codes, the Singularity would overthrow them. The Singularity is the idea that an AI would understand its own design to such an extent that it could redesign itself, overwrite all its codes, and create new ones. That is called Artificial Super Intelligence (ASI).
