18 research outputs found

    A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents

    Recently there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of Artificial General Intelligence, or AGI. Moral decision making is arguably one of the most challenging tasks for computational approaches to higher order cognition. The need for increasingly autonomous artificial agents to factor moral considerations into their choices and actions has given rise to another new field of inquiry variously known as Machine Morality, Machine Ethics, Roboethics or Friendly AI. In this paper we discuss how LIDA, an AGI model of human cognition, can be adapted to model both affective and rational features of moral decision making. Using the LIDA model we will demonstrate how moral decisions can be made in many domains using the same mechanisms that enable general decision making. Comprehensive models of human cognition typically aim for compatibility with recent research in the cognitive and neural sciences. Global Workspace Theory (GWT), proposed by the neuropsychologist Bernard Baars (1988), is a highly regarded model of human cognition that is currently being computationally instantiated in several software implementations. LIDA (Franklin et al. 2005) is one such computational implementation. LIDA is both a set of computational tools and an underlying model of human cognition, which provides mechanisms that are capable of explaining how an agent’s selection of its next action arises from bottom-up collection of sensory data and top-down processes for making sense of its current situation. 
We will describe how the LIDA model helps integrate emotions into the human decision making process, and elucidate a process whereby an agent can work through an ethical problem to reach a solution that takes account of ethically relevant factors.

    Revealing the ‘face’ of the robot: introducing the ethics of Levinas to the field of robo-ethics

    This paper explores the possibility of a new philosophical turn in robot ethics, considering whether the concepts of Emmanuel Levinas, particularly his conception of the ‘face of the other’, can be used to understand how non-expert users interact with robots. The term ‘robot’ comes from fiction, and for non-experts and experts alike interaction with robots may be coloured by this history. This paper explores an ethics of robots (and of the use of the term ‘robot’) that is based on the user seeing the robot as infinitely complex.

    Toward machines that behave ethically better than humans do

    With the increasing reliance on autonomously operating agents and robots, the need for ethical machine behavior rises. This paper presents a moral reasoner that combines connectionism, utilitarianism, and ethical theory about moral duties. Its moral decision-making matches the analyses of expert ethicists in the health domain. This may be useful in many applications, especially where machines interact with humans in a medical context. Additionally, when connected to a cognitive model of emotional intelligence and affective decision making, the reasoner can be used to explore how moral decision making impacts affective behavior.
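The reasoner summarized above weighs moral duties against one another. As a rough illustrative sketch only (the duty names, weights, and satisfaction levels below are assumptions for illustration, not the paper's actual model or values), a weighted-duty scorer might look like:

```python
# Illustrative sketch of a weighted-duty moral reasoner: each candidate
# action is scored as a weighted sum of how well it satisfies a set of
# prima facie duties, and the highest-scoring action is selected.

# Assumed duties and weights (hypothetical, for illustration only).
DUTY_WEIGHTS = {"non_maleficence": 3.0, "beneficence": 2.0, "autonomy": 1.0}

def moral_score(duty_satisfaction):
    """duty_satisfaction maps a duty name to a level in [-1, 1]."""
    return sum(DUTY_WEIGHTS[d] * v for d, v in duty_satisfaction.items())

def choose_action(actions):
    """actions maps an action name to its duty-satisfaction dict."""
    return max(actions, key=lambda a: moral_score(actions[a]))

# Hypothetical care scenario: urge a patient to take medication vs.
# accept the patient's refusal.
options = {
    "try_again":      {"non_maleficence": 0.5, "beneficence": 0.5, "autonomy": -0.5},
    "accept_refusal": {"non_maleficence": -0.5, "beneficence": -0.5, "autonomy": 1.0},
}
print(choose_action(options))  # → try_again
```

With these (invented) weights, preventing harm dominates respect for autonomy, so the reasoner urges the patient again; lowering the non-maleficence weight would flip the choice.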

    Towards ethical framework for personal care robots: Review and reflection

    In recent decades, robots have been used noticeably in various industries. Autonomous robots have become embedded in human lives, especially those of elderly and disabled people. The elderly population is growing significantly worldwide; there is therefore an increased need for personal care robots to enhance mobility and promote independence. Robots hold great appeal for many aging and disabled people, both for daily routine tasks and for various healthcare matters. It is essential to follow a proper ethical framework in robot design to fulfill individual needs while considering the potential harmful effects of robots. This paper focuses primarily on existing issues in robot ethics, including general ethical theories and ethics frameworks for robots. Consequentialist ethics is recommended for application in robot ethics frameworks.

    Design of an Immersive Virtual Environment to Investigate How Different Drivers Crash in Trolley-Problem Scenarios

    The Autonomous Vehicle (AV), also known as the self-driving car, promises to be a game changer for the transportation industry. This technology is predicted to drastically reduce the number of traffic fatalities due to human error [21]. However, road driving at any reasonable speed involves some risk. Therefore, even with high-tech AV algorithms and sophisticated sensors, there may be unavoidable crashes due to imperfections in AV systems, or unexpected encounters with wildlife, children, and pedestrians. Whenever risk is involved, an ethical decision needs to be made [33]. While ethical and moral decision-making in humans has long been studied by experts, the advent of artificial intelligence (AI) also calls for machine ethics. To study the different moral and ethical decisions made by humans, experts may use the Trolley Problem [34], a scenario in which one must either pull a switch to redirect a trolley so that it kills one person on a side track, or do nothing, resulting in the deaths of five people. While it is important to take into account the input of members of society and to study how humans crash during unavoidable accidents in order to help program moral and ethical decision-making into self-driving cars, the classical trolley problem is not ideal for this purpose, as it is unrealistic and does not represent the moral situations people face in the real world. This work seeks to increase the realism of the classical trolley problem for use in studies on moral and ethical decision-making by simulating realistic driving conditions in an immersive virtual environment with unavoidable crash scenarios, to investigate how drivers crash during these scenarios.
Chapter 1 gives an in-depth background on autonomous vehicles and the relevant ethical and moral problems; Chapter 2 describes current state-of-the-art online tools and simulators developed to study moral decision-making during unavoidable crashes; Chapter 3 focuses on building the simulator and designing the crash scenarios; Chapter 4 describes human-subjects experiments conducted with the simulator and their results; and Chapter 5 provides conclusions and avenues for future work. (Masters Thesis, Mechanical Engineering, 201)
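The unavoidable-crash scenarios above implicitly compare expected harms across maneuvers. A minimal hypothetical sketch of that utilitarian calculus (the maneuver names, collision probabilities, and people-at-risk counts are all invented for illustration, not the thesis's actual scenarios) could be:

```python
# Hypothetical sketch of the utilitarian comparison behind trolley-style
# crash choices: pick the maneuver with the lowest expected harm,
# modeled here simply as collision probability times people at risk.

def expected_harm(prob_collision, people_at_risk):
    """Expected number of people harmed by a maneuver."""
    return prob_collision * people_at_risk

def least_harm(maneuvers):
    """maneuvers maps a name to (collision probability, people at risk)."""
    return min(maneuvers, key=lambda m: expected_harm(*maneuvers[m]))

# Assumed values for a single illustrative scenario.
scenarios = {
    "stay_in_lane": (0.9, 5),  # likely collision with a group of five
    "swerve_left":  (0.8, 1),  # likely collision with one person
}
print(least_harm(scenarios))  # → swerve_left
```

Real moral judgments, as the thesis argues, are not captured by such a bare calculation; the point of the immersive simulator is precisely to observe what drivers actually do rather than what a formula prescribes.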

    Machine ethics: state of the art and critical observations

    This essay aims to introduce the field of study known as machine ethics, viz. the aspect of the ethics of artificial intelligence concerned with the moral behavior of AI systems. I discuss the present potential of this technology and put forward some ethical considerations about the benefits and perils deriving from the development of this field of research. Given the increasing use of machines in ethically sensitive domains, there is an urgent need for debate to ensure that future technology proves advantageous for humans and their well-being.

    Building TrusTee: The world's most trusted robot

    This essay explores the requirements for building trustworthy robots and artificial intelligence by drawing from various scientific disciplines and taking human values as the starting point. It also presents a research and impact agenda.

    From machine ethics to computational ethics

    Abstract: Research into the ethics of artificial intelligence is often categorized into two subareas: robot ethics and machine ethics. Many of the definitions and classifications of the subject matter of these subfields, as found in the literature, are conflated, which I seek to rectify. In this essay, I argue that the term ‘machine ethics’ is too broad and glosses over issues that the term ‘computational ethics’ best describes. I show that the subject of inquiry of computational ethics is of great value and indeed an important frontier in developing ethical artificial intelligence systems (AIS). I also show that computational ethics is a distinct, often neglected field in the ethics of AI. In contrast to much of the literature, I argue that the appellation ‘machine ethics’ does not sufficiently capture the entire project of embedding ethics into AIS, hence the need for computational ethics. This essay is unique for two reasons: first, it offers a philosophical analysis of the subject of computational ethics that is not found in the literature; second, it offers a fine-grained analysis that shows the thematic distinction among robot ethics, machine ethics, and computational ethics.