Artificial morality: Making of the artificial moral agents
Abstract:
Artificial Morality is a new, emerging interdisciplinary field that centres
around the idea of creating artificial moral agents, or AMAs, by implementing moral
competence in artificial systems. AMAs are ought to be autonomous agents capable of
socially correct judgements and ethically functional behaviour. This request for moral
machines comes from the changes in everyday practice, where artificial systems are being
frequently used in a variety of situations from home help and elderly care purposes to
banking and court algorithms. It is therefore important to create reliable and responsible
machines based on the same ethical principles that society demands from people. New
challenges in creating such agents appear. There are philosophical questions about a
machine's potential to be an agent, or moral agent, in the first place. Then comes the
problem of the social acceptance of such machines, regardless of their theoretical agency
status. As a result of efforts to resolve this problem, it has been suggested that cold
moral machines need additional psychological (emotional and cognitive) competence.
What makes this endeavour of developing AMAs even harder is the complexity of the
technical, engineering aspect of their creation. Implementation approaches such as the
top-down, bottom-up and hybrid approaches aim to find the best way of developing fully
moral agents, but each encounters its own problems along the way.
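To make the contrast between these implementation approaches concrete, here is a minimal, purely illustrative Python sketch; the rule set, the stand-in model, and the threshold are assumptions for illustration, not taken from the paper.

```python
# Purely illustrative sketch (not from the paper): the three
# implementation approaches for AMAs. Rules, the model stub, and the
# threshold below are hypothetical.

def top_down_permits(action, forbidden_rules):
    """Top-down: check an action against explicit, hand-written moral rules."""
    return not any(rule(action) for rule in forbidden_rules)

def bottom_up_score(action, learned_model):
    """Bottom-up: a model trained on examples of acceptable behaviour
    scores the action's moral acceptability."""
    return learned_model(action)

def hybrid_decision(action, forbidden_rules, learned_model, threshold=0.5):
    """Hybrid: explicit rules act as a veto; a learned score decides the rest."""
    if not top_down_permits(action, forbidden_rules):
        return False
    return bottom_up_score(action, learned_model) >= threshold

# Hypothetical usage: one hard rule and a stub in place of a trained model.
causes_harm = lambda action: action.get("harm", 0) > 0
stub_model = lambda action: 0.9  # stand-in for a learned scorer
print(hybrid_decision({"harm": 0}, [causes_harm], stub_model))  # True
```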
Modeling Moral Choices in Social Dilemmas with Multi-Agent Reinforcement Learning
Practical uses of Artificial Intelligence (AI) in the real world have
demonstrated the importance of embedding moral choices into intelligent agents.
They have also highlighted that defining top-down ethical constraints on AI
according to any one type of morality is extremely challenging and can pose
risks. A bottom-up learning approach may be more appropriate for studying and
developing ethical behavior in AI agents. In particular, we believe that an
interesting and insightful starting point is the analysis of emergent behavior
of Reinforcement Learning (RL) agents that act according to a predefined set of
moral rewards in social dilemmas.
In this work, we present a systematic analysis of the choices made by
intrinsically-motivated RL agents whose rewards are based on moral theories. We
aim to design reward structures that are simplified yet representative of a set
of key ethical systems. Therefore, we first define moral reward functions that
distinguish between consequence- and norm-based agents, between morality based
on societal norms or internal virtues, and between single- and mixed-virtue
(e.g., multi-objective) methodologies. Then, we evaluate our approach by
modeling repeated dyadic interactions between learning moral agents in three
iterated social dilemma games (Prisoner's Dilemma, Volunteer's Dilemma and Stag
Hunt). We analyze the impact of different types of morality on the emergence of
cooperation, defection or exploitation, and the corresponding social outcomes.
Finally, we discuss the implications of these findings for the development of
moral agents in artificial and mixed human-AI societies.
Comment: 7 pages, currently under review for a conference
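As an illustration of what such moral reward functions might look like, the following Python sketch defines consequence-based, norm-based, and mixed rewards over one round of the Prisoner's Dilemma. The payoff matrix is the standard one, but the specific reward definitions and mixing weight are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: possible moral reward functions for one round of the
# iterated Prisoner's Dilemma. Reward definitions are illustrative only.

C, D = "cooperate", "defect"

# Standard Prisoner's Dilemma payoffs: (my_payoff, their_payoff).
PAYOFFS = {
    (C, C): (3, 3),
    (C, D): (0, 5),
    (D, C): (5, 0),
    (D, D): (1, 1),
}

def consequentialist_reward(my_action, their_action):
    """Consequence-based: intrinsic reward is the total welfare produced."""
    mine, theirs = PAYOFFS[(my_action, their_action)]
    return mine + theirs

def norm_based_reward(my_action, last_their_action):
    """Norm-based: reward for following a social norm (here, conditional
    cooperation: cooperate with cooperators, sanction defectors),
    independent of the game payoff."""
    follows_norm = (my_action == C) == (last_their_action == C)
    return 1.0 if follows_norm else -1.0

def mixed_reward(my_action, their_action, last_their_action, w=0.5):
    """Multi-objective (mixed-virtue) combination of the two rewards."""
    return (w * consequentialist_reward(my_action, their_action)
            + (1 - w) * norm_based_reward(my_action, last_their_action))

# Hypothetical usage: evaluating one round.
print(consequentialist_reward(C, C))              # 6: mutual cooperation maximizes welfare
print(norm_based_reward(D, last_their_action=D))  # 1.0: sanctioning a defector
```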
Delegating and Distributing Morality: Can We Inscribe Privacy Protection in a Machine?
This paper addresses the question of delegation of morality to a machine, through a consideration of whether or not non-humans can be considered to be moral. The aspect of morality under consideration here is the protection of privacy. The topic is introduced through two cases where there was a failure in sharing and retaining personal data protected by UK data protection law, with tragic consequences. In some sense this can be regarded as a failure in the process of delegating morality to a computer database. In the UK, the issues that these cases raise have resulted in legislation designed to protect children, which allows for the creation of a huge database for children. Paradoxically, we have the situation where we failed to use digital data in enforcing the law to protect children, yet we may now rely heavily on digital technologies to care for children. I draw on the work of Floridi, Sanders, Collins, Kusch, Latour and Akrich, a spectrum of work stretching from philosophy to the sociology of technology and the "seamless web" or "actor-network" approach to studies of technology. Intentionality is considered, but not deemed necessary for meaningful moral behaviour. Floridi's and Sanders' concept of "distributed morality" accords with the network of agency characterized by actor-network approaches. The paper concludes that enfranchising non-humans, in the shape of computer databases of personal data, as moral agents is not necessarily problematic, but a balance of delegation of morality must be struck between human and non-human actors.
Building machines that learn and think about morality
Lake et al. propose three criteria which, they argue, will bring artificial intelligence (AI) systems closer to human cognitive abilities. In this paper, we explore the application of these criteria to a particular domain of human cognition: our capacity for moral reasoning. In doing so, we explore a set of considerations relevant to the development of AI moral decision-making. Our main focus is on the relation between dual-process accounts of moral reasoning and model-free/model-based forms of machine learning. We also discuss how work in embodied and situated cognition could provide a valuable perspective on future research.
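The parallel between dual-process accounts and model-free/model-based learning can be made concrete with a toy sketch. The following Python fragment is purely illustrative; the states, actions, cached values, and one-step model are hypothetical.

```python
# Toy contrast (illustrative only): model-free vs. model-based evaluation
# of a moral choice, loosely mirroring dual-process accounts. States,
# actions, cached values, and the one-step model are hypothetical.

# Model-free: cached action values learned from past reward, analogous
# to fast, habit-like moral intuition.
q_table = {("dilemma", "act"): -1.0, ("dilemma", "refrain"): 0.2}

def model_free_choice(state, actions):
    return max(actions, key=lambda a: q_table.get((state, a), 0.0))

# Model-based: explicitly simulate each action's consequences, analogous
# to slow, deliberative moral reasoning.
def simulate(state, action):
    """Hypothetical one-step model: returns (next_state, utility)."""
    outcomes = {"act": ("harm_done", -5.0), "refrain": ("status_quo", 1.0)}
    return outcomes[action]

def model_based_choice(state, actions):
    return max(actions, key=lambda a: simulate(state, a)[1])

print(model_free_choice("dilemma", ["act", "refrain"]))   # refrain
print(model_based_choice("dilemma", ["act", "refrain"]))  # refrain
```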
A Case for Machine Ethics in Modeling Human-Level Intelligent Agents
This paper focuses on the research field of machine ethics and how it relates to a technological singularity: a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To help ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of moral reasoning, judgment, and decision-making. To date, different frameworks on how to arrive at these agents have been put forward. However, there seems to be no hard consensus as to which framework would likely yield a positive result. With the body of work that they have contributed to the study of moral agency, philosophers may contribute to the growing literature on artificial moral agency. While doing so, they could also think about how the said concept could affect other important philosophical concepts.
Philosophical Signposts for Artificial Moral Agent Frameworks
This article focuses on a particular issue under machine ethics, that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the nature of Artificial Moral Agents may consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency. At the very least, the said philosophical concepts may be treated as signposts for further research on how to truly account for the nature of Artificial Moral Agents.
The Jiminy Advisor: Moral Agreements Among Stakeholders Based on Norms and Argumentation
An autonomous system is constructed by a manufacturer, operates in a society
subject to norms and laws, and interacts with end users. All of these
actors are stakeholders affected by the behavior of the autonomous system. We
address the challenge of how the ethical views of such stakeholders can be
integrated in the behavior of the autonomous system. We propose an ethical
recommendation component, which we call Jiminy, that uses techniques from
normative systems and formal argumentation to reach moral agreements among
stakeholders. Jiminy represents the ethical views of each stakeholder by using
normative systems, and has three ways of resolving moral dilemmas involving the
opinions of the stakeholders. First, Jiminy considers how the arguments of the
stakeholders relate to one another, which may already resolve the dilemma.
Secondly, Jiminy combines the normative systems of the stakeholders such that
the combined expertise of the stakeholders may resolve the dilemma. Thirdly,
and only if these two methods have failed, Jiminy uses context-sensitive
rules to decide which of the stakeholders takes precedence. At the abstract
level, these three methods are characterized by the addition of arguments, the
addition of attacks among arguments, and the removal of attacks among
arguments. We show how Jiminy can be used not only for ethical reasoning and
collaborative decision making, but also for providing explanations about
ethical behavior.
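The three abstract operations described above (adding arguments, adding attacks, and removing attacks) can be illustrated with a minimal Dung-style argumentation framework. The sketch below is not the Jiminy implementation; the stakeholder arguments and attacks are hypothetical placeholders.

```python
# Hedged sketch (not the Jiminy implementation): a minimal Dung-style
# abstract argumentation framework supporting the three operations the
# paper names: adding arguments, adding attacks, and removing attacks.
# The stakeholder arguments and attacks below are hypothetical.

class ArgumentationFramework:
    def __init__(self):
        self.arguments = set()
        self.attacks = set()  # (attacker, target) pairs

    def add_argument(self, a):
        self.arguments.add(a)

    def add_attack(self, attacker, target):
        self.attacks.add((attacker, target))

    def remove_attack(self, attacker, target):
        self.attacks.discard((attacker, target))

    def grounded_extension(self):
        """Grounded semantics: repeatedly accept every argument whose
        attackers are all themselves attacked by accepted arguments."""
        accepted = set()
        while True:
            newly = {
                a for a in self.arguments - accepted
                if all(any((d, b) in self.attacks for d in accepted)
                       for (b, t) in self.attacks if t == a)
            }
            if not newly:
                return accepted
            accepted |= newly

# Hypothetical dilemma: the user's argument is attacked by the
# manufacturer's; combining normative systems adds a legal attack that
# defends the user and resolves the dilemma (the paper's second method).
af = ArgumentationFramework()
for arg in ("user: stop", "manufacturer: proceed", "law: stop"):
    af.add_argument(arg)
af.add_attack("manufacturer: proceed", "user: stop")
af.add_attack("law: stop", "manufacturer: proceed")
print(af.grounded_extension())  # {'law: stop', 'user: stop'} (order may vary)
```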