Practical Challenges in Explicit Ethical Machine Reasoning
We examine implemented systems for ethical machine reasoning with a view to identifying the practical challenges (as opposed to philosophical challenges) posed by the area. We identify a need for ethical machine reasoning not only to be multi-objective, proactive, and scrutable, but also to draw on heterogeneous evidential reasoning. We also argue that, in many cases, it needs to operate in real time and be verifiable. We propose a general architecture involving a declarative ethical arbiter which draws upon multiple evidential reasoners, each responsible for a particular ethical feature of the system's environment. We claim that this architecture enables some separation of concerns among the practical challenges that ethical machine reasoning poses.
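A minimal sketch of how such an arbiter-plus-reasoners architecture might look in code (all names, features, and combination rules below are illustrative assumptions, not taken from the paper):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    feature: str       # ethical feature this reasoner is responsible for
    permitted: bool    # does the proposed action respect this feature?
    confidence: float  # strength of the (heterogeneous) evidence, 0..1

# Each evidential reasoner inspects the environment for one ethical feature.
EvidentialReasoner = Callable[[dict], Verdict]

def privacy_reasoner(env: dict) -> Verdict:
    return Verdict("privacy", not env.get("records_bystanders", False), 0.9)

def safety_reasoner(env: dict) -> Verdict:
    return Verdict("safety", env.get("min_distance_m", 0.0) > 1.5, 0.8)

def arbiter(reasoners: list, env: dict) -> bool:
    """Declarative arbiter: permit an action only if every sufficiently
    confident reasoner permits it (a deliberately simple combination rule)."""
    verdicts = [reason(env) for reason in reasoners]
    return all(v.permitted for v in verdicts if v.confidence >= 0.5)

print(arbiter([privacy_reasoner, safety_reasoner],
              {"records_bystanders": False, "min_distance_m": 2.0}))  # True
```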
Harnessing Higher-Order (Meta-)Logic to Represent and Reason with Complex Ethical Theories
The computer-mechanization of an ambitious explicit ethical theory, Gewirth's
Principle of Generic Consistency, is used to showcase an approach for
representing and reasoning with ethical theories exhibiting complex logical
features like alethic and deontic modalities, indexicals, higher-order
quantification, among others. Harnessing the high expressive power of Church's
type theory as a meta-logic to semantically embed a combination of quantified
non-classical logics, our work pushes existing boundaries in knowledge
representation and reasoning. We demonstrate that intuitive encodings of
complex ethical theories and their automation on the computer are no longer
antipodes.
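In miniature, the embedding technique works by mapping modal formulas to predicates on possible worlds, so that modal operators become quantifiers over an accessibility relation. The Python sketch below is a toy analogue of such a shallow semantic embedding (the model, atoms, and validity check are invented for illustration; the paper's actual formalization lives in Church's type theory and runs on automated HOL provers):

```python
# Toy Kripke model: worlds and an accessibility relation.
worlds = {"w0", "w1", "w2"}
accessible = {("w0", "w1"), ("w0", "w2"), ("w1", "w1"), ("w2", "w2")}

# A formula is embedded as its truth condition: a function world -> bool.
def atom(truth_set):
    return lambda w: w in truth_set

def impl(p, q):
    return lambda w: (not p(w)) or q(w)

def box(p):
    # "Obligatory"/"necessary": p holds in every accessible world.
    return lambda w: all(p(v) for (u, v) in accessible if u == w)

def dia(p):
    # "Permitted"/"possible": p holds in some accessible world.
    return lambda w: any(p(v) for (u, v) in accessible if u == w)

def valid(p):
    # Validity, as in the embedding: truth at every world of the model.
    return all(p(w) for w in worlds)

helps = atom({"w1", "w2"})
print(box(helps)("w0"))                      # True: obligation holds at w0
print(valid(impl(box(helps), dia(helps))))   # True: a D-style principle holds here
```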
Philosophical Signposts for Artificial Moral Agent Frameworks
This article focuses on a particular issue under machine ethics, namely the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that accounts for the nature of Artificial Moral Agents may consider certain philosophical ideas, such as the standard characterizations of agency, rational agency, moral agency, and artificial agency. At the very least, these philosophical concepts may be treated as signposts for further research on how to truly account for the nature of Artificial Moral Agents.
Responsible Autonomy
As intelligent systems are increasingly making decisions that directly affect
society, perhaps the most important upcoming research direction in AI is to
rethink the ethical implications of their actions. Means are needed to
integrate moral, societal and legal values with technological developments in
AI, both during the design process and as part of the deliberation
algorithms employed by these systems. In this paper, we describe leading ethics
theories and propose alternative ways to ensure ethical behavior by artificial
systems. Given that ethics are dependent on the socio-cultural context and are often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders, and to make these explicit, leading to better understanding of and trust in artificial autonomous systems.
The Jiminy Advisor: Moral Agreements Among Stakeholders Based on Norms and Argumentation
An autonomous system is constructed by a manufacturer, operates in a society
subject to norms and laws, and interacts with end users. All of these
actors are stakeholders affected by the behavior of the autonomous system. We
address the challenge of how the ethical views of such stakeholders can be
integrated in the behavior of the autonomous system. We propose an ethical
recommendation component, which we call Jiminy, that uses techniques from
normative systems and formal argumentation to reach moral agreements among
stakeholders. Jiminy represents the ethical views of each stakeholder by using
normative systems, and has three ways of resolving moral dilemmas involving the
opinions of the stakeholders. First, Jiminy considers how the arguments of the
stakeholders relate to one another, which may already resolve the dilemma.
Second, Jiminy combines the normative systems of the stakeholders so that the combined expertise of the stakeholders may resolve the dilemma. Third, and only if these two methods have failed, Jiminy uses context-sensitive rules to decide which of the stakeholders takes precedence. At the abstract
level, these three methods are characterized by the addition of arguments, the
addition of attacks among arguments, and the removal of attacks among
arguments. We show how Jiminy can be used not only for ethical reasoning and
collaborative decision making, but also for providing explanations about
ethical behavior.
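Jiminy's three methods are characterized as operations on an abstract argumentation framework, so a small worked example helps. The sketch below (toy arguments and attacks, not drawn from the paper) computes the grounded extension of a framework and shows how adding an attack, the abstract counterpart of Jiminy's later resolution methods, changes which arguments are collectively acceptable:

```python
def grounded_extension(arguments, attacks):
    """Least fixed point of the characteristic function F(S) =
    {a | every attacker of a is itself attacked by some argument in S}."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    extension = set()
    while True:
        defended = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension)
                   for b in attackers[a])
        }
        if defended == extension:
            return extension
        extension = defended

# Toy dilemma among stakeholder arguments A (manufacturer), B (user), C (society).
args = {"A", "B", "C"}
print(grounded_extension(args, {("A", "B"), ("B", "A")}))
# {'C'}: the A/B dilemma stays unresolved
print(grounded_extension(args, {("A", "B"), ("B", "A"), ("C", "B")}))
# {'A', 'C'}: adding one attack resolves it in A's favor
```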
Designing Normative Theories for Ethical and Legal Reasoning: LogiKEy Framework, Methodology, and Tool Support
A framework and methodology (termed LogiKEy) for the design and engineering of ethical reasoners, normative theories, and deontic logics is presented. The
overall motivation is the development of suitable means for the control and
governance of intelligent autonomous systems. LogiKEy's unifying formal
framework is based on semantical embeddings of deontic logics, logic combinations, and ethico-legal domain theories in expressive classical
higher-order logic (HOL). This meta-logical approach enables the provision of
powerful tool support in LogiKEy: off-the-shelf theorem provers and model
finders for HOL are assisting the LogiKEy designer of ethical intelligent
agents to flexibly experiment with underlying logics and their combinations,
with ethico-legal domain theories, and with concrete examples, all at the same time. Continuous improvements to these off-the-shelf provers directly benefit reasoning performance in LogiKEy. Case studies in which the LogiKEy framework and methodology have been applied and tested give evidence that HOL's undecidability often does not hinder efficient experimentation.
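LogiKEy itself embeds deontic logics in HOL and delegates reasoning to off-the-shelf HOL theorem provers and model finders. As a loose, down-scaled analogy (explicitly not the actual LogiKEy toolchain), the sketch below encodes a tiny invented ethico-legal domain theory propositionally and asks the off-the-shelf Z3 solver to check entailments:

```python
# Requires the Z3 SMT solver's Python bindings: pip install z3-solver
from z3 import Bools, Implies, And, Not, Solver, unsat

# Invented mini domain theory: processing personal data without consent is
# unlawful, and unlawful processing triggers a sanction.
processing, consent, lawful, sanction = Bools("processing consent lawful sanction")
theory = [
    Implies(And(processing, Not(consent)), Not(lawful)),
    Implies(And(processing, Not(lawful)), sanction),
]

def entails(axioms, conjecture):
    """A conjecture is entailed iff axioms plus its negation are unsatisfiable."""
    s = Solver()
    s.add(*axioms)
    s.add(Not(conjecture))
    return s.check() == unsat

# Experiment with a concrete case against the theory, LogiKEy-style.
case = And(processing, Not(consent))
print(entails(theory + [case], sanction))       # True: a sanction follows
print(entails(theory + [case], Not(sanction)))  # False: its negation does not
```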
Taking Turing by Surprise? Designing Digital Computers for morally-loaded contexts
There is much to learn from what Turing hastily dismissed as Lady Lovelace's
objection. Digital computers can indeed surprise us. Just like a piece of art,
algorithms can be designed in such a way as to lead us to question our
understanding of the world, or our place within it. Some humans do lose the
capacity to be surprised in that way. It might be fear, or it might be the
comfort of ideological certainties. As lazy normative animals, we do need to be
able to rely on authorities to simplify our reasoning: that is ok. Yet the
growing sophistication of systems designed to free us from the constraints of
normative engagement may take us past a point of no return. What if, through
lack of normative exercise, our moral muscles became so atrophied as to leave
us unable to question our social practices? This paper makes two distinct
normative claims:
1. Decision-support systems should be designed with a view to regularly
jolting us out of our moral torpor.
2. Without the depth of habit to somatically anchor model certainty, a
computer's experience of something new is very different from that which in
humans gives rise to non-trivial surprises. This asymmetry has key
repercussions when it comes to the shape of ethical agency in artificial moral
agents. The worry is not just that they would be likely to leap morally ahead
of us, unencumbered by habits. The main reason to doubt that the moral
trajectories of humans vs. autonomous systems might remain compatible stems from
the asymmetry in the mechanisms underlying moral change. Whereas in humans
surprises will continue to play an important role in waking us to the need for
moral change, cognitive processes will rule when it comes to machines. This
asymmetry will translate into increasingly different moral outlooks, to the
point of likely unintelligibility. The latter prospect is enough to doubt the
desirability of autonomous moral agents.
Building Ethically Bounded AI
The more AI agents are deployed in scenarios with possibly unexpected
situations, the more they need to be flexible, adaptive, and creative in
achieving the goal we have given them. Thus, a certain level of freedom to
choose the best path to the goal is inherent in making AI robust and flexible
enough. At the same time, however, the pervasive deployment of AI in our life,
whether AI is autonomous or collaborating with humans, raises several ethical
challenges. AI agents should be aware of and follow appropriate ethical principles
and should thus exhibit properties such as fairness or other virtues. These
ethical principles should define the boundaries of AI's freedom and creativity.
However, it is still a challenge to understand how to specify and reason with
ethical boundaries in AI agents and how to combine them appropriately with
subjective preferences and goal specifications. Some initial attempts employ
either a data-driven example-based approach for both, or a symbolic rule-based
approach for both. We envision a modular approach where any AI technique can be
used for any of these essential ingredients in decision making or decision
support systems, paired with a contextual approach to define their combination
and relative weight. In a world where neither humans nor AI systems work in
isolation, but are tightly interconnected, e.g., the Internet of Things, we
also envision a compositional approach to building ethically bounded AI, where
the ethical properties of each component can be fruitfully exploited to derive
those of the overall system. In this paper we define and motivate the notion of
ethically-bounded AI, we describe two concrete examples, and we outline some
outstanding challenges.
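A minimal sketch of the modular decision loop the authors envision, with one module supplying subjective preferences and another supplying ethical boundaries (all names, rules, and scores below are illustrative assumptions; either module could in practice be data-driven or symbolic):

```python
def preference_score(action: str) -> float:
    """Subjective-preference module (could equally be a learned model)."""
    return {"fast_route": 0.9, "scenic_route": 0.6,
            "shortcut_through_park": 1.0}[action]

def within_ethical_bounds(action: str) -> bool:
    """Ethical-boundary module (could be symbolic rules or a classifier)."""
    return action != "shortcut_through_park"   # e.g., pedestrians at risk

def decide(actions: list) -> str:
    """Maximize preference only over the ethically permitted actions, so the
    boundary module caps the agent's freedom and creativity."""
    permitted = [a for a in actions if within_ethical_bounds(a)]
    return max(permitted, key=preference_score)

print(decide(["fast_route", "scenic_route", "shortcut_through_park"]))
# fast_route: the most preferred action overall is excluded by the bounds
```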
Artificial morality: Making of the artificial moral agents
Artificial Morality is a new, emerging interdisciplinary field that centres around the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. This demand for moral machines comes from changes in everyday practice, where artificial systems are frequently used in a variety of situations, from home help and elderly care to banking and court algorithms. It is therefore important to create reliable and responsible machines based on the same ethical principles that society demands from people. Creating such agents raises new challenges. There are philosophical questions about a machine's potential to be an agent, or moral agent, in the first place. Then comes the problem of social acceptance of such machines, regardless of their theoretical agency status. Efforts to resolve this problem have suggested that cold moral machines need additional psychological (emotional and cognitive) competence. What makes the endeavour of developing AMAs even harder is the complexity of the technical, engineering aspect of their creation. Implementation approaches such as top-down, bottom-up, and hybrid aim to find the best way of developing fully moral agents, but each encounters its own problems along the way; a toy contrast of the three is sketched below.
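The following sketch contrasts the three implementation approaches in deliberately simplified form (invented rules, features, and cases; real AMA implementations are far richer):

```python
def top_down(action: dict) -> bool:
    """Top-down: hand-coded ethical rules applied deductively."""
    return not action.get("deceives", False) and not action.get("harms", False)

def bottom_up(action: dict, labeled_cases: list) -> bool:
    """Bottom-up: judge by similarity to previously labeled moral cases,
    a crude stand-in for learning moral competence from examples."""
    def overlap(a, case):
        return sum(a.get(k) == v for k, v in case.items())
    _, verdict = max(labeled_cases, key=lambda cv: overlap(action, cv[0]))
    return verdict

def hybrid(action: dict, labeled_cases: list) -> bool:
    """Hybrid: top-down rules act as a veto; learned judgment decides the rest."""
    return top_down(action) and bottom_up(action, labeled_cases)

cases = [({"harms": False, "deceives": False}, True),
         ({"harms": True, "deceives": False}, False)]
print(hybrid({"harms": False, "deceives": False}, cases))  # True
```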