Consequentialism & Machine Ethics: Towards a Foundational Machine Ethic to Ensure the Right Action of Artificial Moral Agents
In this paper, I argue that Consequentialism is the kind of ethical theory most plausible to serve as the basis for a machine ethic. First, I outline the concept of an artificial moral agent and the essential properties of Consequentialism. Second, I present a scenario involving autonomous vehicles to illustrate how the features of Consequentialism inform agent action. Third, I evaluate an alternative Deontological approach and discuss the problem of moral conflict. Finally, I present and briefly challenge two bottom-up approaches to the development of machine ethics.
Building Ethically Bounded AI
The more AI agents are deployed in scenarios with possibly unexpected situations, the more they need to be flexible, adaptive, and creative in achieving the goals we have given them. A certain level of freedom to choose the best path to a goal is thus inherent in making AI robust and flexible enough. At the same time, however, the pervasive deployment of AI in our lives, whether autonomous or collaborating with humans, raises several ethical challenges. AI agents should be aware of and follow appropriate ethical principles, and should thus exhibit properties such as fairness or other virtues. These ethical principles should define the boundaries of AI's freedom and creativity. However, it remains a challenge to understand how to specify and reason with ethical boundaries in AI agents, and how to combine them appropriately with subjective preferences and goal specifications. Some initial attempts employ either a data-driven, example-based approach or a symbolic, rule-based approach for both ingredients. We envision a modular approach in which any AI technique can be used for any of these essential ingredients in decision making or decision support systems, paired with a contextual approach that defines their combination and relative weight. In a world where neither humans nor AI systems work in isolation, but are tightly interconnected, e.g., in the Internet of Things, we also envision a compositional approach to building ethically bounded AI, where the ethical properties of each component can be fruitfully exploited to derive those of the overall system. In this paper we define and motivate the notion of ethically bounded AI, describe two concrete examples, and outline some outstanding challenges.

Comment: Published at AAAI Blue Sky Track, winner of Blue Sky Award
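To make the modular picture concrete, the following is a minimal illustrative sketch, not code from the paper: an ethics module (which could be rule-based or learned) acts as a hard filter on candidate actions, and a preference module ranks the permissible ones. The names (ethically_bounded_choice, is_permissible, preference_score) are hypothetical, and the hard-constraint combination shown here is only one simple instance of the contextual weighting the authors envision.

    from typing import Callable, Iterable, Optional

    Action = str

    def ethically_bounded_choice(
        actions: Iterable[Action],
        is_permissible: Callable[[Action], bool],   # ethics module: rules or a learned classifier
        preference_score: Callable[[Action], float] # preference module: any AI technique
    ) -> Optional[Action]:
        # Ethics module as a hard constraint: prune impermissible actions first.
        permissible = [a for a in actions if is_permissible(a)]
        if not permissible:
            return None  # no ethically acceptable option: defer or escalate to a human
        # Preference module: rank only within the ethical boundary.
        return max(permissible, key=preference_score)

    # Hypothetical usage: an autonomous vehicle choosing a maneuver.
    choice = ethically_bounded_choice(
        actions=["brake", "swerve_into_crowd", "accelerate"],
        is_permissible=lambda a: a != "swerve_into_crowd",  # e.g. a deontic rule
        preference_score={"brake": 0.9, "swerve_into_crowd": 1.0, "accelerate": 0.2}.get,
    )
    print(choice)  # -> "brake": most preferred among the permissible actions

The design point is that the two ingredients are decoupled: either module could be swapped for a data-driven or symbolic implementation without changing the overall decision procedure, which is the modularity the abstract argues for.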
Preserving a combat commander’s moral agency: The Vincennes Incident as a Chinese Room
We argue that a command and control system can undermine a commander's moral agency if it causes him/her to process information in a purely syntactic manner, or if it precludes him/her from ascertaining the truth of that information. Our case is based on the resemblance between a commander's circumstances and those of the protagonist in Searle's Chinese Room, together with a careful reading of Aristotle's notions of the 'compulsory' and 'ignorance'. We further substantiate our case by considering the Vincennes Incident, in which the crew of a warship mistakenly shot down a civilian airliner. To support a combat commander's moral agency, designers should strive for systems that help commanders and command teams to think about and manipulate information at the level of meaning. 'Down conversions' of information from meaning to symbols must be adequately recovered by 'up conversions', and commanders must be able to check that their sensors are working and are being used correctly. Meanwhile, ethicists should establish a mechanism that tracks the potential moral implications of choices in a system's design and intended operation. Finally, we highlight a gap in normative ethics: we have ways to deny moral agency, but not to affirm it.
Taking Turing by Surprise? Designing Digital Computers for morally-loaded contexts
There is much to learn from what Turing hastily dismissed as Lady Lovelace's objection. Digital computers can indeed surprise us. Just like a piece of art, algorithms can be designed in such a way as to lead us to question our understanding of the world, or our place within it. Some humans do lose the capacity to be surprised in that way, whether through fear or through the comfort of ideological certainties. As lazy normative animals, we do need to be able to rely on authorities to simplify our reasoning: that is OK. Yet the growing sophistication of systems designed to free us from the constraints of normative engagement may take us past a point of no return. What if, through lack of normative exercise, our moral muscles became so atrophied as to leave us unable to question our social practices? This paper makes two distinct normative claims:
1. Decision-support systems should be designed with a view to regularly
jolting us out of our moral torpor.
2. Without the depth of habit to somatically anchor model certainty, a computer's experience of something new is very different from that which, in humans, gives rise to non-trivial surprises. This asymmetry has key repercussions for the shape of ethical agency in artificial moral agents. The worry is not just that they would be likely to leap morally ahead of us, unencumbered by habits. The main reason to doubt that the moral trajectories of humans vs. autonomous systems might remain compatible stems from the asymmetry in the mechanisms underlying moral change. Whereas in humans surprises will continue to play an important role in waking us to the need for moral change, cognitive processes will rule when it comes to machines. This asymmetry will translate into increasingly different moral outlooks, to the point of likely unintelligibility. The latter prospect is enough to doubt the desirability of autonomous moral agents.