
    Autonomous Reboot: the challenges of artificial moral agency and the ends of Machine Ethics

    Ryan Tonkens (2009) has issued a seemingly impossible challenge: to articulate a comprehensive ethical framework within which artificial moral agents (AMAs) satisfy a Kantian-inspired recipe - both "rational" and "free" - while also satisfying perceived prerogatives of Machine Ethics to create AMAs that are perfectly, not merely reliably, ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach, who have pushed for the reinvention of traditional ethics in order to avoid "ethical nihilism" due to the reduction of morality to mechanical causation, and for redoubled efforts toward a comprehensive vision of human ethics to guide machine ethicists on the issue of moral agency. Three options thus present themselves: reinterpret traditional ethics in a way that affords a comprehensive account of moral agency inclusive of both artificial and natural agents, “muddle through” regardless, or give up on the possibility. This paper pursues the first option, meeting Tonkens' "challenge" and addressing Wallach's concerns through Beavers' proposed means: "landscaping" traditional moral theory so as to yield the necessary comprehensive and inclusive account, one that at once draws into question the stated goals of Machine Ethics itself.

    Understanding Engineers' Drivers and Impediments for Ethical System Development: The Case of Privacy and Security Engineering

    Machine ethics is a key challenge in times when digital systems play an increasing role in people's lives. At the core of machine ethics is the handling of personal data and the security of machine operations. Yet privacy and security engineering are a challenge in today's business world, where personal data markets, corporate deadlines and a lack of perfectionism frame the context in which engineers need to work. Besides these organizational and market challenges, each engineer has his or her own view of the importance of these values, which can foster or inhibit taking them into consideration. We present the results of an empirical study of 124 engineers, based on the Theory of Planned Behavior and Jonas' Principle of Responsibility, to understand the drivers and impediments of ethical system development as far as privacy and security engineering are concerned. We find that many engineers consider the two values important but do not enjoy working on them. We also find that many struggle with the organizational environment: they face a lack of the time and autonomy that is necessary for building ethical systems, even at this basic level. Organizations' privacy and security norms are often too weak or even oppose value-based design, putting engineers in conflict with their organizations. Our data indicate that it is largely engineers' individually perceived responsibility, as well as a few character traits, that makes a positive difference.

    Cryptocurrency with a Conscience: Using Artificial Intelligence to Develop Money that Advances Human Ethical Values

    Cryptocurrencies like Bitcoin are offering new avenues for economic empowerment to individuals around the world. However, they also provide a powerful tool that facilitates criminal activities such as human trafficking and illegal weapons sales that cause great harm to individuals and communities. Cryptocurrency advocates have argued that the ethical dimensions of cryptocurrency are not qualitatively new, insofar as money has always been understood as a passive instrument that lacks ethical values and can be used for good or ill purposes. In this paper, we challenge the presumption that money must be ‘value-neutral.’ Building on advances in artificial intelligence, cryptography, and machine ethics, we argue that it is possible to design artificially intelligent cryptocurrencies that are not ethically neutral but which autonomously regulate their own use in a way that reflects the ethical values of particular human beings – or even entire human societies. We propose a technological framework for such cryptocurrencies and then analyse the legal, ethical, and economic implications of their use. Finally, we suggest that the development of cryptocurrencies possessing ethical as well as monetary value can provide human beings with a new economic means of positively influencing the ethos and values of their societies.

    Engineering moral machines

    This article provides a short report on a recent Dagstuhl Seminar on “Engineering Moral Agents”. Imbuing robots and autonomous systems with ethical norms and values is an increasingly urgent challenge, given rapid developments in, for example, driverless cars, unmanned air vehicles (drones), and care assistant robots. Seminar participants discussed two immediate problems: a challenge for philosophical research is the formalisation of ethics in a format that lends itself to machine implementation, while a challenge for computer science and robotics is the actual implementation of moral reasoning and conduct in autonomous systems. This article reports on these two challenges.

    Taking Turing by Surprise? Designing Digital Computers for morally-loaded contexts

    There is much to learn from what Turing hastily dismissed as Lady Lovelace's objection. Digital computers can indeed surprise us. Just like a piece of art, algorithms can be designed in such a way as to lead us to question our understanding of the world, or our place within it. Some humans do lose the capacity to be surprised in that way. It might be fear, or it might be the comfort of ideological certainties. As lazy normative animals, we do need to be able to rely on authorities to simplify our reasoning: that is OK. Yet the growing sophistication of systems designed to free us from the constraints of normative engagement may take us past a point of no return. What if, through lack of normative exercise, our moral muscles became so atrophied as to leave us unable to question our social practices? This paper makes two distinct normative claims: (1) decision-support systems should be designed with a view to regularly jolting us out of our moral torpor; (2) without the depth of habit to somatically anchor model certainty, a computer's experience of something new is very different from that which, in humans, gives rise to non-trivial surprises. This asymmetry has key repercussions when it comes to the shape of ethical agency in artificial moral agents. The worry is not just that they would be likely to leap morally ahead of us, unencumbered by habits. The main reason to doubt that the moral trajectories of humans and autonomous systems might remain compatible stems from the asymmetry in the mechanisms underlying moral change. Whereas in humans surprises will continue to play an important role in waking us to the need for moral change, cognitive processes will rule when it comes to machines. This asymmetry will translate into increasingly different moral outlooks, to the point of likely unintelligibility. The latter prospect is enough to doubt the desirability of autonomous moral agents.

    Challenges for an Ontology of Artificial Intelligence

    Of primary importance in formulating a response to the increasing prevalence and power of artificial intelligence (AI) applications in society are questions of ontology. Questions such as: What “are” these systems? How are they to be regarded? How does an algorithm come to be regarded as an agent? We discuss three factors which hinder discussion and obscure attempts to form a clear ontology of AI: (1) the various and evolving definitions of AI, (2) the tendency for pre-existing technologies to be assimilated and regarded as “normal,” and (3) the tendency of human beings to anthropomorphize. This list is not intended as exhaustive, nor is it seen to preclude entirely a clear ontology; however, these challenges are a necessary set of topics for consideration. Each of these factors presents a 'moving target' for discussion, which makes it hard for both technical specialists and non-practitioners of AI systems development (e.g., philosophers and theologians) to speak meaningfully, given that the corpus of AI structures and capabilities evolves at a rapid pace. Finally, we present avenues for moving forward, including opportunities for collaborative synthesis for scholars in philosophy and science.

    Intelligent Agents in Military, Defense and Warfare: Ethical Issues and Concerns

    Due to tremendous progress in digital electronics, intelligent and autonomous agents are gradually being adopted in the domains of the military, defense and warfare. This paper explores some of the inherent ethical issues and threats, and some remedial measures, concerning the impact of such systems on human civilization and existence in general. The paper discusses human ethics in contrast to machine ethics and the problems caused by non-sentient agents. A systematic study is made of paradoxes regarding the long-term advantages of such agents in military combat. The paper proposes an international standard, which could be adopted by all nations, to mitigate the adverse effects and resolve the ethical issues of such intelligent agents.

    Autonomous Vehicles – a New Challenge to Human Rights?

    New technologies such as autonomous vehicles disrupt the way people exist and, consequently, their human rights. Research devoted to artificial intelligence and robotics moves freely, and its destination, for the time being, is unknown. This is the reason why special attention should be paid to the ethics of these branches of computer science, in order to prevent the creation of a crisis point at which human beings are no longer necessary. The aim of this paper is to examine whether such development is a new challenge to human rights law and what happens when an autonomous vehicle drives an autonomous human being. The paper also discusses the desirable level of human control over the machine, so that human dignity, from which human rights originate, is preserved.