
    Moral Competence and Moral Orientation in Robots

    Two major strategies (top-down and bottom-up) are currently discussed in robot ethics for integrating morality into robots. I argue that neither strategy is sufficient. Instead, I agree with Bertram F. Malle and Matthias Scheutz that robots need to be equipped with moral competence if we do not want them to become a risk in society, causing harm, social problems, or conflicts. However, I claim that we should not define moral competence merely as the result of different “elements” or “components” that we can change at will. My suggestion is to follow Georg Lind’s dual-aspect, dual-layer theory of the moral self, which provides a broader perspective and another vocabulary for the discussion in robot ethics. According to Lind, moral competence is only one aspect of moral behavior and cannot be separated from its second aspect: moral orientation. The thesis of this paper is therefore that integrating morality into robots has to include both moral orientation and moral competence.

    Technologies on the stand: Legal and ethical questions in neuroscience and robotics


    Can human and artificial agents share an autonomy, categorical imperative-based ethics and “moral” selfhood?

    AI designers endeavour to improve ‘autonomy’ in artificially intelligent devices, as recent developments show. This chapter first argues against attributing metaphysical attitudes to AI and, at the same time, in favor of improving autonomous AI that has been enabled to respect autonomy in human agents. This seems to be the only responsible way of making further advances in the field of autonomous social AI. I examine what is meant by claims such as designing our artificial alter egos, sharing moral selves with artificial humanoid devices, and providing autonomous AI with an ethical framework modelled on the core aspects of moral selfhood, e.g., making decisions based on autonomous law-giving, in Kantian terms.

    Machine Medical Ethics

    In medical settings, machines are in close proximity with human beings: with patients in vulnerable states of health, with people who have disabilities of various kinds, with the very young or the very old, and with medical professionals. Machines in these contexts are undertaking important medical tasks that require emotional sensitivity and respect for medical codes, human dignity, and privacy. As machine technology advances, ethical concerns become more urgent: should medical machines be programmed to follow a code of medical ethics? What theory or theories should constrain medical machine conduct? What design features are required? Should machines share responsibility with humans for the ethical consequences of medical actions? How ought clinical relationships involving machines be modeled? Is a capacity for empathy and emotion detection necessary? What about consciousness? The essays in this collection, by researchers from both the humanities and the sciences, describe various theoretical and experimental approaches to adding medical ethics to a machine, the design features necessary to achieve this, philosophical and practical questions concerning justice, rights, decision-making, and responsibility, and ways of accurately modeling essential physician–machine–patient relationships. This collection is the first book to address these 21st-century concerns.

    ETHICA EX MACHINA. Exploring artificial moral agency or the possibility of computable ethics

    Since the automation revolution of our technological era, diverse machines and robots have gradually begun to reconfigure our lives. With this expansion, those machines now seem to face a new challenge: more autonomous decision-making involving life-or-death consequences. This paper explores the philosophical possibility of artificial moral agency through the following question: could a machine obtain the cognitive capacities needed to be a moral agent? In this regard, I propose to set out, from a normative-cognitive perspective, the minimum criteria by which we could recognize an artificial entity as a genuine moral entity. Although my proposal should be considered at a reasonable level of abstraction, I critically analyze and identify how an artificial agent could integrate those cognitive features. Finally, I discuss their limitations and possibilities.

    Debunking (the) Retribution (Gap)

    Robotization is an increasingly pervasive feature of our lives. Robots with high degrees of autonomy may cause harm, yet in sufficiently complex systems neither the robots nor their human developers may be candidates for moral blame. John Danaher has recently argued that this may lead to a retribution gap, where the human desire for retribution faces a lack of appropriate subjects for retributive blame. The potential social and moral implications of a retribution gap are considerable. I argue that the retributive intuitions that feed into retribution gaps are best understood as deontological intuitions. I apply a debunking argument for deontological intuitions in order to show that retributive intuitions cannot be used to justify retributive punishment in cases of robot harm without clear candidates for blame. The fundamental moral question thus becomes what we ought to do with these retributive intuitions, given that they do not justify retribution. I draw a parallel from recent work on implicit biases to make a case for taking moral responsibility for retributive intuitions. In the same way that we can exert some form of control over our unwanted implicit biases, we can and should do so for unjustified retributive intuitions in cases of robot harm.

    The Future of Military Virtue: Autonomous Systems and the Moral Deskilling of the Military

    Autonomous systems, including unmanned aerial vehicles (UAVs), anti-munitions systems, armed robots, and cyberattack and cyber-defense systems, are projected to become the centerpiece of 21st-century military and counter-terrorism operations. This trend has challenged legal experts, policymakers, and military ethicists to make sense of these developments within existing normative frameworks of international law and just war theory. This paper highlights a different yet equally profound ethical challenge: understanding how this trend may lead to a moral deskilling of the military profession, potentially destabilizing traditional norms of military virtue and their power to motivate ethical restraint in the conduct of war. Employing the normative framework of virtue ethics, I argue that professional ideals of military virtue such as courage, integrity, honor, and compassion help to distinguish legitimate uses of military force from amoral, criminal, or mercenary violence, while also preserving the conception of moral community needed to secure a meaningful peace in war’s aftermath. The cultivation of these virtues in a human being, however, presupposes repeated practice and the development of skills of moral analysis, deliberation, and action, especially in the ethical use of force. As in the historical deskilling of other professions, human practices critical to cultivating these skills can be made redundant by autonomous or semi-autonomous machines, with a resulting devaluation and/or loss of these skills and the virtues they facilitate. This paper explores the circumstances under which automated methods of warfare, including automated weapons and cyber systems, could lead to a dangerous ‘moral deskilling’ of the military profession. I point out that this deskilling remains a significant risk even with a commitment to ‘human on the loop’ protocols. I conclude by summarizing the potentially deleterious consequences of such an outcome and reflecting on possible strategies for its prevention.