
    Taking Turing by Surprise? Designing Digital Computers for Morally-Loaded Contexts

    There is much to learn from what Turing hastily dismissed as Lady Lovelace's objection. Digital computers can indeed surprise us. Just like a piece of art, algorithms can be designed in such a way as to lead us to question our understanding of the world, or our place within it. Some humans do lose the capacity to be surprised in that way. It might be fear, or it might be the comfort of ideological certainties. As lazy normative animals, we do need to be able to rely on authorities to simplify our reasoning: that is OK. Yet the growing sophistication of systems designed to free us from the constraints of normative engagement may take us past a point of no return. What if, through lack of normative exercise, our moral muscles became so atrophied as to leave us unable to question our social practices? This paper makes two distinct normative claims: 1. Decision-support systems should be designed with a view to regularly jolting us out of our moral torpor. 2. Without the depth of habit to somatically anchor model certainty, a computer's experience of something new is very different from that which in humans gives rise to non-trivial surprises. This asymmetry has key repercussions when it comes to the shape of ethical agency in artificial moral agents. The worry is not just that they would be likely to leap morally ahead of us, unencumbered by habits. The main reason to doubt that the moral trajectories of humans vs. autonomous systems might remain compatible stems from the asymmetry in the mechanisms underlying moral change. Whereas in humans surprises will continue to play an important role in waking us to the need for moral change, cognitive processes will rule when it comes to machines. This asymmetry will translate into increasingly different moral outlooks, to the point of likely unintelligibility. The latter prospect is enough to doubt the desirability of autonomous moral agents.

    Why Internal Moral Enhancement Might Be Politically Better than External Moral Enhancement

    Technology could be used to improve morality, but it could do so in different ways. Some technologies could augment and enhance moral behaviour externally, by using external cues and signals to push and pull us towards morally appropriate behaviours. Other technologies could enhance moral behaviour internally, by directly altering the way in which the brain captures and processes morally salient information or initiates moral action. Is there any reason to prefer one method over the other? In this article, I argue that there is. Specifically, I argue that internal moral enhancement is likely to be preferable to external moral enhancement when it comes to the legitimacy of political decision-making processes. In fact, I go further than this and argue that the increasingly dominant forms of external moral enhancement may already be posing a significant threat to political legitimacy, one that we should try to address. Consequently, research and development of internal moral enhancements should be prioritised as a political project.

    Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism

    Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory – 'ethical behaviourism' – which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended from seven objections. Second, taking this theoretical position on board, it is argued that the performative threshold that robots need to cross in order to be afforded significant moral status may not be that high, and that they may soon cross it (if they haven't done so already). Finally, the implications of this for our procreative duties to robots are considered, and it is argued that we may need to take seriously a duty of 'procreative beneficence' towards robots.

    Real Virtuality: A Code of Ethical Conduct. Recommendations for Good Scientific Practice and the Consumers of VR-Technology

    The goal of this article is to present a first list of ethical concerns that may arise from research and personal use of virtual reality (VR) and related technology, and to offer concrete recommendations for minimizing those risks. Many of the recommendations call for focused research initiatives. In the first part of the article, we discuss the relevant evidence from psychology that motivates our concerns. In Section "Plasticity in the Human Mind," we cover some of the main results suggesting that one's environment can influence one's psychological states, as well as recent work on inducing illusions of embodiment. Then, in Section "Illusions of Embodiment and Their Lasting Effect," we go on to discuss recent evidence indicating that immersion in VR can have psychological effects that last after leaving the virtual environment. In the second part of the article, we turn to the risks and recommendations. We begin, in Section "The Research Ethics of VR," with the research ethics of VR, covering six main topics: the limits of experimental environments, informed consent, clinical risks, dual use, online research, and a general point about the limitations of a code of conduct for research. Then, in Section "Risks for Individuals and Society," we turn to the risks of VR for the general public, covering four main topics: long-term immersion, neglect of the social and physical environment, risky content, and privacy. We offer concrete recommendations for each of these 10 topics, summarized in Table 1.

    Autonomous Systems as Legal Agents: Directly by the Recognition of Personhood or Indirectly by the Alchemy of Algorithmic Entities

    The clinical manifestations of platelet dense (δ) granule defects are easy bruising, as well as epistaxis and bleeding after delivery, tooth extractions and surgical procedures. The observed symptoms may be explained either by a decreased number of granules or by a defect in the uptake/release of granule contents. We have developed a method to study platelet dense granule storage and release. The uptake of the fluorescent marker, mepacrine, into the platelet dense granule was measured using flow cytometry. The platelet population was identified by size and by the binding of a phycoerythrin-conjugated antibody against GPIb. Cells within the discrimination frame were analysed for green (mepacrine) fluorescence. Both resting platelets and platelets previously stimulated with collagen and the thrombin receptor agonist peptide SFLLRN were analysed for mepacrine uptake. By subtracting the value for mepacrine uptake after stimulation from the value for uptake without stimulation for each individual, the platelet dense granule release capacity could be estimated. Whole blood samples from 22 healthy individuals were analysed. Mepacrine incubation without previous stimulation gave mean fluorescence intensity (MFI) values of 83±6 (mean ± 1 SD, range 69–91). The difference in MFI between resting and stimulated platelets was 28±7 (range 17–40). Six members of a family, of whom one had a known δ-storage pool disease, were analysed. The two members (mother and son) who had prolonged bleeding times also had MFI values disparate from the normal population in this analysis. The values of one daughter with mild bleeding problems but a normal bleeding time were in the lower part of the reference interval.
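    The release-capacity estimate described above is a simple subtraction of MFI values. The sketch below restates that arithmetic in Python, using the reference intervals reported in the abstract; the patient values and all function names are illustrative assumptions, not data or code from the study.

    # Sketch of the dense-granule release estimate described in the abstract.
    # Reference intervals come from the reported healthy cohort (n = 22);
    # the patient MFI values below are hypothetical.

    RESTING_RANGE = (69, 91)   # resting mepacrine MFI, healthy range
    RELEASE_RANGE = (17, 40)   # resting-minus-stimulated MFI, healthy range

    def release_capacity(mfi_resting, mfi_stimulated):
        # Release capacity = drop in mepacrine MFI after collagen/SFLLRN stimulation.
        return mfi_resting - mfi_stimulated

    def in_range(value, interval):
        low, high = interval
        return low <= value <= high

    # Hypothetical patient: resting MFI 83, stimulated MFI 55.
    capacity = release_capacity(83, 55)
    print("release capacity (delta MFI):", capacity)                      # 28
    print("resting uptake in reference range:", in_range(83, RESTING_RANGE))        # True
    print("release capacity in reference range:", in_range(capacity, RELEASE_RANGE))  # True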

    Machine Medical Ethics

    In medical settings, machines are in close proximity with human beings: with patients who are in vulnerable states of health, who have disabilities of various kinds, with the very young or very old, and with medical professionals. Machines in these contexts undertake important medical tasks that require emotional sensitivity, knowledge of medical codes, and respect for human dignity and privacy. As machine technology advances, ethical concerns become more urgent: should medical machines be programmed to follow a code of medical ethics? What theory or theories should constrain medical machine conduct? What design features are required? Should machines share responsibility with humans for the ethical consequences of medical actions? How ought clinical relationships involving machines be modeled? Is a capacity for empathy and emotion detection necessary? What about consciousness? The essays in this collection, by researchers from both the humanities and the sciences, describe various theoretical and experimental approaches to adding medical ethics to a machine: what design features are necessary in order to achieve this; philosophical and practical questions concerning justice, rights, decision-making and responsibility; and how to accurately model essential physician-machine-patient relationships. This collection is the first book to address these 21st-century concerns.