1,594 research outputs found

    A Software Module for an Ethical Elder Care Robot. Design and Implementation

    The development of increasingly intelligent and autonomous technologies will eventually lead to these systems having to face morally problematic situations. This is particularly true of artificial systems that are used in geriatric care environments. The goal of this article is to describe how one can approach the design of an elder care robot that is capable of moral decision-making and moral learning. A conceptual design for the development of such a system is provided, and the steps necessary to implement it are described.

    Machine Medical Ethics

    In medical settings, machines are in close proximity with human beings: with patients who are in vulnerable states of health, who have disabilities of various kinds, with the very young or very old, and with medical professionals. Machines in these contexts undertake important medical tasks that require emotional sensitivity, knowledge of medical codes, and respect for human dignity and privacy. As machine technology advances, ethical concerns become more urgent: should medical machines be programmed to follow a code of medical ethics? What theory or theories should constrain medical machine conduct? What design features are required? Should machines share responsibility with humans for the ethical consequences of medical actions? How ought clinical relationships involving machines to be modeled? Is a capacity for empathy and emotion detection necessary? What about consciousness? The essays in this collection, by researchers from both the humanities and the sciences, describe various theoretical and experimental approaches to adding medical ethics to a machine, the design features necessary to achieve this, philosophical and practical questions concerning justice, rights, decision-making, and responsibility, and the accurate modeling of essential physician-machine-patient relationships. This collection is the first book to address these 21st-century concerns.

    Robot Betrayal: a guide to the ethics of robotic deception

    If a robot sends a deceptive signal to a human user, is this always and everywhere an unethical act, or might it sometimes be ethically desirable? Building upon previous work in robot ethics, this article tries to clarify and refine our understanding of the ethics of robotic deception. It does so by making three arguments. First, it argues that we need to distinguish between three main forms of robotic deception (external state deception, superficial state deception, and hidden state deception) in order to think clearly about its ethics. Second, it argues that the second type, superficial state deception, is not best thought of as a form of deception, even though it is frequently criticised as such. And third, it argues that the third type is best understood as a form of betrayal, because doing so captures the unique ethical harm to which it gives rise and justifies special ethical protections against its use.

    Towards Verifiably Ethical Robot Behaviour

    Ensuring that autonomous systems work ethically is both complex and difficult. However, the idea of having an additional 'governor' that assesses the options the system has, and prunes them to select the most ethical choices, is well understood. Recent work has produced such a governor consisting of a 'consequence engine' that assesses the likely future outcomes of actions and then applies a safety/ethical logic to select actions. Although this is appealing, it is impossible to be certain that the most ethical options are actually taken. In this paper we extend and apply a well-known agent verification approach to our consequence engine, allowing us to verify the correctness of its ethical decision-making.

    Comment: Presented at the 1st International Workshop on AI and Ethics, Sunday 25th January 2015, Hill Country A, Hyatt Regency Austin; to appear in the workshop proceedings published by AAAI.

    Technologies on the stand: Legal and ethical questions in neuroscience and robotics


    The Reasonableness Machine

    Automation might someday allow for the inexpensive creation of highly contextualized and effective laws. If that ever comes to pass, however, it will not be on a blank slate. Proponents will face the question of how to computerize bedrock aspects of our existing law, some of which are legal standards—norms that use evaluative, even moral, criteria. Conventional wisdom says that standards are difficult to translate into computer code because they do not present clear operational mechanisms to follow. If that wisdom holds, one could reasonably doubt that legal automation will ever get off the ground. Conventional wisdom, however, fails to account for the interpretive freedom that standards provide. Their murkiness makes them a fertile ground for the growth of competing explanations of their legal meaning. Some of those readings might be more rule-like than others. Proponents of automation will likely be drawn to those rule-like interpretations, so long as they are compatible enough with existing law. This complex dynamic between computer-friendliness and legal interpretation makes it troublesome for legislators to identify the variable and fixed costs of automation. This Article aims to shed light on this relationship by focusing our attention on a quintessential legal standard at the center of our legal system—the Reasonably Prudent Person Test. Here, I explain how automation proponents might be tempted by fringe, formulaic interpretations of the test, such as Averageness, because they bring comparatively low innovation costs. With time, however, technological advancement will likely drive down innovation costs, and mainstream interpretations, like Conventionalism, could find favor again. Regardless of the interpretation that proponents favor, though, an unavoidable fixed cost looms: by replacing the jurors who apply the test with a machine, they will eliminate a long-valued avenue for participatory and deliberative democracy.

    Automated Governance


    Medically Valid Religious Beliefs

    This dissertation explores conflicts between religion and medicine: cases in which cultural and religious beliefs motivate requests for inappropriate treatment or the cessation of treatment, requests that violate the standard of care. I call such requests M-requests (miracle or martyr requests). I argue that current approaches fail to accord proper respect to patients who make such requests. Sometimes they are too permissive, honoring M-requests when they should not; other times they are too strict. I propose a phronesis-based approach to decide whether to honor an M-request, or whether religious beliefs are medically valid. This approach is culturally sensitive, takes religious beliefs seriously, and holds them to a high ethical standard. It uses a principle of belief evaluation developed by Linda Zagzebski, the Principle of Rational Belief, which is founded upon Aristotelian virtue ethics. In addition to the Principle, I propose a concrete set of conditions to assist caregivers in clinical case evaluations. In the final chapters, I apply the phronesis-based approach to well-known adult cases such as the refusal of blood transfusions by Jehovah’s Witnesses and requests for continued (futile) care by Orthodox Jews at the end of life. I also consider cases involving children, such as African female circumcision and cases of faith healing. I argue that the Principle of Rational Belief should define the threshold for the kinds of M-requests for children that can be honored, but I allow a lower threshold for M-requests made by competent adult patients.

    Narrative Capacity

    The doctrine of capacity is a fundamental threshold to the protections of private law. The law only recognizes private decision-making—from exercising the right to transfer or bequeath property and entering into a contract to getting married or divorced—made with the level of cognitive functioning that the capacity doctrine demands. When the doctrine goes wrong, it denies individuals, particularly older adults, access to basic private-law rights on the one hand and ratifies decision-making that may tear apart families and tarnish legacies on the other.

    The capacity doctrine in private law is built on a fundamental philosophical mismatch. It is grounded in a cognitive theory of personhood, and determines whether to recognize private decisions based on the cognitive abilities thought by philosophers to entitle persons in general to unique moral status. But to align with the purposes of the substantive doctrines of property and contract, private-law capacity should instead be grounded in a narrative theory of personal identity. Rather than asking whether a decision-maker is a person by measuring their cognitive abilities, the doctrine should ask whether they are the same person by looking to the story of their life.

    This Article argues for a new doctrine of capacity under which the law would recognize personal decision-making if and only if it is linked by a coherent narrative structure to the story of the decision-maker’s life. Moreover, the Article offers a test for determining which decisions meet this criterion and explains how the doctrine would work in practice.