6,086 research outputs found

    Robot rights? Towards a social-relational justification of moral consideration

    Should we grant rights to artificially intelligent robots? Most current and near-future robots do not meet the hard criteria set by deontological and utilitarian theory. Virtue ethics can avoid this problem with its indirect approach. However, both direct and indirect arguments for moral consideration rest on ontological features of entities, an approach which incurs several problems. In response to these difficulties, this paper taps into a different conceptual resource in order to grant some degree of moral consideration to some intelligent social robots: it sketches a novel argument for moral consideration based on social relations. It is shown that further developing this argument requires revising our existing ontological and social-political frameworks. It is suggested that we need a social ecology, which may be developed by engaging with Western ecology and Eastern worldviews. Although this relational turn raises many difficult issues and requires more work, this paper provides a rough outline of an alternative approach to moral consideration that can assist us in shaping our relations to intelligent robots and, by extension, to all artificial and biological entities that appear to us as more than instruments for our human purposes.

    Philosophical Signposts for Artificial Moral Agent Frameworks

    This article focuses on a particular issue under machine ethics: the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the nature of Artificial Moral Agents may consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency. At the very least, the said philosophical concepts may be treated as signposts for further research on how to truly account for the nature of Artificial Moral Agents.

    Harnessing Higher-Order (Meta-)Logic to Represent and Reason with Complex Ethical Theories

    The computer-mechanization of an ambitious explicit ethical theory, Gewirth's Principle of Generic Consistency, is used to showcase an approach for representing and reasoning with ethical theories exhibiting complex logical features like alethic and deontic modalities, indexicals, and higher-order quantification, among others. Harnessing the high expressive power of Church's type theory as a meta-logic to semantically embed a combination of quantified non-classical logics, our work pushes existing boundaries in knowledge representation and reasoning. We demonstrate that intuitive encodings of complex ethical theories and their automation on the computer are no longer antipodes.

    Autonomous Systems as Legal Agents: Directly by the Recognition of Personhood or Indirectly by the Alchemy of Algorithmic Entities

    The clinical manifestations of platelet dense (δ) granule defects are easy bruising, as well as epistaxis and bleeding after delivery, tooth extractions and surgical procedures. The observed symptoms may be explained either by a decreased number of granules or by a defect in the uptake/release of granule contents. We have developed a method to study platelet dense granule storage and release. The uptake of the fluorescent marker, mepacrine, into the platelet dense granule was measured using flow cytometry. The platelet population was identified by the size and binding of a phycoerythrin-conjugated antibody against GPIb. Cells within the discrimination frame were analysed for green (mepacrine) fluorescence. Both resting platelets and platelets previously stimulated with collagen and the thrombin receptor agonist peptide SFLLRN were analysed for mepacrine uptake. By subtracting the value for mepacrine uptake after stimulation from the value for uptake without stimulation for each individual, the platelet dense granule release capacity could be estimated. Whole blood samples from 22 healthy individuals were analysed. Mepacrine incubation without previous stimulation gave mean fluorescence intensity (MFI) values of 83±6 (mean ± 1 SD, range 69–91). The difference in MFI between resting and stimulated platelets was 28±7 (range 17–40). Six members of a family, of whom one had a known δ-storage pool disease, were analysed. The two members (mother and son) who had prolonged bleeding times also had MFI values disparate from the normal population in this analysis. The values of one daughter with mild bleeding problems but a normal bleeding time were in the lower part of the reference interval.

    Who Should Bear the Risk When Self-Driving Vehicles Crash?

    The moral importance of liability to harm has so far been ignored in the lively debate about what self-driving vehicles should be programmed to do when an accident is inevitable. But liability matters a great deal to the just distribution of risk of harm. While morality sometimes requires simply minimizing relevant harms, this is not so when one party is liable to harm in virtue of voluntarily engaging in activity that foreseeably creates a risky situation, while having reasonable alternatives. On plausible assumptions, merely choosing to use a self-driving vehicle typically gives rise to a degree of liability, so that such vehicles should be programmed to shift the risk from bystanders to users, other things being equal. Insofar as vehicles cannot be programmed to take all the factors affecting liability into account, there is a pro tanto moral reason not to introduce them, or to restrict their use.

    An Intervening Ethical Governor for a Robot Mediator in Patient-Caregiver Relationships

    © Springer International Publishing AG 2015. DOI: 10.1007/978-3-319-46667-5_6
    Patients with Parkinson's disease (PD) experience challenges when interacting with caregivers due to their declining control over their musculature. To remedy those challenges, a robot mediator can be used to assist in the relationship between PD patients and their caregivers. In this context, a variety of ethical issues can arise. To overcome one issue in particular, providing therapeutic robots with a robot architecture that can ensure patients' and caregivers' dignity is of potential value. In this paper, we describe an intervening ethical governor for a robot that enables it to ethically intervene, both to maintain effective patient–caregiver relationships and to prevent the loss of dignity.

    Are Some Animals Also Moral Agents?

    Animal rights philosophers have traditionally accepted the claim that human beings are unique, but rejected the claim that our uniqueness justifies denying animals moral rights. Humans were thought to be unique specifically because we possess moral agency. In this commentary, I explore the claim that some nonhuman animals are also moral agents, and I take note of its counter-intuitive implications.