
    Clarifying ethical intuitionism

    In recent years there has been a resurgence of interest in Ethical Intuitionism, whose core claim is that normal ethical agents can and do have non-inferentially justified first-order ethical beliefs. Although this is the standard formulation, there are two senses in which it is importantly incomplete. Firstly, ethical intuitionism claims that there are non-inferentially justified ethical beliefs, but there is a worrying lack of consensus in the ethical literature as to what non-inferentially justified belief is. Secondly, it has been overlooked that there are plausibly different types of non-inferential justification, and that accounting for the existence of a specific sort of non-inferential justification is crucial for any adequate ethical intuitionist epistemology. In this context, the purpose of this paper is to provide an account of non-inferentially justified belief which is superior to extant accounts, and to give a refined statement of the core claim of ethical intuitionism which focuses on the type of non-inferential justification vital for a plausible intuitionist epistemology. Finally, it will be shown that the clarifications made in this paper make it far from obvious that two intuitionist accounts, which have received much recent attention, make good on intuitionism’s core claim.

    Philosophical Signposts for Artificial Moral Agent Frameworks

    This article focuses on a particular issue under machine ethics, namely the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory accounting for the nature of Artificial Moral Agents may consider certain philosophical ideas, such as the standard characterizations of agency, rational agency, moral agency, and artificial agency. At the very least, these philosophical concepts may be treated as signposts for further research on how to truly account for the nature of Artificial Moral Agents.

    Harnessing Higher-Order (Meta-)Logic to Represent and Reason with Complex Ethical Theories

    The computer-mechanization of an ambitious explicit ethical theory, Gewirth's Principle of Generic Consistency, is used to showcase an approach for representing and reasoning with ethical theories that exhibit complex logical features such as alethic and deontic modalities, indexicals, and higher-order quantification. Harnessing the high expressive power of Church's type theory as a meta-logic in which to semantically embed a combination of quantified non-classical logics, our work pushes existing boundaries in knowledge representation and reasoning. We demonstrate that intuitive encodings of complex ethical theories and their automation on the computer are no longer antipodes.
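
    The abstract above turns on the shallow semantical-embedding technique: formulas of a non-classical logic become predicates over possible worlds in the host logic, and modal operators become quantifiers over accessibility relations. The following minimal sketch illustrates the idea in Python rather than in Church's type theory; the worlds, the IDEAL relation and the keep_promise proposition are invented for illustration and are not taken from the paper.

    from typing import Callable, Set, Tuple

    World = str
    Prop = Callable[[World], bool]  # an embedded proposition is a predicate on worlds

    WORLDS: Set[World] = {"w0", "w1", "w2"}
    # Deontic accessibility: (u, v) means v is an ideal alternative to u.
    IDEAL: Set[Tuple[World, World]] = {("w0", "w1"), ("w0", "w2"),
                                       ("w1", "w1"), ("w2", "w2")}

    def obligatory(phi: Prop) -> Prop:
        """O(phi): phi holds at every deontically ideal alternative."""
        return lambda w: all(phi(v) for (u, v) in IDEAL if u == w)

    def permitted(phi: Prop) -> Prop:
        """P(phi) = not O(not phi): phi holds at some ideal alternative."""
        return lambda w: any(phi(v) for (u, v) in IDEAL if u == w)

    def implies(phi: Prop, psi: Prop) -> Prop:
        return lambda w: (not phi(w)) or psi(w)

    def valid(phi: Prop) -> bool:
        """Meta-logical validity: truth at every world."""
        return all(phi(w) for w in WORLDS)

    keep_promise: Prop = lambda w: w in {"w1", "w2"}
    # The deontic D axiom O(phi) -> P(phi) is valid here because IDEAL is serial.
    print(valid(implies(obligatory(keep_promise), permitted(keep_promise))))  # True

    Because validity is just quantification over worlds, swapping in a different accessibility relation changes which deontic principles hold, which is exactly the kind of experimentation the embedding approach makes cheap.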

    Designing Normative Theories for Ethical and Legal Reasoning: LogiKEy Framework, Methodology, and Tool Support

    A framework and methodology, termed LogiKEy, for the design and engineering of ethical reasoners, normative theories and deontic logics is presented. The overall motivation is the development of suitable means for the control and governance of intelligent autonomous systems. LogiKEy's unifying formal framework is based on semantical embeddings of deontic logics, logic combinations and ethico-legal domain theories in expressive classical higher-order logic (HOL). This meta-logical approach enables powerful tool support in LogiKEy: off-the-shelf theorem provers and model finders for HOL assist the designer of ethical intelligent agents in experimenting flexibly with underlying logics and their combinations, with ethico-legal domain theories, and with concrete examples, all at the same time. Continuous improvements to these off-the-shelf provers directly improve reasoning performance in LogiKEy. Case studies, in which the LogiKEy framework and methodology have been applied and tested, give evidence that HOL's undecidability often does not hinder efficient experimentation.
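
    The "model finders" mentioned above (in a HOL setting, tools such as Isabelle/HOL's Nitpick) search for countermodels to candidate principles. The toy stand-in below, with invented names and no connection to actual LogiKEy code, brute-forces all two-world Kripke frames and valuations to show that O(p) -> p (whatever is obligatory is the case) is not valid in general.

    from itertools import product

    WORLDS = ["w0", "w1"]
    EDGES = [(u, v) for u in WORLDS for v in WORLDS]

    def obligatory(rel, phi, w):
        """O(phi) at w: phi holds at every rel-successor of w."""
        return all(phi(v) for (u, v) in rel if u == w)

    def find_countermodel():
        """Enumerate all frames and valuations, looking for one where
        O(p) holds at some world w while p fails at w."""
        for bits in product([False, True], repeat=len(EDGES)):
            rel = {e for e, b in zip(EDGES, bits) if b}
            for vals in product([False, True], repeat=len(WORLDS)):
                p = dict(zip(WORLDS, vals))
                for w in WORLDS:
                    if obligatory(rel, p.get, w) and not p[w]:
                        return rel, p, w
        return None

    print(find_countermodel())  # e.g. (set(), {'w0': False, 'w1': False}, 'w0')

    The first countermodel found is the empty (non-serial) frame, where every obligation holds vacuously; real model finders perform the same kind of search over vastly larger spaces.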

    Responsible Autonomy

    As intelligent systems increasingly make decisions that directly affect society, perhaps the most important upcoming research direction in AI is to rethink the ethical implications of their actions. Means are needed to integrate moral, societal and legal values with technological developments in AI, both during the design process and as part of the deliberation algorithms employed by these systems. In this paper, we describe leading ethics theories and propose alternative ways to ensure ethical behavior by artificial systems. Given that ethics are dependent on the socio-cultural context and are often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders, and to make these explicit, leading to better understanding of and trust in artificial autonomous systems.

    The Jiminy Advisor: Moral Agreements Among Stakeholders Based on Norms and Argumentation

    An autonomous system is constructed by a manufacturer, operates in a society subject to norms and laws, and interacts with end users. All of these actors are stakeholders affected by the behavior of the autonomous system. We address the challenge of how the ethical views of such stakeholders can be integrated in the behavior of the autonomous system. We propose an ethical recommendation component, which we call Jiminy, that uses techniques from normative systems and formal argumentation to reach moral agreements among stakeholders. Jiminy represents the ethical views of each stakeholder by using normative systems, and has three ways of resolving moral dilemmas involving the opinions of the stakeholders. First, Jiminy considers how the arguments of the stakeholders relate to one another, which may already resolve the dilemma. Second, Jiminy combines the normative systems of the stakeholders, so that their combined expertise may resolve the dilemma. Third, and only if these two other methods have failed, Jiminy uses context-sensitive rules to decide which of the stakeholders takes precedence. At the abstract level, these three methods are characterized by the addition of arguments, the addition of attacks among arguments, and the removal of attacks among arguments. We show how Jiminy can be used not only for ethical reasoning and collaborative decision making, but also for providing explanations about ethical behavior.
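
    The three methods are framed in terms of Dung-style abstract argumentation, where a dilemma shows up as arguments attacking each other. A minimal sketch of that machinery follows, computing the grounded extension and then resolving a dilemma by removing an attack (the paper's third method); the "brake"/"swerve" arguments are a hypothetical example, not taken from the paper.

    from typing import Set, Tuple

    Arg = str
    Attack = Tuple[Arg, Arg]

    def grounded_extension(args: Set[Arg], attacks: Set[Attack]) -> Set[Arg]:
        """Least fixed point of F(S) = {a : every attacker of a is
        itself attacked by some member of S}, starting from the empty set."""
        extension: Set[Arg] = set()
        while True:
            defended = {
                a for a in args
                if all(any((c, b) in attacks for c in extension)
                       for (b, target) in attacks if target == a)
            }
            if defended == extension:
                return extension
            extension = defended

    # A mutual attack models a moral dilemma: neither argument is defended.
    args = {"brake", "swerve"}
    attacks = {("brake", "swerve"), ("swerve", "brake")}
    print(grounded_extension(args, attacks))                          # set()

    # Removing one attack, as a context-sensitive preference rule would,
    # breaks the tie.
    print(grounded_extension(args, attacks - {("swerve", "brake")}))  # {'brake'}

    Adding arguments or attacks (the first two methods) reshapes the same graph before the extension is computed, so all three methods reduce to operations on the pair (args, attacks).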

    Schizophrenia and the Virtues of Self-Effacement

    Michael Stocker’s “The Schizophrenia of Modern Ethical Theories” attacks versions of consequentialism and deontological ethics on the grounds that they are self-effacing. While it is often thought that Stocker’s argument gives us a reason to favour virtue ethics over those other theories, Simon Keller has argued that this is a mistake. He claims that virtue ethics is also self-effacing, and is therefore afflicted with the self-effacement-related problems that Stocker identifies in consequentialism and deontology. This paper defends virtue ethics against this claim. Although there is a kind of self-effacement involved in the exercise of virtue, this is quite different from the so-called schizophrenia that Stocker thinks is induced by modern ethical theory. Importantly, manifesting virtue does not require one to embrace mutually inconsistent moral commitments, as is at times encouraged by consequentialists and deontologists. This paper also considers a reading of the virtue-ethical criterion of right action that is encouraged by Bernard Williams’s distinction between a de re and a de dicto interpretation of the phrase “acting as the virtuous person would.” I argue that such a reading addresses concerns that a virtue-ethical criterion of right action inevitably generates a problematic form of self-effacement.

    Artificial morality: Making of the artificial moral agents

    Artificial Morality is a new, emerging interdisciplinary field that centres around the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. This demand for moral machines comes from changes in everyday practice, where artificial systems are frequently used in a variety of situations, from home help and elderly care to banking and court algorithms. It is therefore important to create reliable and responsible machines based on the same ethical principles that society demands from people. New challenges appear in creating such agents. There are philosophical questions about a machine’s potential to be an agent, or moral agent, in the first place. Then comes the problem of social acceptance of such machines, regardless of their theoretical agency status. Efforts to resolve this problem have suggested that cold moral machines need additional psychological (emotional and cognitive) competence. What makes the endeavour of developing AMAs even harder is the complexity of the technical, engineering aspect of their creation. Implementation approaches such as top-down, bottom-up and hybrid aim to find the best way of developing fully moral agents, but each encounters its own problems throughout this effort; a sketch of the top-down idea follows.
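
    Of the three implementation approaches just mentioned, the top-down one is the simplest to sketch: explicit ethical rules filter an agent's candidate actions. The rules and action fields below are invented placeholders, not drawn from the paper.

    RULES = [
        lambda action: not action.get("harms_human", False),  # deontic constraint
        lambda action: action.get("consent", True),           # consent required
    ]

    def morally_permissible(action: dict) -> bool:
        """An action passes only if it violates none of the encoded rules."""
        return all(rule(action) for rule in RULES)

    candidates = [
        {"name": "share_data", "harms_human": False, "consent": False},
        {"name": "assist_user", "harms_human": False, "consent": True},
    ]
    print([a["name"] for a in candidates if morally_permissible(a)])  # ['assist_user']

    A bottom-up approach would instead learn such a filter from examples of approved behaviour, and a hybrid approach would combine the two.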