
    Harnessing Higher-Order (Meta-)Logic to Represent and Reason with Complex Ethical Theories

    The computer-mechanization of an ambitious explicit ethical theory, Gewirth's Principle of Generic Consistency, is used to showcase an approach for representing and reasoning with ethical theories that exhibit complex logical features such as alethic and deontic modalities, indexicals, and higher-order quantification. Harnessing the high expressive power of Church's type theory as a meta-logic to semantically embed a combination of quantified non-classical logics, our work pushes existing boundaries in knowledge representation and reasoning. We demonstrate that intuitive encodings of complex ethical theories and their automation on the computer are no longer antipodes.

    Higher-level Knowledge, Rational and Social Levels Constraints of the Common Model of the Mind

    In his famous 1982 paper, Allen Newell [22, 23] introduced the notion of the knowledge level to indicate a level of analysis, and prediction, of the rational behavior of a cognitive artificial agent. This analysis concerns the availability of the agent's knowledge for pursuing its own goals, and is based on the so-called Rationality Principle (an assumption according to which "an agent will use the knowledge it has of its environment to achieve its goals" [22, p. 17]). In Newell's own words: "To treat a system at the knowledge level is to treat it as having some knowledge, some goals, and believing it will do whatever is within its power to attain its goals, in so far as its knowledge indicates" [22, p. 13]. In recent decades, the importance of the knowledge level has been historically and systematically downsized by the research area of cognitive architectures (CAs), whose interests have focused mainly on the analysis and development of the mechanisms and processes governing human and (artificial) cognition. The knowledge level in CAs, however, represents a crucial level of analysis for the development of such artificial general systems and therefore deserves greater research attention [17]. In the following, we discuss areas of broad agreement and outline the main problematic aspects that should be faced within a Common Model of Cognition [12]. Such aspects, departing from an analysis at the knowledge level, also clearly impact both lower (e.g. representational) and higher (e.g. social) levels.

    Can We Agree on What Robots Should be Allowed to Do? An Exercise in Rule Selection for Ethical Care Robots

    Future Care Robots (CRs) should be able to balance a patient’s often conflicting rights without ongoing supervision. Many of the trade-offs faced by such a robot will require a degree of moral judgment. Some progress has been made on methods to guarantee that robots comply with a predefined set of ethical rules. In contrast, methods for selecting these rules are lacking. Approaches departing from existing philosophical frameworks often do not result in implementable robotic control rules. Machine learning approaches are sensitive to biases in the training data and suffer from opacity. Here, we propose an alternative, empirical, survey-based approach to rule selection. We suggest this approach has several advantages, including transparency and legitimacy. The major challenge for this approach, however, is that a workable solution, or social compromise, has to be found: it must be possible to obtain a consistent and agreed-upon set of rules to govern robotic behavior. In this article, we present an exercise in rule selection for a hypothetical CR to assess the feasibility of our approach. We assume the role of robot developers using a survey to evaluate which robot behavior potential users deem appropriate in a practically relevant setting, i.e., patient non-compliance. We evaluate whether it is possible to find such behaviors through consensus. Assessing a set of potential robot behaviors, we surveyed the acceptability of robot actions that potentially violate a patient’s autonomy or privacy. Our data support the empirical approach as a promising and cost-effective way to query ethical intuitions, allowing us to select behavior for the hypothetical CR.

    Designing Normative Theories for Ethical and Legal Reasoning: LogiKEy Framework, Methodology, and Tool Support

    A framework and methodology, termed LogiKEy, for the design and engineering of ethical reasoners, normative theories, and deontic logics is presented. The overall motivation is the development of suitable means for the control and governance of intelligent autonomous systems. LogiKEy's unifying formal framework is based on semantical embeddings of deontic logics, logic combinations, and ethico-legal domain theories in expressive classical higher-order logic (HOL). This meta-logical approach enables powerful tool support in LogiKEy: off-the-shelf theorem provers and model finders for HOL assist the LogiKEy designer of ethical intelligent agents in flexibly experimenting with underlying logics and their combinations, with ethico-legal domain theories, and with concrete examples, all at the same time. Continuous improvements of these off-the-shelf provers directly leverage the reasoning performance in LogiKEy. Case studies in which the LogiKEy framework and methodology have been applied and tested give evidence that HOL's undecidability often does not hinder efficient experimentation.
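    The semantical-embedding idea in the LogiKEy abstract above can be conveyed with a rough sketch: modal or deontic formulas become predicates over possible worlds, and an obligation operator quantifies over an accessibility relation to "ideal" worlds. The sketch below is our own illustration in plain Python, under assumed names (WORLDS, IDEAL, O, valid); it is not LogiKEy's actual Isabelle/HOL encoding, which embeds these semantics inside HOL itself.

    ```python
    # Illustrative Kripke-style semantics for a deontic operator O ("obligatory").
    # Formulas are predicates over worlds; O(phi) holds at w iff phi holds in
    # every deontically ideal alternative of w. All names are our assumptions.

    WORLDS = {0, 1, 2}
    IDEAL = {0: {1, 2}, 1: {1}, 2: {2}}  # accessibility to ideal worlds

    def O(phi):
        """Obligation: phi holds in every ideal alternative world."""
        return lambda w: all(phi(v) for v in IDEAL[w])

    def Neg(phi):
        return lambda w: not phi(w)

    def Implies(phi, psi):
        return lambda w: (not phi(w)) or psi(w)

    def valid(phi):
        """Truth at every world of this particular model."""
        return all(phi(w) for w in WORLDS)

    p = lambda w: w in {1, 2}  # an atomic proposition, true at worlds 1 and 2

    print(valid(O(p)))                           # → True
    print(valid(Implies(O(p), Neg(O(Neg(p))))))  # → True (D-style consistency here)
    ```

    The point of such embeddings is that, once modal operators are reduced to quantification over worlds, ordinary higher-order reasoning tools (here, plain evaluation; in LogiKEy, HOL provers and model finders) apply without modification.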

    Moral Competence and Moral Orientation in Robots

    Two major strategies (the top-down and bottom-up strategies) are currently discussed in robot ethics for moral integration. I argue that neither strategy is sufficient. Instead, I agree with Bertram F. Malle and Matthias Scheutz that robots need to be equipped with moral competence if we don’t want them to be a potential risk in society, causing harm, social problems, or conflicts. However, I claim that we should not define moral competence merely as the result of different “elements” or “components” that can be changed at will. My suggestion is to follow Georg Lind’s dual-aspect dual-layer theory of the moral self, which provides a broader perspective and another vocabulary for the discussion in robot ethics. According to Lind, moral competence is only one aspect of moral behavior, and it cannot be separated from its second aspect: moral orientation. As a result, the thesis of this paper is that integrating morality into robots has to include both moral orientation and moral competence.

    Programming Ethics in Self-Driving Cars: Ethical Dilemma

    Self-driving cars promise to revolutionize the automotive industry. Besides being productive and fuel-efficient, they would be significantly safer than human-operated cars. However, in the rare case when they do get into an accident, they can calculate and opt for a reactive measure based on their programming. The manufacturer, it seems, must decide the ethics that self-driving cars should follow in such scenarios. In circumstances where they must choose between the lives of the passengers and the pedestrians, some researchers have argued that the best solution is to choose the safety of the passengers over the pedestrians. Such a strategy, they argue, makes sense because cars will have better control over the passengers and will therefore help in faster adoption of self-driving cars. However, this line of thinking seems simplistic at best. The author suggests a re-examination of the ethical issues that also takes the social and technological aspects into account.

    Ethics and Morality in AI - A Systematic Literature Review and Future Research

    Artificial intelligence (AI) has become an integral part of our daily lives in recent years. At the same time, ethics and morality in the context of AI have been discussed in both practical and scientific discourse. This discourse variously addresses ethical concerns, concrete application areas, the programming of AI, or its moral status. However, no article can be found that provides an overview of the combination of ethics, morality, and AI and systematizes it. Thus, this paper provides a systematic literature review on ethics and morality in the context of AI, examining the scientific literature published between 2017 and 2021. The search yielded 1,641 articles across five databases, of which 224 were included in the evaluation. The literature was systematized into seven topics presented in this paper. The implications of this review can be valuable not only for academia but also for practitioners.

    A neo-Aristotelian perspective on the need for artificial moral agents (AMAs)

    We examine Van Wynsberghe and Robbins’ (JAMA 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (JAMA 10.1007/s00146-020-01089-6, 2020), set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins’ essay nor Formosa and Ryan’s is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, p. 721) for AMAs, and the latter opts for “argumentative breadth over depth”, meaning to provide “the essential groundwork for making an all things considered judgment regarding the moral case for building AMAs” (Formosa and Ryan 2019, pp. 1–2). Although this strategy may benefit their acceptability, it may also detract from their ethical rootedness, coherence, and persuasiveness, characteristics often associated with consolidated ethical traditions. Neo-Aristotelian ethics, backed by a distinctive philosophical anthropology and worldview, is summoned to fill this gap as a standard against which to test these two opposing claims. It provides a substantive account of moral agency through the theory of voluntary action; it explains how voluntary action is tied to intelligent and autonomous human life; and it distinguishes machine operations from voluntary actions through the categories of poiesis and praxis, respectively. This standpoint reveals that while Van Wynsberghe and Robbins may be right in rejecting the need for AMAs, there are deeper, more fundamental reasons for doing so. In addition, despite disagreeing with Formosa and Ryan’s defense of AMAs, their call for a more nuanced and context-dependent approach, similar to neo-Aristotelian practical wisdom, becomes expedient.

    There Is No Agency Without Attention

    For decades AI researchers have built agents that are capable of carrying out tasks that require human-level or human-like intelligence. During this time, questions of how these programs compared in kind to humans have surfaced and led to beneficial interdisciplinary discussions, but conceptual progress has been slower than technological progress. Within the past decade, the term agency has taken on new import as intelligent agents have become a noticeable part of our everyday lives. Research on autonomous vehicles and personal assistants has expanded into private industry, with new and increasingly capable products surfacing as a matter of routine. This wider use of AI technologies has raised questions about legal and moral agency at the highest levels of government (National Science and Technology Council 2016) and drawn the interest of other academic disciplines and the general public. Within this context, the notion of an intelligent agent in AI is too coarse and in need of refinement. We suggest that the space of AI agents can be subdivided into classes, where each class is defined by an associated degree of control.

    Higher-level Knowledge, Rational and Social Levels Constraints of the Common Model of the Mind

    We present the input to the discussion about the computational framework known as the Common Model of Cognition (CMC) from the working group dealing with the knowledge/rational/social levels. In particular, we present a list of the higher-level constraints that should be addressed within such a general framework.