
    The Hybrid Ethical Reasoning Agent IMMANUEL


    Harnessing Higher-Order (Meta-)Logic to Represent and Reason with Complex Ethical Theories

    The computer-mechanization of an ambitious explicit ethical theory, Gewirth's Principle of Generic Consistency, is used to showcase an approach for representing and reasoning with ethical theories that exhibit complex logical features such as alethic and deontic modalities, indexicals, and higher-order quantification. Harnessing the high expressive power of Church's type theory as a meta-logic to semantically embed a combination of quantified non-classical logics, our work pushes existing boundaries in knowledge representation and reasoning. We demonstrate that intuitive encodings of complex ethical theories and their automation on the computer are no longer antipodes.
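
    The shallow-embedding idea the abstract describes can be illustrated in miniature. The Haskell sketch below uses the host language's higher-order functions as the meta-logic into which a small propositional modal logic is embedded: formulas become predicates over worlds, and the box and diamond operators are defined by quantifying over an accessibility relation. This is only a toy instance of the technique, not the authors' encoding of Gewirth's theory; the finite frame, the access relation, and all identifiers are assumptions made for the example.

        type World = Int

        -- A lifted proposition: a predicate over possible worlds.
        type Prop = World -> Bool

        -- A toy finite frame with a hypothetical accessibility relation.
        worlds :: [World]
        worlds = [0, 1, 2]

        access :: World -> World -> Bool
        access w v = v == w || v == w + 1

        -- Connectives are defined pointwise on worlds.
        lnot :: Prop -> Prop
        lnot p w = not (p w)

        limp :: Prop -> Prop -> Prop
        limp p q w = not (p w) || q w

        -- The modalities quantify over accessible worlds; supplying this
        -- quantification is exactly the job of the higher-order meta-logic.
        box, dia :: Prop -> Prop
        box p w = all (\v -> not (access w v) || p v) worlds
        dia p w = any (\v -> access w v && p v) worlds

        -- A formula is valid when it holds at every world of the frame.
        valid :: Prop -> Bool
        valid p = all p worlds

        -- The K axiom, box (p -> q) -> (box p -> box q), holds on any frame.
        main :: IO ()
        main = print (valid (limp (box (limp p q)) (limp (box p) (box q))))
          where p w = even w
                q w = w >= 0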

    A Formalization of Kant's Second Formulation of the Categorical Imperative

    We present a formalization and computational implementation of the second formulation of Kant's categorical imperative. This ethical principle requires an agent never to treat someone merely as a means but always also as an end. Here we interpret the principle in terms of how persons are causally affected by actions. We introduce Kantian causal agency models in which moral patients, actions, goals, and causal influence are represented, and we show how to formalize several readings of Kant's categorical imperative that correspond to his concept of strict and wide duties towards oneself and others. Stricter versions handle cases where an action directly causally affects oneself or others, whereas the wide version maximizes the number of persons being treated as an end. We discuss limitations of our formalization by pointing to one of Kant's cases that the machinery cannot handle in a satisfying way.
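
    As a rough illustration of what such a model might look like in code, the Haskell sketch below reduces an action to the persons it causally affects and the persons it treats as ends, then renders a strict reading (no affected person may be excluded from the ends) and the wide reading (prefer the alternative that treats the most persons as ends). All types, field names, and example actions are hypothetical simplifications, not the paper's formalism.

        import Data.List (maximumBy, (\\))
        import Data.Ord  (comparing)

        newtype Person = Person String deriving (Eq, Show)

        -- An action, reduced to whom it causally affects and whom it
        -- serves as an end (field names are hypothetical).
        data Action = Action
          { label    :: String
          , affected :: [Person]   -- moral patients causally influenced
          , ends     :: [Person]   -- persons the action treats as ends
          }

        -- Strict reading: everyone causally affected must also be an end.
        merelyAsMeans :: Action -> [Person]
        merelyAsMeans a = affected a \\ ends a

        permittedStrict :: Action -> Bool
        permittedStrict = null . merelyAsMeans

        -- Wide reading: among (non-empty) alternatives, prefer the action
        -- that treats the greatest number of persons as ends.
        bestWide :: [Action] -> Action
        bestWide = maximumBy (comparing (length . ends))

        main :: IO ()
        main = do
          let alice = Person "Alice"
              bob   = Person "Bob"
              lie   = Action "lie"        [alice, bob] [alice]
              tell  = Action "tell truth" [alice, bob] [alice, bob]
          print (map permittedStrict [lie, tell])   -- [False,True]
          putStrLn (label (bestWide [lie, tell]))   -- "tell truth"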

    A Proposed Prolegomenon for Normative Theological Ethics with a Special Emphasis on the Usus Didacticus of God's Law

    The purpose of this study is to examine and organize some of the current contrasting methodologies of theological ethics in an attempt to determine the Biblical method of choosing the moral option. This will be done in two ways. In the first part, two common methods in moral philosophy, the deontological method and the teleological method, will be defined and illustrated. It will be demonstrated that Scriptural ethics has elements in common with both rule deontology and rule teleology. In the second part, the Scriptural method of moral reasoning will be examined more closely by comparing three different ways that numerous absolute prescriptive commands are used in theological ethics. Of the three methods discussed, it will be shown that two contradict the moral methodology of the Holy Scriptures. Only the method of conflicting absolutism will prove to be satisfactory; it is the only method that contains elements in common with both rule deontology and rule teleology. The conclusion reached will stress that the Scriptural method of theological ethics not only emphasizes characteristics of both deontology and teleology, but also that these characteristics are to be used in a very precise and specific way. The Scriptural method is similar to rule deontology; however, when there is a conflict of duties, the rule teleological element serves as the arbitrator to determine the lesser evil. When this is understood, one can begin to have a prolegomenon for theological ethics that properly incorporates the usus didacticus of God's law.
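
    The method of conflicting absolutism endorsed above has the shape of a decision procedure: obey the absolute rules, and only when duties genuinely conflict let the teleological element arbitrate by selecting the lesser evil. The Haskell sketch below renders that shape schematically; the numeric severities are a crude, hypothetical stand-in for the teleological judgment of the lesser evil and are no part of the study itself.

        import Data.List (minimumBy)
        import Data.Ord  (comparing)

        -- A duty is an absolute prescriptive command; `severity` is a
        -- hypothetical weight for the evil of violating it.
        data Duty = Duty { duty :: String, severity :: Int }

        -- An option is described by the duties it would violate.
        data Option = Option { option :: String, violates :: [Duty] }

        -- Rule deontology first: an option violating no duty is simply
        -- chosen. Only under a genuine conflict does the teleological
        -- element serve as arbitrator and select the lesser evil.
        choose :: [Option] -> Option
        choose opts =
          case filter (null . violates) opts of
            (o:_) -> o                                  -- no conflict: obey
            []    -> minimumBy (comparing evilOf) opts  -- conflict: lesser evil
          where evilOf = sum . map severity . violates

        main :: IO ()
        main = do
          let save = Option "save a life"   [Duty "do not deceive" 1]
              obey = Option "stay truthful" [Duty "preserve life"  5]
          putStrLn (option (choose [save, obey]))       -- "save a life"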

    The Survey, Taxonomy, and Future Directions of Trustworthy AI: A Meta Decision of Strategic Decisions

    When making strategic decisions, we are often confronted with overwhelming information to process. The situation can be further complicated when some pieces of evidence contradict each other or are paradoxical. The challenge then becomes how to determine which information is useful and which should be eliminated. This process is known as meta-decision. Likewise, when it comes to using Artificial Intelligence (AI) systems for strategic decision-making, placing trust in the AI itself becomes a meta-decision, given that many AI systems are viewed as opaque "black boxes" that process large amounts of data. Trusting an opaque system involves deciding on the level of Trustworthy AI (TAI). We propose a new approach to address this issue by introducing a novel taxonomy, or framework, of TAI, which encompasses three crucial domains: articulate, authentic, and basic, corresponding to different levels of trust. To underpin these domains, we create ten dimensions to measure trust: explainability/transparency, fairness/diversity, generalizability, privacy, data governance, safety/robustness, accountability, reproducibility, reliability, and sustainability. We aim to use this taxonomy to conduct a comprehensive survey and explore different TAI approaches from a strategic decision-making perspective.
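
    The ten dimensions and three domains lend themselves to a small data-structure sketch, given below in Haskell. The dimension names follow the abstract (slashed pairs such as explainability/transparency are collapsed to one constructor each); the grouping of dimensions under domains and the scoring function are illustrative assumptions, since the abstract does not state the mapping.

        -- The ten trust dimensions named in the abstract.
        data Dimension
          = Explainability | Fairness | Generalizability | Privacy
          | DataGovernance | Safety | Accountability | Reproducibility
          | Reliability | Sustainability
          deriving (Eq, Show, Enum, Bounded)

        -- The three domains of trust.
        data Domain = Articulate | Authentic | Basic
          deriving (Eq, Show)

        -- Hypothetical grouping, for illustration only; the survey itself
        -- defines how the dimensions underpin the domains.
        domainOf :: Dimension -> Domain
        domainOf d
          | d `elem` [Explainability, Fairness, Generalizability] = Articulate
          | d `elem` [Privacy, DataGovernance, Safety]            = Authentic
          | otherwise                                             = Basic

        -- An assessment says which dimensions a system satisfies; a
        -- domain's score is the fraction of its dimensions satisfied.
        domainScore :: (Dimension -> Bool) -> Domain -> Double
        domainScore ok dom =
          let ds = [d | d <- [minBound .. maxBound], domainOf d == dom]
          in fromIntegral (length (filter ok ds)) / fromIntegral (length ds)

        main :: IO ()
        main = print [ domainScore (`elem` [Explainability, Privacy, Safety]) dom
                     | dom <- [Articulate, Authentic, Basic] ]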