
    On Automating the Doctrine of Double Effect

    The doctrine of double effect ($\mathcal{DDE}$) is a long-studied ethical principle that governs when actions that have both positive and negative effects are to be allowed. The goal in this paper is to automate $\mathcal{DDE}$. We briefly present $\mathcal{DDE}$, and use a first-order modal logic, the deontic cognitive event calculus, as our framework to formalize the doctrine. We present formalizations of increasingly stronger versions of the principle, including what is known as the doctrine of triple effect. We then use our framework to successfully simulate scenarios that have been used to test for the presence of the principle in human subjects. Our framework can be used in two different modes: one can use it to build $\mathcal{DDE}$-compliant autonomous systems from scratch, or one can use it to verify that a given AI system is $\mathcal{DDE}$-compliant, by applying a $\mathcal{DDE}$ layer on an existing system or model. For the latter mode, the underlying AI system can be built using any architecture (planners, deep neural networks, Bayesian networks, knowledge-representation systems, or a hybrid); as long as the system exposes a few parameters in its model, such verification is possible. The role of the $\mathcal{DDE}$ layer here is akin to a (dynamic or static) software verifier that examines existing software modules. Finally, we end by presenting initial work on how one can apply our $\mathcal{DDE}$ layer to the STRIPS-style planning model, and to a modified POMDP model. This is preliminary work to illustrate the feasibility of the second mode, and we hope that our initial sketches can be useful for other researchers incorporating $\mathcal{DDE}$ in their own frameworks.
    Comment: 26th International Joint Conference on Artificial Intelligence 2017; Special Track on AI & Autonom
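    As a rough illustration of the second (verification) mode, the toy sketch below checks a STRIPS-like action against a crude double-effect criterion: a harmful effect must not serve as the means to the good effect, and the expected good must outweigh the expected harm. All names and numbers here (Action, dde_permissible, the utilities) are invented for this sketch; the paper's actual formalization uses the deontic cognitive event calculus and is not reproduced here.

```python
# Hypothetical sketch of a DDE-style check over STRIPS-like actions.
# This is NOT the paper's deontic cognitive event calculus formalization;
# it only illustrates the "verification layer" idea from the abstract.
from dataclasses import dataclass, field


@dataclass
class Action:
    name: str
    good_effects: dict[str, float] = field(default_factory=dict)  # effect -> utility (>= 0)
    harm_effects: dict[str, float] = field(default_factory=dict)  # effect -> disutility (>= 0)
    means: set[str] = field(default_factory=set)                   # effects relied on to reach the goal


def dde_permissible(action: Action) -> bool:
    """Crude double-effect test over the parameters the action exposes:
    (1) no harmful effect may be used as a means to the good effect, and
    (2) the total good must outweigh the total harm."""
    if action.means & set(action.harm_effects):
        return False
    return sum(action.good_effects.values()) > sum(action.harm_effects.values())


# Example: a "divert the trolley" style action where the harm is a side effect,
# not a means, and the expected good outweighs it.
switch = Action(
    name="divert",
    good_effects={"five_saved": 5.0},
    harm_effects={"one_harmed": 1.0},
    means={"five_saved"},
)
print(dde_permissible(switch))  # True under this toy criterion
```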

    Designing Normative Theories for Ethical and Legal Reasoning: LogiKEy Framework, Methodology, and Tool Support

    A framework and methodology, termed LogiKEy, for the design and engineering of ethical reasoners, normative theories and deontic logics is presented. The overall motivation is the development of suitable means for the control and governance of intelligent autonomous systems. LogiKEy's unifying formal framework is based on semantical embeddings of deontic logics, logic combinations and ethico-legal domain theories in expressive classical higher-order logic (HOL). This meta-logical approach enables powerful tool support in LogiKEy: off-the-shelf theorem provers and model finders for HOL assist the LogiKEy designer of ethical intelligent agents in flexibly experimenting with underlying logics and their combinations, with ethico-legal domain theories, and with concrete examples, all at the same time. Continuous improvements of these off-the-shelf provers carry over, without any additional effort, to the reasoning performance in LogiKEy. Case studies in which the LogiKEy framework and methodology have been applied and tested give evidence that HOL's undecidability often does not hinder efficient experimentation.
    Comment: 50 pages; 10 figure
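    To give a rough feel for what such a semantical embedding looks like, the sketch below (written in Python rather than in the Isabelle/HOL theories LogiKEy actually targets) treats formulas as functions from worlds to truth values, defines a deontic "obligatory" operator by quantifying over ideal accessible worlds, and defines validity by quantifying over all worlds. The world set, ideality relation, and formula names are assumed purely for illustration.

```python
# Illustrative sketch (Python, not Isabelle/HOL) of a shallow semantical
# embedding of a standard deontic logic: formulas are functions World -> bool,
# and the "obligatory" operator quantifies over ideal accessible worlds.
from typing import Callable

World = str
Formula = Callable[[World], bool]

WORLDS: set[World] = {"w0", "w1", "w2"}
# Ideality relation: IDEAL[w] is the set of ideal alternatives to w (assumed data).
IDEAL: dict[World, set[World]] = {"w0": {"w1", "w2"}, "w1": {"w1"}, "w2": {"w2"}}


def Obl(phi: Formula) -> Formula:
    """O(phi): phi holds in every ideal alternative of the current world."""
    return lambda w: all(phi(v) for v in IDEAL[w])


def valid(phi: Formula) -> bool:
    """A formula is valid iff it holds at every world."""
    return all(phi(w) for w in WORLDS)


# Toy fact: "keep_promise" holds at the ideal worlds w1 and w2 but not at w0.
keep_promise: Formula = lambda w: w in {"w1", "w2"}

print(valid(Obl(keep_promise)))  # True: obligatory in every world of this model
print(valid(keep_promise))       # False: not actually fulfilled at w0
```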

    A Rule of Persons, Not Machines: The Limits of Legal Automation


    Perception is Everything: Repairing the Image of American Drone Warfare

    This thesis will trace the United States’ development of unmanned warfare from its initial use in the World Wars, through the Cold War, to its maturation in the War on Terror. The examination will provide a summary of unmanned warfare’s history, its gradual adoption, and concerns regarding the proliferation of drone use, in order to understand the emphasis on unmanned weapons in the American military. In each phase of development, the examination focuses on a single program to highlight areas of special interest in the modern day. Finally, the treatment of the modern era of unmanned systems focuses on the growing integration of new weapon systems that no longer fill niche roles in the armory but act as fully vetted frontline combatants. Brought together, this examination will show that drones have earned their place as integral tools in the American military inventory and as faithful defenders of democracy.

    Toward Formalizing Teleportation of Pedagogical Artificial Agents

    Our paradigm for the use of artificial agents to teach requires among other things that they persist through time in their interaction with human students, in such a way that they “teleport” or “migrate” from an embodiment at one time t to a different embodiment at a later time t′. In this short paper, we report on initial steps toward the formalization of such teleportation, in order to enable an overseeing AI system to establish, mechanically and verifiably, that the human students in question will likely believe that the very same artificial agent has persisted across such times despite the different embodiments. The system achieves this by demonstrating to the students that different embodiments share one or more privileged beliefs that only one single agent can possess.
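    A minimal, invented sketch of the persistence check the abstract describes (not the paper's formalization): the overseeing system treats the exhibition of a shared privileged belief across two embodiments as evidence that one and the same agent persisted. The class, function, and belief names are hypothetical.

```python
# Toy sketch (invented, not the paper's formal machinery): an overseeing system
# checks that the embodiment at time t' can exhibit a privileged belief that was
# established with the embodiment at time t, as evidence of agent persistence.
from dataclasses import dataclass


@dataclass
class Embodiment:
    label: str
    private_beliefs: set[str]  # beliefs only the persisting agent could hold


def same_agent_evidence(before: Embodiment, after: Embodiment) -> bool:
    """The later embodiment must exhibit at least one privileged belief shared
    with the earlier one; simple set intersection stands in here for the
    epistemic proof obligation sketched in the paper."""
    return bool(before.private_beliefs & after.private_beliefs)


agent_at_t = Embodiment("tablet-avatar", {"secret-handshake-42"})
agent_at_t_prime = Embodiment("classroom-robot", {"secret-handshake-42"})
print(same_agent_evidence(agent_at_t, agent_at_t_prime))  # True
```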