
    A Case for Machine Ethics in Modeling Human-Level Intelligent Agents

    This paper focuses on the research field of machine ethics and how it relates to a technological singularity: a hypothesized future event in which artificial machines attain greater-than-human intelligence. One problem related to the singularity centers on whether human values and norms would survive such an event. To help ensure that they do, a number of artificial intelligence researchers have focused on developing artificial moral agents, that is, machines capable of moral reasoning, judgment, and decision-making. To date, different frameworks for arriving at such agents have been put forward, but there is no firm consensus on which framework is likely to yield a positive result. Given the body of work they have already contributed to the study of moral agency, philosophers are well placed to add to the growing literature on artificial moral agency; in doing so, they could also consider how that concept bears on other important philosophical concepts.

    Alert-BDI: BDI Model with Adaptive Alertness through Situational Awareness

    In this paper, we address the problems faced by a group of agents that possess situational awareness but lack a security mechanism, by introducing an adaptive risk management system. The Belief-Desire-Intention (BDI) architecture lacks a framework that would facilitate such an adaptive risk management system built on the agents' situational awareness. We extend the BDI architecture with the concept of adaptive alertness: agents modify their level of alertness by monitoring the risks faced by themselves and by their peers. Alert-BDI enables agents to detect and assess the risks they face in an efficient manner, thereby increasing operational efficiency and resistance against attacks.
    Comment: 14 pages, 3 figures. Submitted to ICACCI 2013, Mysore, India.
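    The abstract describes the adaptive-alertness mechanism only informally. Purely as an illustration, the Python sketch below blends an agent's own risk estimate with risk levels reported by its peers to produce an alertness level; all names, weights, and thresholds are hypothetical and are not taken from Alert-BDI.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class AlertAgent:
    """Toy agent that adapts its alertness from own and peer risk reports."""
    name: str
    alertness: float = 0.0                 # 0.0 = relaxed, 1.0 = maximally alert
    own_risk: float = 0.0                  # risk the agent senses directly
    peer_risks: list = field(default_factory=list)

    def observe(self, risk: float) -> None:
        """Update the agent's own situational risk estimate."""
        self.own_risk = risk

    def receive_peer_report(self, risk: float) -> None:
        """Record a risk level reported by a peer agent."""
        self.peer_risks.append(risk)

    def update_alertness(self, own_weight: float = 0.7) -> float:
        """Blend own risk with the mean of peer-reported risks."""
        peer_risk = mean(self.peer_risks) if self.peer_risks else 0.0
        self.alertness = own_weight * self.own_risk + (1 - own_weight) * peer_risk
        self.peer_risks.clear()            # reports are consumed each cycle
        return self.alertness

agent = AlertAgent("a1")
agent.observe(0.8)                         # the agent itself senses high risk
agent.receive_peer_report(0.4)             # a peer reports moderate risk
print(agent.update_alertness())            # ~0.68 -> heightened alertness
```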

    Information system support in construction industry with semantic web technologies and/or autonomous reasoning agents

    Information technology support is hard to find for the early design phases of the architectural design process. Many of the existing issues in such design decision support tools appear to be caused by a mismatch between the ways in which designers think and the ways in which information systems aim to give support. We therefore investigated existing theories of design thinking and compared them to the way in which design decision support systems provide information to the designer. We identify two main strategies for information system support in the early design phase: (1) applications for making design try-outs, and (2) applications acting as autonomous reasoning agents. We outline preview implementations of both approaches and indicate to what extent these strategies can improve information system support for the architectural designer.
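    As a rough sketch of the second strategy, an autonomous reasoning agent operating over a semantic web model of the design, the Python fragment below stores a toy building model as RDF triples with rdflib and has the agent flag rooms that violate a minimum-area rule. The vocabulary (ex:Room, ex:area) and the constraint are invented for illustration and are not taken from the paper.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/building#")

# Toy semantic-web model of an evolving building design.
g = Graph()
g.add((EX.kitchen, RDF.type, EX.Room))
g.add((EX.kitchen, EX.area, Literal(8.5)))
g.add((EX.livingRoom, RDF.type, EX.Room))
g.add((EX.livingRoom, EX.area, Literal(24.0)))

MIN_AREA = 10.0  # invented design rule: every room needs at least 10 m^2

# The "agent" re-runs this check autonomously as the design evolves.
query = """
    SELECT ?room ?area WHERE {
        ?room a ex:Room ;
              ex:area ?area .
    }
"""
for room, area in g.query(query, initNs={"ex": EX}):
    if float(area) < MIN_AREA:
        print(f"Design warning: {room} has area {area} (< {MIN_AREA} m^2)")
```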

    On Automating the Doctrine of Double Effect

    The doctrine of double effect (DDE) is a long-studied ethical principle that governs when actions having both positive and negative effects are to be allowed. The goal in this paper is to automate DDE. We briefly present the doctrine, and use a first-order modal logic, the deontic cognitive event calculus, as our framework to formalize it. We present formalizations of increasingly stronger versions of the principle, including what is known as the doctrine of triple effect. We then use our framework to successfully simulate scenarios that have been used to test for the presence of the principle in human subjects. Our framework can be used in two different modes: one can use it to build DDE-compliant autonomous systems from scratch, or one can use it to verify that a given AI system is DDE-compliant by applying a DDE layer on top of an existing system or model. For the latter mode, the underlying AI system can be built using any architecture (planners, deep neural networks, Bayesian networks, knowledge-representation systems, or a hybrid); as long as the system exposes a few parameters in its model, such verification is possible. The role of the DDE layer here is akin to a (dynamic or static) software verifier that examines existing software modules. Finally, we present initial work on how one can apply our DDE layer to the STRIPS-style planning model and to a modified POMDP model. This is preliminary work to illustrate the feasibility of the second mode, and we hope that our initial sketches can be useful to other researchers incorporating DDE into their own frameworks.
    Comment: 26th International Joint Conference on Artificial Intelligence 2017; Special Track on AI & Autonomy.
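    To make the verification mode concrete, here is a minimal, hypothetical Python sketch of a DDE layer filtering STRIPS-style actions against the doctrine's informal clauses: the act itself is not forbidden, the harm is neither intended nor a means to the good effect, and the good outweighs the harm. The data structures and utility values are invented for illustration; the paper's actual formalization uses the deontic cognitive event calculus, not this simplification.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """STRIPS-flavored action annotated with the fields a DDE check needs."""
    name: str
    forbidden: bool        # clause 1: the act itself must not be forbidden
    good_effects: set
    bad_effects: set
    intended_effects: set  # what the agent actually aims at
    means: set             # effects causally required to bring about the good
    utility: dict          # effect -> signed utility

def dde_compliant(a: Action) -> bool:
    if a.forbidden:                              # clause 1: act is permissible
        return False
    if a.intended_effects & a.bad_effects:       # clause 2: harm is not intended
        return False
    if a.means & a.bad_effects:                  # clause 3: harm is not a means
        return False
    good = sum(a.utility[e] for e in a.good_effects)
    bad = sum(a.utility[e] for e in a.bad_effects)
    return good + bad > 0                        # clause 4: proportionality

# Classic trolley-style case: the harm is foreseen but is a side effect,
# not the intended outcome and not the means to saving the five.
divert = Action(
    name="divert_trolley",
    forbidden=False,
    good_effects={"five_saved"},
    bad_effects={"one_harmed"},
    intended_effects={"five_saved"},
    means={"track_switched"},
    utility={"five_saved": 5.0, "one_harmed": -1.0},
)
print(dde_compliant(divert))  # True
```

    A planner-agnostic verifier of this shape only needs the underlying system to expose its actions' effects and intentions, which matches the abstract's point that the wrapped AI system can use any architecture.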