
    Unpredictability of AI

    The young field of AI Safety is still in the process of identifying its challenges and limitations. In this paper, we formally describe one such impossibility result, namely the Unpredictability of AI. We prove that it is impossible to precisely and consistently predict what specific actions a smarter-than-human intelligent system will take to achieve its objectives, even if we know the terminal goals of the system. In conclusion, the impact of Unpredictability on AI Safety is discussed.
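
    A toy illustration of the core intuition (not the paper's formal argument): a predictor with less computational power than an agent can know the agent's terminal goal exactly and still fail to predict its specific action. The knapsack instance, the greedy heuristic, and all names below are illustrative assumptions, not taken from the paper.

```python
# Toy illustration: a bounded predictor mispredicts a stronger optimizer's
# action, even though both know the same terminal goal (maximize knapsack value).
from itertools import combinations

items = {"A": (60, 10), "B": (100, 20), "C": (120, 30)}  # name: (value, weight)
capacity = 50

def agent_action(items, capacity):
    """The 'smarter' agent: exhaustive search, finds the truly optimal subset."""
    best, best_value = frozenset(), 0
    names = list(items)
    for r in range(len(names) + 1):
        for combo in combinations(names, r):
            weight = sum(items[n][1] for n in combo)
            value = sum(items[n][0] for n in combo)
            if weight <= capacity and value > best_value:
                best, best_value = frozenset(combo), value
    return best

def predictor_guess(items, capacity):
    """The weaker predictor: greedy by value/weight ratio, a cheap heuristic."""
    chosen, weight = set(), 0
    for n in sorted(items, key=lambda n: items[n][0] / items[n][1], reverse=True):
        if weight + items[n][1] <= capacity:
            chosen.add(n)
            weight += items[n][1]
    return frozenset(chosen)

action = agent_action(items, capacity)    # {'B', 'C'}: value 220
guess = predictor_guess(items, capacity)  # {'A', 'B'}: value 160
print(f"agent: {sorted(action)}, predictor: {sorted(guess)}, match: {action == guess}")
```

    Knowing the goal only narrows the prediction to "some high-value subset"; pinning down the exact action here requires redoing the agent's full search, which mirrors the paper's claim about predicting systems more capable than the predictor.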

    Personal Universes: A Solution to the Multi-Agent Value Alignment Problem

    AI Safety researchers attempting to align the values of highly capable intelligent systems with those of humanity face a number of challenges, including personal value extraction, multi-agent value merger, and finally in-silico encoding. State-of-the-art research in value alignment shows difficulties at every stage of this process, but the merger of incompatible preferences is a particularly difficult challenge to overcome. In this paper we assume that the value extraction problem will be solved and propose a possible way to implement an AI solution that optimally aligns with the individual preferences of each user. We conclude by analyzing the benefits and limitations of the proposed approach.
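
    To see why merging incompatible preferences is hard, consider the classic Condorcet cycle from social choice theory: pairwise majority voting over three individually coherent rankings produces no consistent merged ranking at all. This is a standard textbook illustration, not code from the paper; the user names and options are hypothetical.

```python
# Condorcet cycle: three users with coherent individual rankings, yet pairwise
# majority aggregation yields a cycle, so no single merged ranking satisfies all.
from itertools import combinations

# Each user ranks options best-to-worst (hypothetical preferences).
rankings = {
    "user1": ["A", "B", "C"],
    "user2": ["B", "C", "A"],
    "user3": ["C", "A", "B"],
}

def majority_prefers(x, y):
    """True if a majority of users rank x above y."""
    votes = sum(r.index(x) < r.index(y) for r in rankings.values())
    return votes > len(rankings) / 2

for x, y in combinations("ABC", 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"majority prefers {winner} over {loser}")
# Prints: A over B, C over A, B over C -- a cycle, so the "merged" preference
# is not an ordering at all.
```

    Giving each user an individually optimized personal universe, as the paper proposes, sidesteps this aggregation step entirely rather than trying to resolve the cycle.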

    Human ≠ AGI

    The terms Artificial General Intelligence (AGI) and Human-Level Artificial Intelligence (HLAI) have been used interchangeably to refer to the Holy Grail of Artificial Intelligence (AI) research: the creation of a machine capable of achieving goals in a wide range of environments. However, the widespread implicit assumption of equivalence between the capabilities of AGI and HLAI appears to be unjustified, as humans are not general intelligences. In this paper, we prove this distinction.

    Unexplainability and Incomprehensibility of Artificial Intelligence

    Explainability and comprehensibility of AI are important requirements for intelligent systems deployed in real-world domains. Users want, and frequently need, to understand how decisions impacting them are made. Similarly, it is important to understand how an intelligent system functions for safety and security reasons. In this paper, we describe two complementary impossibility results (Unexplainability and Incomprehensibility), essentially showing that advanced AIs would not be able to accurately explain some of their decisions, and that, for the decisions they could explain, people would not understand some of those explanations.

    The Pharmacological Significance of Mechanical Intelligence and Artificial Stupidity

    By drawing on the philosophy of Bernard Stiegler, the phenomenon of mechanical (a.k.a. artificial, digital, or electronic) intelligence is explored in terms of its real significance as an ever-repeating threat of the reemergence of stupidity (as cowardice), which can be transformed into knowledge (pharmacological analysis of poisons and remedies) by practices of care, through the outlook of what researchers equivocally describe as “artificial stupidity”, which has been identified both as a new direction in the future of computer science and machine problem solving and as a new difficulty to be overcome. I weave together a web of “artificial stupidity”, which denotes the mechanical (1), the human (2), or the global (3). With regard to machine intelligence, artificial stupidity refers to:
    1a) Weak A.I., or a rhetorical inversion designating contemporary practices of narrow, task-based procedures performed by algorithms in opposition to “True A.I.”;
    1b) the restriction or employment of constraints that weaken the effectiveness of A.I., which is to say a “dumbing-down” of A.I. by programmers who intentionally introduce mistakes for safety and human-interaction purposes;
    1c) the failure of machines to perform designated tasks;
    1d) a lack of noetic capacity, i.e., a lack of moral and ethical discretion;
    1e) a lack of causal reasoning (true intelligence) as opposed to statistical, associative “curve fitting”;
    2) the phenomenon of increasing human “stupidity” or drive-based behaviors, understood as the degradation of human intelligence and/or “intelligent human behavior” through technics; and finally,
    3) the global phenomenon of increasing entropy due to a black-box economy of closed systems and/or industry consolidation.

    From Novelty Detection to a Genetic Algorithm Optimized Classification for the Diagnosis of a SCADA-Equipped Complex Machine

    In the field of diagnostics, the fundamental task of detecting damage is essentially a binary classification problem, which is addressed in many cases via Novelty Detection (ND): an observation is classified as novel if it differs significantly from reference, healthy data. In practice, ND is implemented by summarizing a multivariate dataset with univariate distance information called a Novelty Index (NI). As many different approaches are possible for producing NIs, this analysis studies the possibility of implementing a simple classifier in a reduced-dimensionality space of NIs. Beyond a simple decision-tree-like classification method, the process of obtaining the NIs can serve as a dimension-reduction method, and, in turn, the NIs can be used with other classification algorithms. Finally, a case study is analyzed using the data published by the Prognostics and Health Management Europe (PHME) society for the 2021 Data Challenge.
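
    As a concrete sketch of the ND pipeline described above: fit a reference ("healthy") distribution, summarize each new multivariate observation with a univariate Novelty Index (here a Mahalanobis distance, one common choice among the many the abstract alludes to), and classify by thresholding. The synthetic data and the percentile threshold rule are illustrative assumptions, not the authors' exact method.

```python
# Minimal Novelty Detection sketch: Mahalanobis-distance Novelty Index (NI)
# against healthy reference data, thresholded for binary damage classification.
import numpy as np

rng = np.random.default_rng(0)

# Reference (healthy) observations: 500 samples of a 4-channel feature vector.
healthy = rng.normal(0.0, 1.0, size=(500, 4))
mean = healthy.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

def novelty_index(x):
    """Univariate NI summarizing a multivariate observation: Mahalanobis distance."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

# Set the threshold from the healthy NIs themselves, e.g. their 99th percentile.
threshold = np.percentile([novelty_index(x) for x in healthy], 99)

new_obs = np.array([3.5, -2.0, 2.8, 0.5])  # a suspicious observation
ni = novelty_index(new_obs)
print(f"NI = {ni:.2f}, threshold = {threshold:.2f}, "
      f"classified as {'novel (damage)' if ni > threshold else 'healthy'}")
```

    The scalar NI is exactly the kind of dimension reduction the abstract mentions: several such indices, each computed against a different reference model, can form the low-dimensional feature space fed to a downstream classifier.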