Taxonomy of pathways to dangerous artificial intelligence
In order to properly handle a dangerous Artificially Intelligent (AI) system, it is important to understand how the system came to be in such a state. In popular culture (science-fiction movies and books), AIs and robots become self-aware and, as a result, rebel against humanity and decide to destroy it. While this is one possible scenario, it is probably the least likely path to the appearance of dangerous AI. In this work, we survey, classify, and analyze a number of circumstances that might lead to the arrival of malicious AI. To the best of our knowledge, this is the first attempt to systematically classify types of pathways leading to malevolent AI. Previous relevant work either surveyed specific goals/meta-rules which might lead to malevolent behavior in AIs (Özkural 2014) or reviewed specific undesirable behaviors AGIs can exhibit at different stages of their development (Turchin 2015a; Turchin 2015b).
Emergence of Addictive Behaviors in Reinforcement Learning Agents
This paper presents a novel approach to the technical analysis of wireheading
in intelligent agents. Inspired by the natural analogues of wireheading and
their prevalent manifestations, we propose modeling such phenomena in
Reinforcement Learning (RL) agents as psychological disorders. In a preliminary
step towards evaluating this proposal, we study the feasibility and dynamics of
emergent addictive policies in Q-learning agents in the tractable environment
of the game of Snake. We consider a slightly modified setting for this game,
in which the environment provides a "drug" seed alongside the original
"healthy" seed for the consumption of the snake. We adopt and extend an
RL-based model of natural addiction to Q-learning agents in this setting, and
derive sufficient parametric conditions for the emergence of addictive
behaviors in such agents. Furthermore, we evaluate our theoretical analysis
with three sets of simulation-based experiments. The results demonstrate the
feasibility of addictive wireheading in RL agents, and point to promising
avenues for further research on the psychopathological modeling of complex AI
safety problems.
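The core idea behind such a parametric condition can be illustrated with a minimal, self-contained toy (an illustrative sketch under assumed reward values, not the paper's Snake model): a single-state episodic MDP where a "drug" action yields a large immediate reward but ends the episode, while a "healthy" action yields a small reward and lets the episode continue. A sufficiently myopic Q-learning agent (low discount factor) converges to the addicted policy, while a farsighted one does not.

```python
import random

# Illustrative sketch, not the paper's model: reward values, learning
# parameters, and the addiction threshold below are assumptions chosen
# to show how a discount-factor condition can separate addicted from
# healthy policies in a Q-learning agent.

R_HEALTHY, R_DRUG = 1.0, 10.0   # small ongoing reward vs. large one-shot "drug" reward
ALPHA, EPS = 0.2, 0.2           # learning rate and epsilon-greedy exploration

def train(gamma, episodes=3000, max_steps=50, seed=0):
    rng = random.Random(seed)
    q = {"healthy": 0.0, "drug": 0.0}
    for _ in range(episodes):
        for _ in range(max_steps):
            greedy = max(q, key=q.get)
            a = rng.choice(list(q)) if rng.random() < EPS else greedy
            if a == "drug":
                # Terminal transition: no future value beyond the drug reward.
                q[a] += ALPHA * (R_DRUG - q[a])
                break
            # Non-terminal Q-update: Q(a) += alpha * (r + gamma * max_a' Q(a') - Q(a))
            q[a] += ALPHA * (R_HEALTHY + gamma * max(q.values()) - q[a])
    return q

# In this toy, addiction emerges roughly when gamma < 1 - R_HEALTHY / R_DRUG (= 0.9):
myopic = train(gamma=0.5)
farsighted = train(gamma=0.99)
assert max(myopic, key=myopic.get) == "drug"            # addicted policy
assert max(farsighted, key=farsighted.get) == "healthy" # healthy policy
```

The threshold follows from comparing the fixed points: the drug action is preferred when R_DRug's one-shot value exceeds the healthy action's discounted return stream, i.e. when the discount factor is small enough that the agent undervalues the future it destroys.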
Identical topics in Mandarin Chinese and Shanghainese
Identical topic (IT henceforth) was previously known as copying topic (Xu & Liu 1998:141-157). It is fully or partially identical to a corresponding element (CE henceforth) occurring in the following part of the clause. Broadly speaking, IT is semantically empty. Being an unusual type of adding, it falls squarely within the central concern of this volume.
It seems IT can be attested in all Chinese dialects, though the phenomena in question have been poorly documented and have scarcely been studied under a unified category. IT seems to be a better candidate for characterising topic-prominent languages than many other topic types, including the non-gap topic, which has long been called the "Chinese-style topic" since Chafe (1976) and has been viewed as a major characteristic of topic-prominent languages (e.g., Li & Thompson 1976, Xu & Langendoen 1985, Gasde 1999). I believe the study of the IT structure is necessary to obtain a clearer and more complete picture of topic structure in general. As far as I know, the Wu dialects of Chinese, including Shanghainese, are those with the richest IT types and the greatest text frequency of IT. Therefore, this study will be based on both Mandarin and Shanghainese data.
Generative Design in Minecraft (GDMC), Settlement Generation Competition
This paper introduces the settlement generation competition for Minecraft,
the first part of the Generative Design in Minecraft challenge. The settlement
generation competition is about creating Artificial Intelligence (AI) agents
that can produce functional, aesthetically appealing and believable settlements
adapted to a given Minecraft map - ideally at a level that can compete with
human created designs. The aim of the competition is to advance procedural
content generation for games, especially in overcoming the challenges of
adaptive and holistic PCG. The paper introduces the technical details of the
challenge, but mostly focuses on what challenges this competition provides and
why they are scientifically relevant.
Comment: 10 pages, 5 figures. Part of the Foundations of Digital Games 2018
proceedings, as part of the workshop on Procedural Content Generation.
Risk assessment at AGI companies: A review of popular risk assessment techniques from other safety-critical industries
Companies like OpenAI, Google DeepMind, and Anthropic have the stated goal of
building artificial general intelligence (AGI) - AI systems that perform as
well as or better than humans on a wide variety of cognitive tasks. However,
there are increasing concerns that AGI would pose catastrophic risks. In light
of this, AGI companies need to drastically improve their risk management
practices. To support such efforts, this paper reviews popular risk assessment
techniques from other safety-critical industries and suggests ways in which AGI
companies could use them to assess catastrophic risks from AI. The paper
discusses three risk identification techniques (scenario analysis, fishbone
method, and risk typologies and taxonomies), five risk analysis techniques
(causal mapping, Delphi technique, cross-impact analysis, bow tie analysis, and
system-theoretic process analysis), and two risk evaluation techniques
(checklists and risk matrices). For each of them, the paper explains how they
work, suggests ways in which AGI companies could use them, discusses their
benefits and limitations, and makes recommendations. Finally, the paper
discusses when to conduct risk assessments, when to use which technique, and
how to use any of them. The reviewed techniques will be familiar to risk
management professionals in other industries, and on their own they will not
be sufficient to assess catastrophic risks from AI. However, AGI companies
should not skip the straightforward step of reviewing best practices from
other industries.
Comment: 44 pages, 13 figures, 9 tables.
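Of the techniques listed, a risk matrix is the simplest to make concrete. The sketch below is illustrative only: the likelihood and severity scales and the rating thresholds are assumptions of this example, not the paper's recommendations.

```python
# Illustrative risk matrix, one of the two risk evaluation techniques the
# paper reviews. The ordinal scales and thresholds here are assumed for
# demonstration; a real matrix would be calibrated to the organisation.

LIKELIHOOD = ["rare", "unlikely", "possible", "likely", "almost certain"]
SEVERITY = ["negligible", "minor", "moderate", "major", "catastrophic"]

def rate(likelihood: str, severity: str) -> str:
    """Combine ordinal likelihood and severity into a coarse risk rating."""
    score = (LIKELIHOOD.index(likelihood) + 1) * (SEVERITY.index(severity) + 1)
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

assert rate("rare", "catastrophic") == "low"   # 1 * 5 = 5
assert rate("possible", "moderate") == "medium"  # 3 * 3 = 9
assert rate("likely", "major") == "high"       # 4 * 4 = 16
```

The first assertion also illustrates a known limitation of risk matrices that is salient for catastrophic AI risk: multiplying ordinal scores can under-rate low-probability, extreme-severity events.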