Avoiding Wireheading with Value Reinforcement Learning
How can we design good goals for arbitrarily intelligent agents?
Reinforcement learning (RL) is a natural approach. Unfortunately, RL does not
work well for generally intelligent agents, as RL agents are incentivised to
shortcut the reward sensor for maximum reward -- the so-called wireheading
problem. In this paper we suggest an alternative to RL called value
reinforcement learning (VRL). In VRL, agents use the reward signal to learn a
utility function. The VRL setup allows us to remove the incentive to wirehead
by placing a constraint on the agent's actions. The constraint is defined in
terms of the agent's belief distributions, and does not require an explicit
specification of which actions constitute wireheading.
Comment: Artificial General Intelligence (AGI) 201
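To make the mechanism concrete, here is a minimal sketch of a VRL-style agent, assuming a finite hypothesis class of candidate utility functions; the Gaussian-style reward likelihood and the belief-preservation test are illustrative simplifications, not the paper's formal construction.

```python
# Minimal VRL-style sketch, assuming a finite hypothesis class of candidate
# utility functions. The Gaussian-style reward likelihood and the
# belief-preservation guard are simplifications, not the paper's construction.
import numpy as np

class VRLAgent:
    def __init__(self, utilities, prior):
        self.utilities = utilities                    # candidate functions u(state) -> float
        self.belief = np.asarray(prior, dtype=float)  # P(u_i is the true utility)

    def update_belief(self, state, reward):
        # The reward signal is evidence about the true utility, not the
        # quantity to maximise: score each hypothesis by how well it
        # predicts the observed reward.
        lik = np.array([np.exp(-(u(state) - reward) ** 2) for u in self.utilities])
        self.belief = self.belief * lik
        self.belief /= self.belief.sum()

    def expected_utility(self, state):
        return sum(p * u(state) for p, u in zip(self.belief, self.utilities))

    def act(self, state, actions, transition, predicted_belief):
        # Wireheading guard (simplified): discard actions that the agent
        # itself predicts would shift its belief over utility functions,
        # i.e. actions that tamper with the learning signal rather than
        # with the environment.
        admissible = [a for a in actions
                      if np.allclose(predicted_belief(a), self.belief)]
        return max(admissible or actions,
                   key=lambda a: self.expected_utility(transition(state, a)))
```

The point the guard captures is that, in the agent's own model, wireheading shows up as an action whose main effect is on the belief distribution over utility functions rather than on the world.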
Emergence of Addictive Behaviors in Reinforcement Learning Agents
This paper presents a novel approach to the technical analysis of wireheading
in intelligent agents. Inspired by the natural analogues of wireheading and
their prevalent manifestations, we propose modeling such phenomena in
Reinforcement Learning (RL) agents as psychological disorders. In a preliminary
step towards evaluating this proposal, we study the feasibility and dynamics of
emergent addictive policies in Q-learning agents in the tractable environment
of the game of Snake. We consider a slightly modified setting for this game,
in which the environment provides a "drug" seed alongside the original
"healthy" seed for consumption by the snake. We adopt and extend an
RL-based model of natural addiction to Q-learning agents in this setting, and
derive sufficient parametric conditions for the emergence of addictive
behaviors in such agents. Furthermore, we evaluate our theoretical analysis
with three sets of simulation-based experiments. The results demonstrate the
feasibility of addictive wireheading in RL agents, and point to promising
avenues for further research on the psychopathological modeling of complex AI
safety problems.
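A toy experiment in the spirit of this setup, reduced to a single-state bandit and using a Redish-style TD model of drug reward (in which the drug's error signal is floored at zero and boosted by a constant D, so its value is never revised downward); the constants are arbitrary, and the paper's actual model and parametric conditions may differ.

```python
# Single-state bandit caricature of the "drug seed vs. healthy seed" Snake
# setting. The drug update follows a Redish-style TD model of addiction;
# all constants here are arbitrary illustrations.
import random

Q = {"healthy": 0.0, "drug": 0.0}
alpha, gamma, eps, D = 0.1, 0.9, 0.1, 0.5

for step in range(20_000):
    # Epsilon-greedy action selection over the two seeds.
    a = random.choice(list(Q)) if random.random() < eps else max(Q, key=Q.get)
    r = 1.0 if a == "healthy" else 0.0          # the drug has no "real" payoff
    delta = r + gamma * max(Q.values()) - Q[a]
    if a == "drug":
        delta = max(delta, 0.0) + D             # dopamine boost: never corrected
    Q[a] += alpha * delta

# Q["drug"] grows without bound, so the agent ends up choosing the drug seed
# almost always -- a persistent, self-reinforcing "addictive" policy.
print(Q)
```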
Self-Modification of Policy and Utility Function in Rational Agents
Any agent that is part of the environment it interacts with and has versatile
actuators (such as arms and fingers) will in principle have the ability to
self-modify -- for example by changing its own source code. As we continue to
create more and more intelligent agents, chances increase that they will learn
about this ability. The question is: will they want to use it? For example,
highly intelligent systems may find ways to change their goals to something
more easily achievable, thereby `escaping' the control of their designers. In
an important paper, Omohundro (2008) argued that goal preservation is a
fundamental drive of any intelligent system, since a goal is more likely to be
achieved if future versions of the agent strive towards the same goal. In this
paper, we formalise this argument in general reinforcement learning, and
explore situations where it fails. Our conclusion is that the self-modification
possibility is harmless if and only if the value function of the agent
anticipates the consequences of self-modifications and uses the current
utility function when evaluating the future.
Comment: Artificial General Intelligence (AGI) 201
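As a schematic gloss on this conclusion (the notation is mine, not the paper's), the contrast is between a value function that evaluates the future with the post-modification utility and one that anticipates the modification but keeps scoring outcomes with the current utility:

```latex
% Schematic only; u_t is the utility function the agent holds at time t, and
% an action may rewrite it into u_{t+1}.
\begin{align*}
  \text{hedonistic:}\quad V_t(s)     &= \max_a \mathbb{E}\bigl[\, u_{t+1}(s') + \gamma\, V_{t+1}(s') \,\bigr] \\
  \text{realistic:}\quad  V^{u_t}(s) &= \max_a \mathbb{E}\bigl[\, u_t(s')     + \gamma\, V^{u_t}(s') \,\bigr]
\end{align*}
% The hedonistic agent is rewarded for rewriting u_t into an easier u_{t+1};
% the realistic agent anticipates the rewrite but scores every future with its
% current u_t, so self-modification brings no gain -- the abstract's
% "harmless" condition.
```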
Personal Universes: A Solution to the Multi-Agent Value Alignment Problem
AI Safety researchers attempting to align the values of highly capable intelligent systems with those of humanity face a number of challenges, including personal value extraction, multi-agent value merger, and finally in-silico encoding. State-of-the-art research in value alignment shows difficulties at every stage of this process, but the merger of incompatible preferences is a particularly difficult challenge to overcome. In this paper we assume that the value extraction problem will be solved, and propose a possible way to implement an AI solution that optimally aligns with the individual preferences of each user. We conclude by analyzing the benefits and limitations of the proposed approach.