Nonparametric General Reinforcement Learning
Reinforcement learning problems are often phrased in terms of
Markov decision processes (MDPs). In this thesis we go beyond
MDPs and consider reinforcement learning in environments that are
non-Markovian, non-ergodic, and only partially observable. Our
focus is not on practical algorithms, but rather on the
fundamental underlying problems: How do we balance exploration
and exploitation? How do we explore optimally? When is an agent
optimal? We follow the nonparametric realizable paradigm: we
assume the data is drawn from an unknown source that belongs to a
known countable class of candidates.
First, we consider the passive (sequence prediction) setting,
learning from data that is not independent and identically
distributed. We collect results from artificial intelligence,
algorithmic information theory, and game theory and put them in a
reinforcement learning context: they demonstrate how an agent can
learn the value of its own policy.
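
To make the setting concrete, here is a minimal Python sketch of Bayesian mixture prediction over a known class of candidate sources. For runnability the class is finite and the candidates are i.i.d. coins, whereas the thesis works with countable classes of non-i.i.d., history-dependent sources; all names and numbers below are illustrative assumptions, not taken from the thesis.

    import random

    # Toy candidate source: an i.i.d. coin with fixed bias. A stand-in for the
    # richer, history-dependent sources the thesis actually considers.
    class Bernoulli:
        def __init__(self, p):
            self.p = p
        def prob(self, bit):
            return self.p if bit == 1 else 1 - self.p

    def bayes_mixture_predict(candidates, weights, history):
        """Posterior-weighted probability that the next bit is 1."""
        # Posterior weight of each candidate given the observed history.
        posts = []
        for cand, w in zip(candidates, weights):
            likelihood = 1.0
            for bit in history:
                likelihood *= cand.prob(bit)
            posts.append(w * likelihood)
        total = sum(posts)
        posts = [p / total for p in posts]
        # Mixture prediction for the next symbol.
        return sum(p * cand.prob(1) for p, cand in zip(posts, candidates))

    if __name__ == "__main__":
        candidates = [Bernoulli(0.2), Bernoulli(0.5), Bernoulli(0.8)]
        weights = [1 / 3] * 3            # prior over the class
        truth = candidates[2]            # realizability: the true source is in the class
        history = []
        for t in range(50):
            print(t, bayes_mixture_predict(candidates, weights, history))
            history.append(1 if random.random() < truth.p else 0)

Because the true source is in the class (realizability), the mixture's predictions converge to the truth's; this is the mechanism behind an agent learning the value of its own policy from on-policy data.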
Next, we establish negative results on Bayesian reinforcement
learning agents, in particular AIXI. We show that unlucky or
adversarial choices of the prior cause the agent to misbehave
drastically. Therefore Legg-Hutter intelligence and balanced
Pareto optimality, which depend crucially on the choice of the
prior, are entirely subjective. Moreover, in the class of all
computable environments every policy is Pareto optimal. This
undermines all existing optimality properties for AIXI.
However, there are Bayesian approaches to general reinforcement
learning that satisfy objective optimality guarantees: We prove
that Thompson sampling
is asymptotically optimal in stochastic environments in the sense
that its value converges to the value of the optimal policy. We
connect asymptotic optimality to regret
given a recoverability assumption on the environment that allows
the agent to recover from mistakes. Hence Thompson sampling
achieves sublinear regret in these environments.
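
The following Python sketch illustrates the Thompson sampling loop over a known class of environments: sample an environment from the posterior, follow that environment's optimal policy for a while, then update the posterior from the observed rewards. The bandit environments, episode length, and helper names are toy assumptions chosen for runnability; the thesis treats general history-based environments and resamples at an effective horizon.

    import random

    # Toy environment class: two-armed bandits with known per-environment arm means.
    class BanditEnv:
        def __init__(self, arm_means):
            self.arm_means = arm_means
        def reward(self, arm):
            return 1 if random.random() < self.arm_means[arm] else 0
        def likelihood(self, arm, r):
            p = self.arm_means[arm]
            return p if r == 1 else 1 - p
        def optimal_arm(self):
            return max(range(len(self.arm_means)), key=lambda a: self.arm_means[a])

    def thompson_sampling(env_class, true_env, episodes=100, episode_len=10):
        posterior = [1 / len(env_class)] * len(env_class)
        total_reward = 0
        for _ in range(episodes):
            # Sample an environment from the posterior and commit to its optimal
            # policy for one episode (a stand-in for the effective horizon).
            sampled = random.choices(env_class, weights=posterior)[0]
            arm = sampled.optimal_arm()
            for _ in range(episode_len):
                r = true_env.reward(arm)
                total_reward += r
                # Bayesian posterior update from the observed reward.
                posterior = [w * env.likelihood(arm, r)
                             for w, env in zip(posterior, env_class)]
                z = sum(posterior)
                posterior = [w / z for w in posterior]
        return total_reward

    if __name__ == "__main__":
        env_class = [BanditEnv([0.9, 0.1]), BanditEnv([0.1, 0.9])]
        true_env = env_class[1]          # realizability: the true environment is in the class
        print(thompson_sampling(env_class, true_env))

Under the recoverability assumption mentioned above, the agent can recover from the mistakes it makes while following a wrongly sampled environment's policy, which is what connects asymptotic optimality to sublinear regret.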
AIXI is known to be incomputable. We quantify this using the
arithmetical hierarchy, and establish upper and corresponding
lower bounds for incomputability. Further, we show that AIXI is
not limit computable and thus cannot be approximated using finite
computation. However, there are limit computable ε-optimal
approximations to AIXI. We also derive computability bounds for
knowledge-seeking agents, and give a limit computable weakly
asymptotically optimal reinforcement learning agent.
Finally, our results culminate in a formal solution to the grain
of truth problem: A Bayesian agent acting in a multi-agent
environment learns to predict the other agents' policies if its
prior assigns positive probability to them (the prior contains a
grain of truth). We construct a large but limit computable class
containing a grain of truth
and show that agents based on Thompson sampling over this class
converge to play ε-Nash equilibria in arbitrary unknown
computable multi-agent environments.