6,209 research outputs found
Self-Optimizing and Pareto-Optimal Policies in General Environments based on Bayes-Mixtures
The problem of making sequential decisions in unknown probabilistic
environments is studied. In cycle $k$, action $y_k$ results in perception $x_k$
and reward $r_k$, where all quantities in general may depend on the complete
history. The perception $x_k$ and reward $r_k$ are sampled from the (reactive)
environmental probability distribution $\mu$. This very general setting
includes, but is not limited to, (partially observable, $k$-th order) Markov
decision processes. Sequential decision theory tells us how to act in order to
maximize the total expected reward, called value, if $\mu$ is known.
Reinforcement learning is usually used if $\mu$ is unknown. In the Bayesian
approach one defines a mixture distribution $\xi$ as a weighted sum of
distributions $\nu \in \mathcal{M}$, where $\mathcal{M}$ is any class of distributions including
the true environment $\mu$. We show that the Bayes-optimal policy $p^\xi$ based
on the mixture $\xi$ is self-optimizing in the sense that the average value
converges asymptotically for all $\mu \in \mathcal{M}$ to the optimal value achieved by
the (infeasible) Bayes-optimal policy $p^\mu$ which knows $\mu$ in advance. We
show that the necessary condition that $\mathcal{M}$ admits self-optimizing policies at
all is also sufficient. No other structural assumptions are made on $\mathcal{M}$. As
an example application, we discuss ergodic Markov decision processes, which
allow for self-optimizing policies. Furthermore, we show that $p^\xi$ is
Pareto-optimal in the sense that there is no other policy yielding higher or
equal value in {\em all} environments $\nu \in \mathcal{M}$ and a strictly higher value in
at least one.
Comment: 15 pages
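As a pointer for readers, the mixture construction sketched in this abstract can be written out as follows (a standard paraphrase, with the weight symbols $w_\nu$ assumed here rather than quoted from the paper):

    \xi(x_{1:k} \mid y_{1:k}) = \sum_{\nu \in \mathcal{M}} w_\nu \, \nu(x_{1:k} \mid y_{1:k}), \qquad w_\nu > 0, \quad \sum_{\nu \in \mathcal{M}} w_\nu \le 1.

The self-optimizing claim is then that the per-cycle value of the policy $p^\xi$ converges to that of the informed policy $p^\mu$ for every environment $\mu \in \mathcal{M}$.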
Towards Informed Exploration for Deep Reinforcement Learning
In this thesis, we discuss various techniques for improving exploration in deep reinforcement learning. We begin with a brief review of reinforcement learning (RL) and the fundamental exploration vs. exploitation trade-off. We then review how deep RL has improved upon classical RL and summarize six categories of recent exploration methods for deep RL, ordered by increasing usage of prior information. We then examine representative works in three of these categories and discuss their strengths and weaknesses. The first category, represented by Soft Q-learning, uses entropy regularization to encourage exploration. The second category, represented by count-based exploration via hashing, maps states to hash codes for counting and assigns higher exploration bonuses to less frequently encountered states (a sketch of this bonus appears below). The third category utilizes hierarchy and is represented by a modular architecture for RL agents that play StarCraft II. Finally, we conclude that exploration guided by prior knowledge is a promising research direction and suggest topics of potentially high impact.
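As a rough illustration of the second category, the sketch below follows the SimHash-style counting scheme popularized by Tang et al.'s "#Exploration"; the class name, the bonus form beta/sqrt(n), and all parameter values are illustrative assumptions, not code from the thesis:

    # Illustrative sketch (assumed names and parameters), not code from the thesis.
    import numpy as np

    class HashingCountBonus:
        """SimHash-style state counting for exploration bonuses."""
        def __init__(self, state_dim, n_bits=16, beta=0.01, seed=0):
            rng = np.random.default_rng(seed)
            # Random projection; the sign pattern of A @ s is the hash code.
            self.A = rng.standard_normal((n_bits, state_dim))
            self.beta = beta
            self.counts = {}

        def bonus(self, state):
            # Discretize the state into an n_bits binary code and count it.
            code = tuple((self.A @ np.asarray(state, dtype=float) > 0).astype(np.int8))
            self.counts[code] = self.counts.get(code, 0) + 1
            # Rarely visited codes receive larger bonuses, encouraging exploration.
            return self.beta / np.sqrt(self.counts[code])

An agent would add this bonus to the environment reward at each step; n_bits controls how aggressively similar states share counts.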
Geometry of Policy Improvement
We investigate the geometry of optimal memoryless, time-independent decision
making in relation to the amount of information that the acting agent has about
the state of the system. We show that the expected long-term reward, discounted
or per time step, is maximized by policies that randomize among at most $k$
actions whenever at most $k$ world states are consistent with the agent's
observation. Moreover, we show that the expected reward per time step can be
studied in terms of the expected discounted reward. Our main tool is a
geometric version of the policy improvement lemma, which identifies a
polyhedral cone of policy changes in which the state value function increases
for all states.
Comment: 8 pages
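For context, the classical policy improvement lemma that this paper generalizes can be stated as follows (a standard textbook form, paraphrased here rather than quoted from the paper):

    \text{If } \sum_a \pi'(a \mid s) \, Q^{\pi}(s, a) \ge V^{\pi}(s) \text{ for all states } s, \text{ then } V^{\pi'}(s) \ge V^{\pi}(s) \text{ for all } s.

The geometric version described in the abstract characterizes the set of policy changes satisfying this premise simultaneously for all states as a polyhedral cone.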