11 research outputs found
Variational Bayesian Reinforcement Learning with Regret Bounds
We consider the exploration-exploitation trade-off in reinforcement learning
and we show that an agent imbued with an epistemic-risk-seeking utility
function is able to explore efficiently, as measured by regret. The parameter
that controls how risk-seeking the agent is can be optimized to minimize
regret, or annealed according to a schedule. We call the resulting algorithm
K-learning and we show that the K-values that the agent maintains are
optimistic for the expected optimal Q-values at each state-action pair. The
utility function approach induces a natural Boltzmann exploration policy for
which the 'temperature' parameter is equal to the risk-seeking parameter. This
policy achieves a Bayesian regret bound of $\tilde{O}(L^{3/2}\sqrt{SAT})$,
where L is the time horizon, S is the number of states, A is the number of
actions, and T is the total number of elapsed time-steps. K-learning can be
interpreted as mirror descent in the policy space, and it is similar to other
well-known methods in the literature, including Q-learning, soft-Q-learning,
and maximum entropy policy gradient. K-learning is simple to implement, as it
only requires adding a bonus to the reward at each state-action and then
solving a Bellman equation. We conclude with a numerical example demonstrating
that K-learning is competitive with other state-of-the-art algorithms in
practice.
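The abstract's recipe (add an optimism bonus to the reward, solve a soft Bellman equation, act with a Boltzmann policy whose temperature equals the risk-seeking parameter) can be sketched in tabular form. This is an illustrative assumption-laden sketch, not the paper's algorithm: the discount factor, the fixed bonus, and the function name `k_learning_sketch` are all hypothetical choices made here for a runnable example.

```python
import numpy as np

def k_learning_sketch(R, P, bonus, tau, gamma=0.9, n_iters=200):
    """Illustrative tabular sketch of the K-learning idea described above.

    R:     (S, A) mean rewards
    P:     (S, A, S) transition probabilities
    bonus: (S, A) optimism bonus added to the reward (its form is an
           assumption here; the paper derives a specific bonus)
    tau:   risk-seeking parameter, which doubles as the Boltzmann
           exploration temperature
    gamma: discount factor (assumed here to keep iteration convergent)
    """
    S, A = R.shape
    K = np.zeros((S, A))
    for _ in range(n_iters):
        # Soft value: V(s) = tau * log sum_a exp(K(s, a) / tau),
        # computed in a numerically stable way.
        m = K.max(axis=1)
        V = m + tau * np.log(np.exp((K - m[:, None]) / tau).sum(axis=1))
        # Bellman backup with the bonus-augmented reward.
        K = R + bonus + gamma * (P @ V)
    # Boltzmann exploration policy with temperature tau.
    logits = K / tau
    pi = np.exp(logits - logits.max(axis=1, keepdims=True))
    pi /= pi.sum(axis=1, keepdims=True)
    return K, pi
```

The bonus-augmented backup is what makes the resulting K-values optimistic; annealing `tau` downward would make the Boltzmann policy increasingly greedy.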
Did we personalize? Assessing personalization by an online reinforcement learning algorithm using resampling
There is a growing interest in using reinforcement learning (RL) to
personalize sequences of treatments in digital health to support users in
adopting healthier behaviors. Such sequential decision-making problems involve
decisions about when to treat and how to treat based on the user's context
(e.g., prior activity level, location, etc.). Online RL is a promising
data-driven approach for this problem as it learns based on each user's
historical responses and uses that knowledge to personalize these decisions.
However, to decide whether the RL algorithm should be included in an
"optimized" intervention for real-world deployment, we must assess the data
evidence indicating that the RL algorithm is actually personalizing the
treatments to its users. Due to the stochasticity in the RL algorithm, one may
get a false impression that it is learning in certain states and using this
learning to provide specific treatments. We use a working definition of
personalization and introduce a resampling-based methodology for investigating
whether the personalization exhibited by the RL algorithm is an artifact of the
RL algorithm stochasticity. We illustrate our methodology with a case study by
analyzing the data from a physical activity clinical trial called HeartSteps,
which included the use of an online RL algorithm. We demonstrate how our
approach enhances data-driven truth-in-advertising of algorithm personalization
both across all users and within specific users in the study.
Comment: The first two authors contributed equally.
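The core idea, resampling under the algorithm's own stochasticity to check whether apparent personalization could arise by chance, can be sketched as a simple Monte Carlo test. Everything below is an illustrative assumption, not the paper's procedure: the binary state/action setup, the frequency-difference statistic, and the function names are hypothetical.

```python
import numpy as np

def personalization_stat(actions, states):
    """Hypothetical statistic: absolute difference in treatment frequency
    between the two (binary) contexts; near zero if treatment ignores state."""
    return abs(actions[states == 1].mean() - actions[states == 0].mean())

def resampling_pvalue(actions, states, n_resamples=2000, seed=0):
    """Sketch of a resampling check in the spirit described above: regenerate
    actions from a state-independent policy that matches the overall treatment
    rate, so any apparent personalization in the resamples reflects pure
    stochasticity rather than learning."""
    rng = np.random.default_rng(seed)
    observed = personalization_stat(actions, states)
    p_bar = actions.mean()  # overall treatment rate, ignoring state
    null = np.array([
        personalization_stat(rng.binomial(1, p_bar, size=actions.shape), states)
        for _ in range(n_resamples)
    ])
    # Fraction of null resamples at least as extreme as what was observed.
    return (null >= observed).mean()
```

A small p-value indicates the observed state-dependence of treatment is unlikely under a non-personalizing policy; a large one warns that the "personalization" may be an artifact of algorithm stochasticity.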