Dynamically optimal treatment allocation using Reinforcement Learning
Devising guidance on how to assign individuals to treatment is an important
goal in empirical research. In practice, individuals often arrive sequentially,
and the planner faces various constraints, such as a limited budget or
capacity, borrowing constraints, or the need to place people in a queue. For
instance, a
governmental body may receive a budget outlay at the beginning of a year, and
it may need to decide how best to allocate resources within the year to
individuals who arrive sequentially. In this and other examples involving
inter-temporal trade-offs, previous work on devising optimal policy rules in a
static context is either not applicable or sub-optimal. Here we show how one
can use offline observational data to estimate an optimal policy rule that
maximizes expected welfare in this dynamic context. We allow the class of
policy rules to be restricted for legal, ethical or incentive compatibility
reasons. The problem is equivalent to one of optimal control under a
constrained policy class, and we exploit recent developments in Reinforcement
Learning (RL) to propose an algorithm that solves it. The algorithm is easily
implementable with speedups achieved through multiple RL agents learning in
parallel processes. We also characterize the statistical regret from using our
estimated policy rule by casting the evolution of the value function under each
policy in a Partial Differential Equation (PDE) form and using the theory of
viscosity solutions to PDEs. We find that in most examples the policy regret
decays at the same rate as in the static case.
Comment: 67 pages
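The setting described above can be illustrated with a toy sketch. This is not the paper's algorithm; it is a minimal tabular fitted-Q-style routine on synthetic "offline" data, where the state includes the remaining budget so that treating one individual reduces what is available for later arrivals. The individual types, reward model, and all constants are invented for illustration.

```python
# Hypothetical sketch: offline Q-learning for sequential treatment
# assignment under a budget constraint. State = (individual type,
# remaining budget); action 1 = treat. All numbers are made up.
import random

random.seed(0)

N_TYPES, BUDGET, GAMMA, ALPHA = 2, 3, 0.95, 0.1

def reward(typ, action):
    # Assumed outcome model: treating a type-1 individual helps far more.
    return action * (0.1 + 0.9 * typ)

# Synthetic offline observational data under a random logging policy.
data = []
for _ in range(5000):
    b = random.randint(1, BUDGET)          # remaining budget
    typ = random.randint(0, N_TYPES - 1)   # arriving individual's type
    a = random.randint(0, 1)               # logged (random) action
    r = reward(typ, a)
    typ_next = random.randint(0, N_TYPES - 1)
    data.append((typ, b, a, r, typ_next, b - a))

# Repeated Q-learning sweeps over the fixed offline batch.
Q = {(t, b, a): 0.0
     for t in range(N_TYPES) for b in range(BUDGET + 1) for a in (0, 1)}
for _ in range(100):
    for (t, b, a, r, t2, b2) in data:
        feasible = (0, 1) if b2 > 0 else (0,)   # cannot treat with no budget
        target = r + GAMMA * max(Q[(t2, b2, a2)] for a2 in feasible)
        Q[(t, b, a)] += ALPHA * (target - Q[(t, b, a)])

# Estimated policy: treat only when the expected gain beats saving budget.
policy = {(t, b): max((0, 1) if b > 0 else (0,), key=lambda a: Q[(t, b, a)])
          for t in range(N_TYPES) for b in range(BUDGET + 1)}
print(policy)
```

Under this toy model the learned rule treats a high-benefit (type-1) arrival even with one unit of budget left, but saves the last unit rather than spend it on a low-benefit (type-0) arrival, which is the kind of inter-temporal trade-off a static rule cannot express.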
DoMo-AC: Doubly Multi-step Off-policy Actor-Critic Algorithm
Multi-step learning applies lookahead over multiple time steps and has proved
valuable in policy evaluation settings. However, in the optimal control case,
the impact of multi-step learning has been relatively limited despite a number
of prior efforts. Fundamentally, this might be because multi-step policy
improvements require operations that cannot be approximated by stochastic
samples, hence hindering the widespread adoption of such methods in practice.
To address such limitations, we introduce doubly multi-step off-policy VI
(DoMo-VI), a novel oracle algorithm that combines multi-step policy
improvements and policy evaluations. DoMo-VI enjoys guaranteed convergence
speed-up to the optimal policy and is applicable in general off-policy learning
settings. We then propose doubly multi-step off-policy actor-critic (DoMo-AC),
a practical instantiation of the DoMo-VI algorithm. DoMo-AC introduces a
bias-variance trade-off that yields improved policy gradient estimates. When
combined with the IMPALA architecture, DoMo-AC has shown improvements over the
baseline algorithm on the Atari-57 game benchmarks.
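The multi-step off-policy evaluation that such methods build on can be sketched in a few lines. This is not DoMo-AC; it is a minimal per-decision importance-sampling estimator on an invented three-step problem, showing how returns collected under a behaviour policy can estimate the value of a different target policy. All policies and rewards here are assumptions for illustration.

```python
# Hypothetical sketch: per-decision importance sampling for multi-step
# off-policy policy evaluation. Rewards are r_t = a_t, transitions do not
# depend on actions, so the target value has a closed form to check against.
import random

random.seed(1)

GAMMA, T, N = 0.9, 3, 20000
PI, MU = 0.9, 0.5   # P(a = 1) under the target and behaviour policies

def true_value():
    # With r_t = a_t, the target policy's value is sum_t gamma^t * PI.
    return sum(GAMMA ** t * PI for t in range(T))

total = 0.0
for _ in range(N):
    rho_prod, g = 1.0, 0.0
    for t in range(T):
        a = 1 if random.random() < MU else 0            # behaviour action
        rho_prod *= (PI / MU) if a == 1 else ((1 - PI) / (1 - MU))
        g += GAMMA ** t * rho_prod * a                  # IS-weighted reward
    total += g
estimate = total / N
print(round(estimate, 3), round(true_value(), 3))
```

The per-decision correction weights each reward only by the importance ratios of the actions taken up to that step, so the estimator is unbiased; the growing products of ratios over longer horizons are exactly the variance problem that truncation schemes, and the bias-variance trade-off mentioned in the abstract, are designed to tame.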
- …