
Dual Representations for Dynamic Programming

By Tao Wang, Daniel Lizotte, Michael Bowling and Dale Schuurmans


We propose a dual approach to dynamic programming and reinforcement learning, based on maintaining an explicit representation of visit distributions rather than value functions. An advantage of working in the dual is that it allows one to exploit techniques for representing, approximating, and estimating probability distributions, while also avoiding any risk of divergence. We begin by formulating a modified dual of the standard linear program that guarantees the solution is a globally normalized visit distribution. Using this alternative representation, we then derive dual forms of dynamic programming, including on-policy updating, policy improvement, and off-policy updating, and furthermore show how to incorporate function approximation. We then investigate the convergence properties of these algorithms, both theoretically and empirically, and show that the dual approach remains stable in situations where primal value function approximation diverges. Overall, the dual approach provides a viable alternative to standard dynamic programming techniques and opens new avenues for developing algorithms for sequential decision making.
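To make the central object concrete, here is a minimal sketch (not the paper's algorithm) of the kind of quantity the dual representation maintains: for a fixed policy in a hypothetical small MDP, the normalized discounted state visit distribution d satisfies d^T = (1 - gamma) * mu^T (I - gamma * P_pi)^{-1}, and global normalization makes it a proper probability distribution over states. The transition matrix and initial distribution below are illustrative assumptions.

```python
import numpy as np

gamma = 0.9
# Hypothetical 3-state Markov chain induced by a fixed policy (assumed values).
P_pi = np.array([[0.5, 0.5, 0.0],
                 [0.1, 0.6, 0.3],
                 [0.0, 0.2, 0.8]])
mu = np.array([1.0, 0.0, 0.0])  # initial state distribution

# Solve d^T (I - gamma * P_pi) = (1 - gamma) * mu^T for the visit distribution d.
d = (1 - gamma) * np.linalg.solve((np.eye(3) - gamma * P_pi).T, mu)

print(d)        # non-negative entries
print(d.sum())  # sums to 1: the global normalization the modified dual LP guarantees
```

Because P_pi is row-stochastic, (I - gamma * P_pi) maps the all-ones vector to (1 - gamma) * 1, so d always sums to one regardless of the chain; this normalization is what lets distribution-estimation machinery be applied directly.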

Topics: Sequential Decision Making, Dynamic Programming, Convergence, Approximation
Year: 2008
OAI identifier: oai:CiteSeerX.psu:
Provided by: CiteSeerX