Reinforcement learning presents an attractive paradigm to reason about
several distinct aspects of sequential decision making, such as specifying
complex goals, planning future observations and actions, and critiquing their
utilities. However, the combined integration of these capabilities poses
competing algorithmic challenges in retaining maximal expressivity while
allowing for flexibility in modeling choices for efficient learning and
inference. We present Decision Stacks, a generative framework that decomposes
goal-conditioned policy agents into three generative modules. These modules
simulate the temporal evolution of observations, rewards, and actions via
independent generative models that can be learned in parallel via teacher
forcing. Our framework guarantees both expressivity and flexibility in
designing individual modules to account for key factors such as architectural
bias, optimization objective and dynamics, transferability across domains, and
inference speed. Our empirical results demonstrate the effectiveness of
Decision Stacks for offline policy optimization for several MDP and POMDP
environments, outperforming existing methods and enabling flexible generative
decision making.

Comment: published at NeurIPS 2023, project page: https://siyan-zhao.github.io/decision-stacks
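The three-module decomposition described above can be illustrated with a minimal sketch, assuming PyTorch. The module names, MLP parameterization, and exact conditioning variables below are illustrative assumptions, not the authors' implementation; in the framework each module may be any generative model, and the key point is that with teacher forcing every module is trained only on ground-truth inputs from the offline dataset, so the three losses are independent and can be optimized in parallel.

```python
# Minimal sketch (assumed structure, not the paper's code): three independent
# generative modules for observations, rewards, and actions.
import torch
import torch.nn as nn


class MLP(nn.Module):
    """Simple stand-in for a generative module; the framework allows richer models."""
    def __init__(self, in_dim, out_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)


obs_dim, act_dim, goal_dim = 17, 6, 17  # illustrative dimensions

obs_model = MLP(obs_dim + goal_dim, obs_dim)      # models o_{t+1} given o_t, goal
rew_model = MLP(2 * obs_dim + goal_dim, 1)        # models r_t given o_t, o_{t+1}, goal
act_model = MLP(2 * obs_dim + 1, act_dim)         # models a_t given o_t, o_{t+1}, r_t


def teacher_forcing_losses(batch):
    """Each loss uses only ground-truth inputs from the offline trajectories,
    so the three modules can be trained independently and in parallel."""
    o, o_next, a, r, g = (batch[k] for k in ("obs", "obs_next", "act", "rew", "goal"))
    l_obs = ((obs_model(torch.cat([o, g], -1)) - o_next) ** 2).mean()
    l_rew = ((rew_model(torch.cat([o, o_next, g], -1)) - r) ** 2).mean()
    l_act = ((act_model(torch.cat([o, o_next, r], -1)) - a) ** 2).mean()
    return l_obs, l_rew, l_act
```

At inference time, the modules would instead be chained: the observation module rolls out future observations conditioned on the goal, the reward module scores them, and the action module extracts actions consistent with the predicted observations and rewards; the exact conditioning used here is an assumption for illustration.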