Model-based reinforcement learning (MBRL) has been a primary approach to
mitigating the sample-efficiency issue as well as to building a generalist agent.
However, there has not been much effort toward enhancing the strategy of
dreaming itself. Therefore, it remains an open question whether and how an
agent can "dream better" in a more structured and strategic way. In this paper,
inspired
by the observation from cognitive science suggesting that humans use a spatial
divide-and-conquer strategy in planning, we propose a new MBRL agent, called
Dr. Strategy, which is equipped with a novel Dreaming Strategy. The proposed
agent realizes a divide-and-conquer-like strategy in dreaming. This
is achieved by learning a set of latent landmarks and then utilizing these to
learn a landmark-conditioned highway policy. With the highway policy, the agent
first learns in the dream to move to a landmark, and from there it tackles
exploration and achievement tasks in a more focused manner. In experiments, we
show that the proposed model outperforms prior pixel-based MBRL methods in
various visually complex and partially observable navigation tasks.

Comment: First two authors contributed equally.
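To make the two-phase dreaming idea concrete, below is a minimal, hypothetical sketch of the divide-and-conquer dream rollout the abstract describes. All names here (imagine_step, highway_policy, focused_policy, the stubbed landmark set) are illustrative assumptions, not the paper's actual API; the landmarks and policies are learned in the real method, whereas this sketch stubs them with simple stand-ins.

```python
# Hypothetical sketch of divide-and-conquer dreaming: travel to a latent
# landmark with a landmark-conditioned "highway" policy, then explore locally.
# Not the paper's implementation; all components are stand-in stubs.
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8
NUM_LANDMARKS = 4
HORIZON = 15

# A set of latent landmarks; in the paper these are learned from the agent's
# latent state space, here they are fixed random vectors for illustration.
landmarks = rng.normal(size=(NUM_LANDMARKS, LATENT_DIM))

def imagine_step(state, action):
    """Stub latent transition (stand-in for a learned world model)."""
    return state + 0.1 * action

def highway_policy(state, landmark):
    """Landmark-conditioned policy: steer the latent state toward the landmark."""
    return landmark - state  # proportional move toward the landmark

def focused_policy(state):
    """Local exploration/achievement policy applied once near a landmark."""
    return rng.normal(scale=0.1, size=state.shape)

def dream_rollout(init_state, landmark_id):
    """Two-phase dream: highway travel to a landmark, then focused exploration."""
    state = init_state
    # Phase 1: ride the highway policy to the chosen landmark in imagination.
    for _ in range(HORIZON):
        state = imagine_step(state, highway_policy(state, landmarks[landmark_id]))
    # Phase 2: tackle exploration/achievement locally around the landmark.
    trajectory = [state]
    for _ in range(HORIZON):
        state = imagine_step(state, focused_policy(state))
        trajectory.append(state)
    return trajectory

start = np.zeros(LATENT_DIM)
traj = dream_rollout(start, landmark_id=2)
print(f"distance to landmark after highway phase: "
      f"{np.linalg.norm(traj[0] - landmarks[2]):.3f}")
```

The design point the sketch illustrates is the division of labor: the highway policy amortizes long-range travel to reusable landmarks, so the exploration/achievement policy only has to operate over short, focused horizons from those landmarks rather than from scratch.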