The conventional model for online planning under uncertainty assumes that an
agent can stop and plan without incurring costs for the time spent planning.
However, planning time is not free in most real-world settings. For example, an
autonomous drone is subject to nature's forces, like gravity, even while it
thinks, and must either pay a price for counteracting these forces to stay in
place, or grapple with the state change caused by acquiescing to them. Policy
optimization in these settings requires metareasoning---a process that trades
off the cost of planning and the potential policy improvement that can be
achieved. We formalize and analyze the metareasoning problem for Markov
Decision Processes (MDPs). Our work subsumes previously studied special cases
of metareasoning and shows that in the general case, metareasoning is at most
polynomially harder than solving MDPs with any given algorithm that disregards
the cost of thinking. For reasons we discuss, optimal general metareasoning
turns out to be impractical, motivating approximations. We present approximate
metareasoning procedures that rely on special properties of the Bounded
Real-Time Dynamic Programming (BRTDP) planning algorithm and explore the
effectiveness of our methods on a variety of
problems.