Partially observable Markov decision processes (POMDPs) provide an elegant
mathematical framework for modeling complex decision and planning problems in
stochastic domains in which states of the system are observable only
indirectly, via a set of imperfect or noisy observations. The modeling
advantage of POMDPs, however, comes at a price -- exact methods for solving
them are computationally very expensive and thus applicable in practice only to
very simple problems. We focus on efficient approximation (heuristic) methods
that attempt to alleviate the computational problem and trade off accuracy for
speed. We have two objectives here. First, we survey various approximation
methods, analyze their properties and relations, and provide new insights
into their differences. Second, we present a number of new approximation
methods and novel refinements of existing techniques. The theoretical results
are supported by experiments on a problem from the agent navigation domain.
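
For concreteness, the central quantity that both exact and approximate POMDP methods manipulate is the belief state, updated by the standard Bayes filter (a sketch in the usual notation, which this abstract does not itself define: T is the transition model, O the observation model, and b the current belief over states S):

\[
b^{a,o}(s') = \frac{O(o \mid s', a) \sum_{s \in S} T(s' \mid s, a)\, b(s)}{\Pr(o \mid b, a)},
\qquad
\Pr(o \mid b, a) = \sum_{s' \in S} O(o \mid s', a) \sum_{s \in S} T(s' \mid s, a)\, b(s).
\]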