    Approximations for partially observed Markov decision processes

    This chapter studies the finite-model approximation of discrete-time partially observed Markov decision processes. We find that, by performing the standard reduction in which a partially observed model is transformed into a belief-based fully observed model, the results of the preceding chapters can be applied and suitably generalized to obtain approximation results. The versatility of the approximation results under weak continuity conditions becomes particularly evident when investigating their applicability to the partially observed case. We also provide systematic procedures for the quantization of the set of probability measures on the state space of POMDPs, which is the state space of the associated belief-MDPs.
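
    To illustrate one concrete way such a quantization can be carried out, the sketch below (an assumption for illustration, not the chapter's own construction) discretizes the belief simplex of a finite-state POMDP with a uniform grid and maps an arbitrary belief to its nearest grid point in total-variation (L1) distance.

        import itertools

        def simplex_grid(num_states, resolution):
            """All beliefs whose entries are multiples of 1/resolution and sum to one
            (a uniform 'type lattice' on the probability simplex); illustrative choice."""
            grid = []
            for counts in itertools.product(range(resolution + 1), repeat=num_states):
                if sum(counts) == resolution:
                    grid.append(tuple(c / resolution for c in counts))
            return grid

        def quantize_belief(belief, grid):
            """Map a belief to the nearest grid point in total-variation (L1) distance."""
            return min(grid, key=lambda g: sum(abs(gi - bi) for gi, bi in zip(g, belief)))

        # Example: a 3-state POMDP belief quantized at resolution 10.
        grid = simplex_grid(num_states=3, resolution=10)
        print(quantize_belief((0.23, 0.51, 0.26), grid))   # -> (0.2, 0.5, 0.3)

    The grid size grows combinatorially in the number of states and the resolution, so this is only a simple finite discretization of the belief space, not a statement about the approximation guarantees developed in the chapter.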