
Optimal Control of Partially Observable Piecewise Deterministic Markov Processes

Abstract

In this paper we consider a control problem for a Partially Observable Piecewise Deterministic Markov Process of the following type: after each jump of the process the controller receives a noisy signal about the state, and the aim is to control the process continuously in time so that the expected discounted cost of the system is minimized. We solve this optimization problem by reducing it to a discrete-time Markov Decision Process. This includes the derivation of a filter for the unobservable state. Imposing sufficient continuity and compactness assumptions, we prove the existence of optimal policies and show that the value function satisfies a fixed point equation. A generic application is given to illustrate the results.
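To make the fixed-point statement concrete, the value function of the reduced discrete-time Markov Decision Process can be expected to satisfy a Bellman-type equation of roughly the following schematic form; the notation below is illustrative and not taken from the paper. The state of the reduced problem is the filter \(\mu\), i.e. the conditional distribution of the unobservable post-jump state given the observed noisy signals, and one looks for \(v\) with

\[
  v(\mu) \;=\; \inf_{a}\Bigl\{\, c(\mu,a) \;+\; \int v\bigl(\Phi(\mu,a,y)\bigr)\, Q^{\beta}(dy \mid \mu,a) \Bigr\},
\]

where \(a\) ranges over admissible controls applied until the next jump, \(c(\mu,a)\) stands for the expected discounted cost accumulated up to that jump, \(\Phi(\mu,a,y)\) is the filter update driven by the next noisy signal \(y\), and \(Q^{\beta}\) is a discounted transition kernel for the signals. The precise definitions of these ingredients are given in the paper; the display above is only a sketch of the structure of such a fixed point equation.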
