
    Optimal control and optimal sensor activation for Markov decision problems with costly observations

    This paper considers partially observed Markov decision processes. Besides the classical control decisions that influence the transition probabilities of the Markov process, we also consider control actions that activate sensors to provide more or less accurate information about the system state, explicitly accounting for the cost of activating them. We synthesize control laws that minimize a discounted operating cost of the system over an infinite horizon, where the instantaneous cost depends on the current state, the control influencing the transition probabilities, and the control actions activating the sensors. No general computationally efficient optimal solution to this problem is known. We therefore design suboptimal controllers that use only knowledge of the value function of the full-state-information Markov decision problem. Our solution guarantees that the discounted cost of operating the plant exceeds the minimal cost of the full-state-information problem by at most a bounded amount. Implementing these control laws online requires a new concept of pinned conditional distributions of the state given the observed history of the plant.
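    The flavor of such a controller can be sketched in a few lines. The example below is an illustrative assumption, not the paper's construction: it builds a toy two-state, two-action discounted MDP (all numbers invented), solves the full-state-information problem by value iteration, and then runs a certainty-equivalent controller that acts on a belief (conditional state distribution) and activates a hypothetical perfect sensor only when a one-step value-of-information heuristic exceeds the sensing cost. A standard Bayes predict step stands in for the paper's pinned conditional distributions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 2-state, 2-action discounted MDP; all numbers are illustrative
    # assumptions, not taken from the paper.
    gamma = 0.9
    P = np.array([[[0.9, 0.1],          # P[a, s, s']: transition probabilities
                   [0.2, 0.8]],
                  [[0.5, 0.5],
                   [0.6, 0.4]]])
    c = np.array([[0.0, 1.0],           # c[s, a]: per-stage cost
                  [2.0, 0.5]])
    sensor_cost = 0.05                  # cost of activating a (perfect) sensor

    # 1) Value iteration for the full-state-information problem.
    V = np.zeros(2)
    for _ in range(2000):
        Q = c + gamma * np.einsum('ast,t->sa', P, V)   # Q[s, a]
        V_new = Q.min(axis=1)
        if np.abs(V_new - V).max() < 1e-12:
            V = V_new
            break
        V = V_new

    def control(belief):
        """Certainty-equivalent action from the full-information Q-values."""
        return int(np.argmin(belief @ Q))

    def should_sense(belief):
        """One-step value-of-information test: sense only when knowing the
        state would reduce the expected Q-cost by more than the sensor cost."""
        voi = (belief @ Q).min() - (belief * Q.min(axis=1)).sum()
        return voi > sensor_cost

    # 2) Closed-loop simulation under partial observation.
    state = 0
    belief = np.array([0.5, 0.5])
    total, disc = 0.0, 1.0
    for t in range(50):
        if should_sense(belief):
            total += disc * sensor_cost
            belief = np.eye(2)[state]          # perfect observation: belief collapses
        a = control(belief)
        total += disc * c[state, a]
        state = rng.choice(2, p=P[a, state])   # true (hidden) state transition
        belief = belief @ P[a]                 # Bayes predict step
        disc *= gamma
    print(f"discounted cost over 50 steps: {total:.3f}")
    ```

    The controller never solves the intractable partially observed problem; it reuses the full-information Q-values, which is the structural idea the abstract describes, while the sensing rule and belief update here are simplified stand-ins.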