
    Distance-Penalized Active Learning via Markov Decision Processes

    We consider the problem of active learning in the context of spatial sampling, where measurements are obtained by a mobile sampling unit. The goal is to localize the change point of a one-dimensional threshold classifier while minimizing the total sampling time, which is a function of both the cost of sampling and the distance traveled. In this paper, we present a general framework for active learning by modeling the search problem as a Markov decision process. Using this framework, we present time-optimal algorithms for the spatial sampling problem when there is a uniform prior on the change point, when there is a known non-uniform prior on the change point, and when the unit must return to the origin for intermittent battery recharging. We demonstrate through simulations that our proposed algorithms significantly outperform existing methods while maintaining a low computational cost.
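
    A minimal sketch of the uniform-prior case, not the paper's algorithm: the search is cast as a Markov decision process whose state is the sampler's current position together with the interval still known to contain the change point, whose actions are the candidate locations to measure next, and whose one-step cost is the travel distance plus a fixed per-sample cost. The grid size N, the sample cost C_SAMPLE, and the tolerance TOL below are illustrative assumptions; the optimal policy is computed by memoized dynamic programming in Python.

    from functools import lru_cache

    N = 50           # grid resolution over the unit interval [0, 1] (assumed)
    C_SAMPLE = 0.05  # time cost of one measurement, in the same units as travel distance (assumed)
    TOL = 1          # stop once the change point is pinned down to TOL grid cells (assumed)

    @lru_cache(maxsize=None)
    def expected_cost(pos, lo, hi):
        # Minimum expected remaining time from grid position `pos` when the
        # change point is uniformly distributed over the cells in (lo, hi].
        if hi - lo <= TOL:
            return 0.0, None
        best_cost, best_action = float("inf"), None
        for x in range(lo + 1, hi):                 # candidate sampling locations
            travel = abs(x - pos) / N
            # Under the uniform prior, the label observed at x shrinks the
            # interval to (lo, x] with probability (x - lo)/(hi - lo),
            # and to (x, hi] otherwise.
            p_left = (x - lo) / (hi - lo)
            cost = (travel + C_SAMPLE
                    + p_left * expected_cost(x, lo, x)[0]
                    + (1 - p_left) * expected_cost(x, x, hi)[0])
            if cost < best_cost:
                best_cost, best_action = cost, x
        return best_cost, best_action

    if __name__ == "__main__":
        cost, first_sample = expected_cost(0, 0, N)
        print(f"expected total time: {cost:.3f}, first sample at grid point {first_sample} of {N}")

    Because every sample splits the uncertainty interval at the measured point, the reachable states are pairs of interval endpoints, so memoization keeps the computation modest even for a fine grid.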