Asymptotically Optimal Sampling Policy for Quickest Change Detection with Observation-Switching Cost
We consider the problem of quickest change detection (QCD) in a signal whose
observations are obtained using a set of actions, where switching from one
action to another incurs a cost. The objective is to design a stopping rule
consisting of a sampling policy, which determines the sequence of actions used
to observe the signal, and a stopping time to quickly detect the change,
subject to a constraint on the average observation-switching cost. We propose
an open-loop sampling policy of finite window size and a generalized likelihood
ratio (GLR) Cumulative Sum (CuSum) stopping time for the QCD problem. We show
that the GLR CuSum stopping time is asymptotically optimal with a properly
designed sampling policy and formulate the design of this sampling policy as a
quadratic programming problem. We prove that, when designing policies of
finite window size, it suffices to consider policies of window size at most
one, and we propose several algorithms that solve this optimization problem
with theoretical guarantees. For observation-dependent policies, we propose a
-threshold stopping time and an observation-dependent sampling policy. We
present a method to design the observation-dependent sampling policy based on
open-loop sampling policies. Finally, we apply our approach to the problem of
QCD of a partially observed graph signal and empirically demonstrate the
performance of our proposed stopping times.
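As a rough illustration of the CuSum machinery underlying the proposed stopping times, the sketch below runs a standard CuSum recursion for a Gaussian mean shift with both pre- and post-change distributions known. The paper's GLR CuSum additionally maximizes the likelihood ratio over unknown post-change parameters and accounts for the action-dependent observations and switching-cost constraint, which this toy example omits; all function names and parameter values here are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def cusum_stopping_time(x, mu0=0.0, mu1=1.0, sigma=1.0, threshold=8.0):
    """Simple CuSum stopping time for a Gaussian mean shift mu0 -> mu1.

    Illustrative sketch only: assumes a single observation action and
    fully known pre/post-change distributions, unlike the GLR CuSum in
    the paper, which handles unknown post-change parameters.
    """
    W = 0.0
    for n, xn in enumerate(x, start=1):
        # Log-likelihood ratio of N(mu1, sigma^2) vs N(mu0, sigma^2)
        llr = ((mu1 - mu0) * xn - 0.5 * (mu1**2 - mu0**2)) / sigma**2
        # CuSum recursion: reset at zero, accumulate evidence of change
        W = max(0.0, W + llr)
        if W >= threshold:
            return n  # declare a change at sample index n
    return None  # no change declared within the observed samples
```

For example, on a sequence whose mean shifts from 0 to 1 at sample 100, the statistic typically stays near zero before the change and crosses the threshold shortly after it; the threshold trades off detection delay against the false-alarm rate.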