Detecting State Transitions of a Markov Source: Sampling Frequency and Age Trade-off
We consider a finite-state Discrete-Time Markov Chain (DTMC) source that can
be sampled for detecting the events when the DTMC transits to a new state. Our
goal is to study the trade-off between sampling frequency and staleness in
detecting the events. We argue that, for the problem at hand, using Age of
Information (AoI) to quantify the staleness of a sample is conservative, and we
therefore introduce an \textit{age penalty} for this purpose. We study two
optimization problems: minimize average age penalty subject to an average
sampling frequency constraint, and minimize average sampling frequency subject
to an average age penalty constraint; both are Constrained Markov Decision
Problems. We solve them using a linear programming approach and compute Markov
policies that are optimal among all causal policies. Our numerical results
demonstrate that the computed Markov policies not only outperform optimal
periodic sampling policies, but also achieve sampling frequencies close to or
lower than that of an optimal clairvoyant (non-causal) sampling policy, if a
small age penalty is allowed.

Comment: 6 pages, published in IEEE INFOCOM AoI Workshop 202
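As a rough illustration of the linear programming approach the abstract refers to, the sketch below solves a toy two-state CMDP through its occupation-measure LP and recovers a randomized Markov policy. The model, transition probabilities, cost functions, and the frequency budget `f_max` are all hypothetical placeholders, not the paper's actual formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy CMDP (illustrative numbers, not from the paper).
# States: 0 = "in sync", 1 = "undetected transition"; actions: 0 = idle, 1 = sample.
nS, nA = 2, 2
P = np.zeros((nS, nA, nS))          # P[s, a, s'] transition probabilities
P[0, 0] = [0.7, 0.3]                # idle: the source may transit unnoticed
P[1, 0] = [0.0, 1.0]                # idle while out of sync: stay out of sync
P[0, 1] = [0.7, 0.3]                # sampling while in sync changes nothing
P[1, 1] = [0.7, 0.3]                # sampling detects the transition, resyncs

penalty = np.array([[0.0, 0.0],     # penalty[s, a]: age penalty per slot,
                    [1.0, 0.0]])    # paid while out of sync and not sampling
freq    = np.array([[0.0, 1.0],     # freq[s, a]: 1 whenever we sample
                    [0.0, 1.0]])
f_max = 0.25                        # average sampling-frequency budget

# Occupation-measure LP over x[s, a] >= 0 (flattened as s * nA + a):
#   minimize   sum_{s,a} x[s,a] * penalty[s,a]
#   subject to flow balance, normalization, average frequency <= f_max
idx = lambda s, a: s * nA + a
c = penalty.reshape(-1)

A_eq = np.zeros((nS + 1, nS * nA))
for s2 in range(nS):                # flow balance at each state s'
    for s in range(nS):
        for a in range(nA):
            A_eq[s2, idx(s, a)] += P[s, a, s2]
    for a in range(nA):
        A_eq[s2, idx(s2, a)] -= 1.0
A_eq[nS, :] = 1.0                   # occupation measure sums to 1
b_eq = np.zeros(nS + 1)
b_eq[nS] = 1.0

A_ub = freq.reshape(1, -1)          # average sampling-frequency constraint
b_ub = [f_max]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
x = res.x.reshape(nS, nA)
pi = x / x.sum(axis=1, keepdims=True)  # randomized Markov policy pi(a | s)
print("average age penalty:", res.fun)
print("policy:\n", pi)
```

The key point the LP makes concrete is that the optimal policy under a constraint is generally randomized: here the solver samples with some probability in the out-of-sync state so that the frequency budget binds exactly.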