1,743 research outputs found
Optimal Causal Rate-Constrained Sampling for a Class of Continuous Markov Processes
Consider the following communication scenario. An encoder observes a stochastic process and causally decides when and what to transmit about it, under a constraint on bits transmitted per second. A decoder uses the received codewords to causally estimate the process in real time. The encoder and the decoder are synchronized in time. We aim to find the optimal encoding and decoding policies that minimize the end-to-end estimation mean-square error under the rate constraint. For a class of continuous Markov processes satisfying regularity conditions, we show that the optimal encoding policy transmits a 1-bit codeword once the process innovation passes one of two thresholds. The optimal decoder noiselessly recovers the last sample from the 1-bit codewords and codeword-generating time stamps, and uses it as the running estimate of the current process, until the next codeword arrives. In particular, we show the optimal causal code for the Ornstein-Uhlenbeck process and calculate its distortion-rate function.
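The threshold-triggered scheme this abstract describes can be sketched in simulation: the encoder sends one sign bit whenever the innovation (process minus the decoder's running estimate) crosses a symmetric threshold, and the decoder, knowing the threshold and the timestamp, recovers the sample and holds it. All parameter values below are illustrative assumptions, not taken from the paper.

```python
import math
import random

def simulate_ou_threshold_code(theta=1.0, sigma=1.0, dt=1e-3, T=10.0,
                               threshold=0.5, seed=0):
    """Toy event-triggered 1-bit coding of an Ornstein-Uhlenbeck process.

    The encoder transmits a single sign bit when the innovation crosses
    +/- threshold; the decoder recovers the sample (idealized as exactly
    x_hat +/- threshold, as in continuous time) and holds it until the
    next bit. Returns the empirical bit rate and time-averaged MSE.
    """
    rng = random.Random(seed)
    x = 0.0        # OU process state at the encoder
    x_hat = 0.0    # decoder's running estimate (last recovered sample)
    n_steps = int(T / dt)
    n_bits = 0
    sq_err = 0.0
    for _ in range(n_steps):
        # Euler-Maruyama step of dX = -theta * X dt + sigma dW
        x += -theta * x * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        innovation = x - x_hat
        if abs(innovation) >= threshold:
            # One bit: the sign of the innovation. With the timestamp and
            # the known threshold, the decoder places the new estimate at
            # the crossed threshold (exact in the continuous-time limit).
            x_hat += threshold if innovation > 0 else -threshold
            n_bits += 1
        sq_err += (x - x_hat) ** 2
    return n_bits / T, sq_err / n_steps  # bits per second, mean-square error
```

In this discretization the innovation is reset whenever it reaches the threshold, so the squared error stays below `threshold**2` up to the per-step overshoot; that is the mechanism behind the distortion-rate trade-off the paper characterizes exactly.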
Goal-oriented Estimation of Multiple Markov Sources in Resource-constrained Systems
This paper investigates goal-oriented communication for remote estimation of
multiple Markov sources in resource-constrained networks. An agent selects the
update order of the sources and transmits the packet to a remote destination
over an unreliable delay channel. The destination is tasked with source
reconstruction for the purpose of actuation. We utilize the metric cost of
actuation error (CAE) to capture the significance (semantics) of error at the
point of actuation. We aim to find an optimal sampling policy that minimizes
the time-averaged CAE subject to average resource constraints. We formulate
this problem as an average-cost constrained Markov Decision Process (CMDP) and
transform it into an unconstrained MDP by utilizing Lyapunov drift techniques.
Then, we propose a low-complexity drift-plus-penalty (DPP) policy for systems
with known source/channel statistics and a Lyapunov optimization-based deep
reinforcement learning (LO-DRL) policy for unknown environments. Our policies
achieve near-optimal performance in CAE minimization and significantly reduce
the number of uninformative transmissions.
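The drift-plus-penalty idea mentioned above can be sketched with a toy single-source model: a virtual queue tracks violation of the average-resource budget, and each slot the agent greedily minimizes a weighted sum of penalty (here an age-like estimation error standing in for the paper's CAE metric, which is not reproduced) and queue-weighted resource usage. The cost model and parameters below are illustrative assumptions.

```python
def drift_plus_penalty_schedule(n_slots=10_000, budget=0.3, V=10.0):
    """Toy drift-plus-penalty (DPP) scheduler (deterministic sketch).

    Each slot the agent either transmits (resource cost 1, resetting the
    estimation error) or idles (error grows by 1). A virtual queue Q
    penalizes average resource usage above `budget`; the per-slot choice
    minimizes V * error_after + Q * resource_used.
    """
    Q = 0.0          # virtual queue for the average-resource constraint
    error = 0        # age-like estimation error at the destination
    total_error = 0
    total_tx = 0
    for _ in range(n_slots):
        cost_tx = V * 0 + Q * 1.0      # transmitting resets the error
        cost_idle = V * (error + 1)    # idling lets the error grow
        transmit = cost_tx <= cost_idle
        if transmit:
            error = 0
            total_tx += 1
        else:
            error += 1
        total_error += error
        # Virtual queue: grows when usage exceeds the budget, else drains
        Q = max(Q + (1.0 if transmit else 0.0) - budget, 0.0)
    return total_tx / n_slots, total_error / n_slots
```

Because Q inflates whenever the long-run transmission rate exceeds the budget, the greedy rule is pushed toward satisfying the constraint; the parameter V trades constraint slack against penalty, which is the standard Lyapunov-optimization behavior the DPP policy in the paper builds on.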