Context-aware Status Updating: Wireless Scheduling for Maximizing Situational Awareness in Safety-critical Systems
In this study, we investigate a context-aware status updating system
consisting of multiple sensor-estimator pairs. A centralized monitor pulls
status updates from multiple sensors that are monitoring several
safety-critical situations (e.g., carbon monoxide density in forest fire
detection, machine safety in industrial automation, and road safety). Based on
the received sensor updates, multiple estimators determine the current
safety-critical situations. Due to transmission errors and limited
communication resources, the sensor updates may not be timely, resulting in the
possibility of misunderstanding the current situation. In particular, if a
dangerous situation is misinterpreted as safe, the safety risk is high. In this
paper, we introduce a novel framework that quantifies the penalty due to the
unawareness of a potentially dangerous situation. This situation-unaware
penalty function depends on two key factors: the Age of Information (AoI) and
the observed signal value. For optimal estimators, we provide an
information-theoretic bound on the penalty function that characterizes the
fundamental performance limit of the system. To minimize the penalty, we study
a pull-based multi-sensor, multi-channel transmission scheduling problem. Our
analysis reveals that for optimal estimators, it is always beneficial to keep
the channels busy. Due to communication resource constraints, the scheduling
problem can be modelled as a Restless Multi-armed Bandit (RMAB) problem. By
utilizing relaxation and Lagrangian decomposition of the RMAB, we provide a
low-complexity scheduling algorithm which is asymptotically optimal. Our
results hold for both reliable and unreliable channels. Numerical evidence
shows that our scheduling policy can achieve up to 100 times performance gain
over periodic updating and up to 10 times over a randomized policy.
Comment: 7 pages, 4 figures; part of this manuscript has been accepted by the
IEEE MILCOM 2023 Workshop on QuAVo
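The abstract's key object is a penalty that depends jointly on the Age of Information (AoI) and the last observed signal value. A minimal sketch of that idea, assuming a hypothetical exponential penalty shape and danger threshold (the paper's actual penalty comes from the optimal estimator, not this form):

```python
import math

# Hypothetical situation-unaware penalty: zero for clearly safe readings,
# growing in AoI when the last observed signal is near or above a danger
# threshold. The exponential form, threshold, and rate are illustrative
# assumptions, not the paper's definition.
def situation_unaware_penalty(aoi: int, signal: float,
                              danger_threshold: float = 0.5,
                              rate: float = 0.3) -> float:
    severity = max(0.0, signal - danger_threshold)  # hypothetical severity term
    return severity * (1.0 - math.exp(-rate * aoi))

# Standard AoI evolution: resets to 1 on a successful update, else increments.
def step_aoi(aoi: int, update_received: bool) -> int:
    return 1 if update_received else aoi + 1
```

Under this sketch, a stale dangerous reading (high signal, high AoI) incurs the largest penalty, which is the situation the abstract flags as the high-risk one.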
Learning and Communications Co-Design for Remote Inference Systems: Feature Length Selection and Transmission Scheduling
In this paper, we consider a remote inference system, where a neural network
is used to infer a time-varying target (e.g., robot movement), based on
features (e.g., video clips) that are progressively received from a sensing
node (e.g., a camera). Each feature is a temporal sequence of sensory data. The
learning performance of the system is determined by (i) the timeliness and (ii)
the temporal sequence length of the features, where we use Age of Information
(AoI) as a metric for timeliness. While a longer feature can typically provide
better learning performance, it often requires more channel resources for
sending the feature. To minimize the time-averaged inference error, we study a
learning and communication co-design problem that jointly optimizes feature
length selection and transmission scheduling. When there is a single
sensor-predictor pair and a single channel, we develop low-complexity optimal
co-designs for both the cases of time-invariant and time-variant feature
length. When there are multiple sensor-predictor pairs and multiple channels,
the co-design problem becomes a restless multi-armed, multi-action bandit
that is PSPACE-hard. For this setting, we design a low-complexity algorithm to
solve the problem. Trace-driven evaluations suggest that the proposed
co-designs can significantly reduce the time-averaged inference error of remote
inference systems.
Comment: 41 pages, 8 figures. The manuscript has been submitted to the IEEE
Journal on Selected Areas in Information Theory
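The single-pair, time-invariant case of the abstract's trade-off can be sketched directly: a longer feature lowers per-inference error but occupies more channel slots, which raises the AoI seen at the predictor. The error model below is an illustrative assumption (the paper learns the error from data), and the AoI cycle assumes back-to-back transmissions over a reliable slot channel:

```python
# Sketch of time-invariant feature-length selection: a length-l feature takes
# l slots to send, so under back-to-back sends the receiver's AoI cycles
# through l, l+1, ..., 2l-1. We pick the length minimizing the time-averaged
# error err(length, aoi).
def best_feature_length(max_len, err):
    def avg_error(l):
        aois = range(l, 2 * l)  # AoI values over one transmission cycle
        return sum(err(l, a) for a in aois) / l
    return min(range(1, max_len + 1), key=avg_error)

# Hypothetical error model: decreasing in feature length, increasing in AoI.
example_err = lambda l, a: 1.0 / l + 0.05 * a
```

For this toy error model the optimum is interior: very short features are inaccurate, very long ones are stale, which is exactly the tension the co-design resolves.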
Uncertainty-of-Information Scheduling: A Restless Multi-armed Bandit Framework
This paper proposes using the uncertainty of information (UoI), measured by
Shannon's entropy, as a metric for information freshness. We consider a system
in which a central monitor observes multiple binary Markov processes through a
communication channel. The UoI of a Markov process corresponds to the monitor's
uncertainty about its state. At each time step, only one Markov process can be
selected to update its state to the monitor; hence there is a tradeoff among
the UoIs of the processes that depends on the scheduling policy used to select
the process to be updated. The age of information (AoI) of a process
corresponds to the time since its last update. In general, the associated UoI
can be a non-increasing function, or even an oscillating function, of its AoI,
making the scheduling problem particularly challenging. This paper investigates
scheduling policies that aim to minimize the average sum-UoI of the processes
over the infinite time horizon. We formulate the problem as a restless
multi-armed bandit (RMAB) problem, and develop a Whittle index policy that is
near-optimal for the RMAB after proving its indexability. We further provide an
iterative algorithm to compute the Whittle index for the practical deployment
of the policy. Although this paper focuses on UoI scheduling, our results apply
to a general class of RMABs for which the UoI scheduling problem is a special
case. Specifically, this paper's Whittle index policy is valid for any RMAB in
which the bandits are binary Markov processes and the penalty is a concave
function of the belief state of the Markov process. Numerical results
demonstrate the excellent performance of the Whittle index policy for this
class of RMABs.
Comment: 28 pages, 5 figures
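The UoI of a binary Markov process, as described above, is the Shannon entropy of the monitor's belief about the state, and the belief evolves under the transition matrix between updates. A minimal sketch with illustrative transition probabilities (p01, p10 are placeholders, not values from the paper):

```python
import math

# UoI of a belief b = P(state = 1): binary Shannon entropy in bits.
def entropy(b: float) -> float:
    if b in (0.0, 1.0):
        return 0.0
    return -b * math.log2(b) - (1 - b) * math.log2(1 - b)

# One-step belief evolution with no observation, for transition
# probabilities p01 = P(0 -> 1) and p10 = P(1 -> 0).
def evolve_belief(b: float, p01: float, p10: float) -> float:
    return b * (1 - p10) + (1 - b) * p01

# UoI k steps after a perfect observation that set the belief to b0
# (so k plays the role of the AoI).
def uoi_after_k_steps(b0: float, k: int, p01: float = 0.2,
                      p10: float = 0.3) -> float:
    b = b0
    for _ in range(k):
        b = evolve_belief(b, p01, p10)
    return entropy(b)
```

This also makes the abstract's point concrete: starting from b0 = 0.5 the UoI decreases as AoI grows (the belief drifts toward the stationary distribution, away from maximum entropy), so UoI need not be increasing in AoI.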
An Index Policy for Minimizing the Uncertainty-of-Information of Markov Sources
This paper focuses on the information freshness of finite-state Markov
sources, using the uncertainty of information (UoI) as the performance metric.
Measured by Shannon's entropy, UoI can capture not only the transition dynamics
of the Markov source but also the different evolutions of information quality
caused by the different values of the last observation. We consider an
information update system with M finite-state Markov sources transmitting
information to a remote monitor via m communication channels. Our goal is to
explore the optimal scheduling policy to minimize the sum-UoI of the Markov
sources. The problem is formulated as a restless multi-armed bandit (RMAB). We
relax the RMAB and then decouple the relaxed problem into M single bandit
problems. Analyzing the single bandit problem provides useful properties with
which the relaxed problem reduces to maximizing a concave and piecewise linear
function, allowing us to develop a gradient method to solve the relaxed problem
and obtain its optimal policy. By rounding up the optimal policy for the
relaxed problem, we obtain an index policy for the original RMAB problem.
Notably, the proposed index policy is universal in the sense that it applies to
general RMABs with bounded cost functions.
Comment: 55 pages
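The abstract reduces the relaxed problem to maximizing a concave, piecewise-linear function. A concave piecewise-linear function is the pointwise minimum of affine pieces, so a simple subgradient-ascent loop suffices; the pieces and step size below are illustrative, not the ones derived in the paper:

```python
# Sketch of a gradient (subgradient-ascent) method for maximizing a concave
# piecewise-linear function f(x) = min over pieces of (slope * x + intercept)
# on the interval [lo, hi]. With a constant step size the iterate settles
# within roughly lr of the maximizer.
def maximize_pw_linear(pieces, lo, hi, steps=2000, lr=0.01):
    x = (lo + hi) / 2.0
    for _ in range(steps):
        # the active (minimal) piece at x supplies a subgradient of f
        slope, _ = min(pieces, key=lambda p: p[0] * x + p[1])
        x = min(hi, max(lo, x + lr * slope))  # ascent step, clipped to [lo, hi]
    return x

# Example: f(x) = min(x, 2 - x) peaks at x = 1.
```

In the paper's setting the optimizer of this relaxed problem is then rounded into the index policy for the original RMAB; this sketch only illustrates the inner maximization step.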