Delay-Optimal User Scheduling and Inter-Cell Interference Management in Cellular Network via Distributive Stochastic Learning
In this paper, we propose a distributive queue-aware intra-cell user
scheduling and inter-cell interference (ICI) management control design for a
delay-optimal cellular downlink system with M base stations (BSs) and K users
in each cell. Each BS has K downlink queues for K users respectively with
heterogeneous arrivals and delay requirements. The ICI management control is
adaptive to joint queue state information (QSI) over a slow time scale, while
the user scheduling control is adaptive to both the joint QSI and the joint
channel state information (CSI) over a faster time scale. We show that the
problem can be modeled as an infinite horizon average cost Partially Observed
Markov Decision Problem (POMDP), which is NP-hard in general. By exploiting the
special structure of the problem, we shall derive an equivalent Bellman
equation to solve the POMDP problem. To address the distributive requirement
and the issue of dimensionality and computation complexity, we derive a
distributive online stochastic learning algorithm, which only requires local
QSI and local CSI at each of the M BSs. We show that the proposed learning
algorithm converges almost surely (with probability 1) and has significant gain
compared with various baselines. The proposed solution has only linear
complexity order O(MK).
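The distributive structure above can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the per-BS potential table `V`, the scheduling metric, and the one-slot backlog cost are all illustrative assumptions; the point is only that each BS updates its own estimates online from local QSI and local CSI with a diminishing step size, giving O(MK) work per slot.

```python
import numpy as np

rng = np.random.default_rng(1)

M, K = 3, 4            # M base stations, K users per cell (hypothetical sizes)
Q_MAX = 10             # finite queue-length grid for the local potential table

# Each BS m keeps per-user potential estimates V[m, k, q]: a learned
# "cost-to-go" of user k's queue at length q, driven only by local QSI/CSI.
V = np.zeros((M, K, Q_MAX + 1))

def schedule_user(m, qsi, csi, V):
    """Queue- and channel-aware intra-cell scheduling at BS m: pick the user
    with the best channel weighted by the marginal queue potential."""
    q = np.minimum(qsi, Q_MAX)
    marginal = V[m, np.arange(K), q] - V[m, np.arange(K), np.maximum(q - 1, 0)]
    return int(np.argmax(csi * marginal))

qsi = rng.integers(0, Q_MAX, size=(M, K))       # local queue state per BS
for n in range(1, 1001):
    eps = 1.0 / n                               # diminishing step size
    csi = rng.exponential(1.0, size=(M, K))     # local fast-fading CSI per BS
    for m in range(M):
        k = schedule_user(m, qsi[m], csi[m], V)
        cost = float(qsi[m].sum())              # one-slot cost observed locally
        q = min(qsi[m, k], Q_MAX)
        # Online stochastic-approximation update of the local potential entry.
        V[m, k, q] += eps * (cost - V[m, k, q])
        # Serve the scheduled user's queue; random arrivals elsewhere.
        qsi[m, k] = max(qsi[m, k] - 1, 0)
        arrivals = rng.integers(0, 2, size=K)
        qsi[m] = np.minimum(qsi[m] + arrivals, Q_MAX)
```

The 1/n step size is the standard stochastic-approximation choice behind almost-sure convergence claims of this kind; each iteration touches M*K table entries at most once, matching the stated linear complexity.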
Two Timescale Convergent Q-learning for Sleep-Scheduling in Wireless Sensor Networks
In this paper, we consider an intrusion detection application for Wireless
Sensor Networks (WSNs). We study the problem of scheduling the sleep times of
the individual sensors to maximize the network lifetime while keeping the
tracking error to a minimum. We formulate this problem as a
partially-observable Markov decision process (POMDP) with continuous
state-action spaces, in a manner similar to (Fuemmeler and Veeravalli [2008]).
However, unlike their formulation, we consider infinite horizon discounted and
average cost objectives as performance criteria. For each criterion, we propose
a convergent on-policy Q-learning algorithm that operates on two timescales,
while employing function approximation to handle the curse of dimensionality
associated with the underlying POMDP. Our proposed algorithm incorporates a
policy gradient update using a one-simulation simultaneous perturbation
stochastic approximation (SPSA) estimate on the faster timescale, while the
Q-value parameter (arising from a linear function approximation for the
Q-values) is updated in an on-policy temporal difference (TD) algorithm-like
fashion on the slower timescale. The feature selection scheme employed in each
of our algorithms manages the energy and tracking components in a manner that
assists the search for the optimal sleep-scheduling policy. For the sake of
comparison, in both discounted and average settings, we also develop a function
approximation analogue of the Q-learning algorithm. This algorithm, unlike the
two-timescale variant, does not possess theoretical convergence guarantees.
Finally, we also adapt our algorithms to include a stochastic iterative
estimation scheme for the intruder's mobility model. Our simulation results on
a 2-dimensional network setting suggest that our algorithms result in better
tracking accuracy at the cost of only a few additional sensors, in comparison
to a recent prior work.
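The two-timescale scheme described above can be sketched as follows. This is a toy sketch under stated assumptions, not the paper's implementation: `simulate_step`, the feature map `phi`, the cost model, and all dimensions are hypothetical stand-ins. What it shows is the coupling the abstract describes: a one-simulation SPSA policy-gradient update on the faster timescale (larger step sizes) and an on-policy TD-like update of linear Q-value weights on the slower timescale.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: d features for the Q-value approximation,
# p parameters for the sleep-scheduling policy.
d, p = 8, 4
w = np.zeros(d)        # linear Q-value weights, Q(s, a) ~= w . phi(s, a)
theta = np.zeros(p)    # policy parameters (e.g. sleep-time thresholds)

def phi(state, action):
    """Hypothetical feature map mixing energy and tracking components."""
    z = np.concatenate([state * action, state - action])
    return z[:d] if z.size >= d else np.pad(z, (0, d - z.size))

def simulate_step(state, theta, rng):
    """Stand-in for one step of a sensor-network simulator: returns the
    action under the (perturbed) policy, a single-stage cost, and the
    next partially observed state."""
    action = np.tanh(theta[: state.size])        # placeholder policy
    cost = float(np.sum(state**2) + 0.1 * np.sum(action**2))
    next_state = 0.9 * state + 0.1 * rng.standard_normal(state.size)
    return action, cost, next_state

gamma = 0.95
state = rng.standard_normal(p)
for n in range(1, 2001):
    a_n = 0.1 / n          # slower timescale: TD update of w
    b_n = 1.0 / n**0.6     # faster timescale: SPSA policy update
    delta = 0.5 / n**0.1   # SPSA perturbation magnitude

    # One-simulation SPSA: run only the perturbed policy theta + delta*Delta
    # and use the observed cost itself in the gradient estimate.
    Delta = rng.choice([-1.0, 1.0], size=p)      # Rademacher perturbation
    action, cost, next_state = simulate_step(state, theta + delta * Delta, rng)
    theta -= b_n * (cost / delta) * (1.0 / Delta)

    # On-policy TD(0)-like update of the linear Q-value weights.
    f, f_next = phi(state, action), phi(next_state, action)
    td_error = cost + gamma * w @ f_next - w @ f
    w += a_n * td_error * f

    state = next_state
```

Since a_n / b_n → 0, the TD recursion sees an almost-converged policy iterate, which is the usual argument for convergence of such two-timescale schemes; this sketch uses the discounted-cost criterion, with the average-cost variant differing only in the TD error.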