Online Reinforcement Learning of X-Haul Content Delivery Mode in Fog Radio Access Networks
We consider a Fog Radio Access Network (F-RAN) with a Base Band Unit (BBU) in
the cloud and multiple cache-enabled enhanced Remote Radio Heads (eRRHs). The
system aims at delivering contents on demand with minimal average latency from
a time-varying library of popular contents. Information about uncached
requested files can be transferred from the cloud to the eRRHs by following
either backhaul or fronthaul modes. The backhaul mode transfers fractions of
the requested files, while the fronthaul mode transmits quantized baseband
samples as in Cloud-RAN (C-RAN). The backhaul mode allows the caches of the
eRRHs to be updated, which may lower future delivery latencies. In contrast,
the fronthaul mode enables cooperative C-RAN transmissions that may reduce the
current delivery latency. Taking into account the trade-off between current and
future delivery performance, this paper proposes an adaptive selection method
between the two delivery modes to minimize the long-term delivery latency.
Assuming an unknown and time-varying popularity model, the method is based on
model-free Reinforcement Learning (RL). Numerical results confirm the
effectiveness of the proposed RL scheme.
Comment: 5 pages, 2 figures
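The trade-off described above (fronthaul lowers the current latency, backhaul updates caches and lowers future latencies) can be illustrated with a minimal tabular Q-learning sketch. Everything below is a hypothetical toy model for demonstration only: the state (a coarse cache-quality level), the latency values, and the learning parameters are assumptions, not taken from the paper.

```python
import random
from collections import defaultdict

ACTIONS = ["backhaul", "fronthaul"]

def simulate_latency(state, action):
    """Toy environment (assumed, not from the paper): fronthaul reduces the
    current latency via cooperative transmission, while backhaul is slower
    now but improves the cache state, reducing future latencies."""
    hit_rate = state / 10.0
    if action == "fronthaul":
        latency = 1.0 - 0.5 * hit_rate   # cooperative C-RAN gain now
        next_state = state               # caches unchanged
    else:
        latency = 1.5 - 0.5 * hit_rate   # slower delivery now...
        next_state = min(state + 1, 10)  # ...but caches improve
    return latency, next_state

def q_learning(steps=2000, alpha=0.1, gamma=0.9, eps=0.1):
    """Model-free Q-learning over delivery modes; latency is a cost,
    so the greedy action minimizes the Q-value."""
    Q = defaultdict(float)
    state = 0
    for _ in range(steps):
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = min(ACTIONS, key=lambda a: Q[(state, a)])
        latency, nxt = simulate_latency(state, action)
        best_next = min(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (latency + gamma * best_next
                                       - Q[(state, action)])
        state = nxt
    return Q
```

The point of the sketch is the structure of the decision problem, not the numbers: because the backhaul action changes the state, a myopic latency-minimizer and a long-term (discounted) one can disagree, which is exactly the gap the paper's RL scheme addresses.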
Finding optimal memoryless policies of POMDPs under the expected average reward criterion
In this paper, partially observable Markov decision processes (POMDPs) with discrete state and action spaces under the average-reward criterion are considered from a recently developed sensitivity-based point of view. By analyzing the average-reward performance difference formula, we propose a policy iteration algorithm with step sizes to obtain an optimal or locally optimal memoryless policy. The algorithm improves the policy along the same direction as standard policy iteration, and suitable step sizes guarantee its convergence. Moreover, the algorithm can be applied to Markov decision processes (MDPs) with correlated actions. Two numerical examples are provided to illustrate the applicability of the algorithm.
Keywords: POMDPs; performance difference; policy iteration with step sizes; correlated actions; memoryless policy
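The idea of improving a memoryless policy with a step size, rather than jumping straight to the greedy policy, can be sketched as follows. This is an illustrative approximation, not the paper's exact algorithm: the tiny POMDP (transition tensor `P`, rewards `R`, observation matrix `O`) is made up, and the improvement direction here is a crude observation-conditional value estimate standing in for the direction derived from the performance difference formula.

```python
import numpy as np

nS, nA, nO = 3, 2, 2
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(nS), size=(nS, nA))  # P[s, a] -> next-state dist
R = rng.uniform(size=(nS, nA))                 # expected reward r(s, a)
O = rng.dirichlet(np.ones(nO), size=nS)        # O[s] -> observation dist

def stationary_dist(pi):
    """Stationary state distribution under memoryless policy pi[o, a]."""
    pa = O @ pi                                # action probs per state
    T = np.einsum("sa,sat->st", pa, P)         # induced state transitions
    evals, evecs = np.linalg.eig(T.T)
    v = np.real(evecs[:, np.argmax(np.real(evals))])
    return np.abs(v) / np.abs(v).sum()

def avg_reward(pi):
    mu = stationary_dist(pi)
    return float(np.sum(mu[:, None] * (O @ pi) * R))

def improve(pi, eta=0.3, iters=50):
    """Policy iteration with step size eta: mix the current memoryless
    policy toward a greedy improvement instead of replacing it outright."""
    for _ in range(iters):
        mu = stationary_dist(pi)
        w = mu[:, None] * O                    # joint (state, obs) weight
        q = w.T @ R                            # crude value per (obs, action)
        greedy = np.eye(nA)[np.argmax(q, axis=1)]
        pi = (1 - eta) * pi + eta * greedy     # step-size update
    return pi

pi0 = np.full((nO, nA), 1.0 / nA)              # uniform initial policy
pi1 = improve(pi0)
```

The step size matters because, unlike in fully observable MDPs, a full greedy update on a memoryless policy can oscillate or degrade average reward; a small enough step along the improvement direction is what the paper's convergence argument relies on.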
Reinforcement Learning of POMDPs using Spectral Methods
We propose a new reinforcement learning algorithm for partially observable Markov decision processes (POMDPs) based on spectral decomposition methods. While spectral methods have previously been employed for consistent learning of (passive) latent variable models such as hidden Markov models, POMDPs are more challenging since the learner interacts with the environment and possibly changes the future observations in the process. We devise a learning algorithm that runs through episodes; in each episode, we employ spectral techniques to learn the POMDP parameters from a trajectory generated by a fixed policy. At the end of the episode, an optimization oracle returns the optimal memoryless planning policy that maximizes the expected reward based on the estimated POMDP model. We prove an order-optimal regret bound with respect to the optimal memoryless policy and efficient scaling with respect to the dimensionality of the observation and action spaces.