Resource Allocation Frameworks for Network-coded Layered Multimedia Multicast Services
The explosive growth of content-on-the-move, such as video streaming to
mobile devices, has propelled research on multimedia broadcast and multicast
schemes. Multi-rate transmission strategies have been proposed as a means of
delivering layered services to users experiencing different downlink channel
conditions. In this paper, we consider Point-to-Multipoint layered service
delivery across a generic cellular system and improve it by applying different
random linear network coding approaches. We derive packet error probability
expressions and use them as performance metrics in the formulation of resource
allocation frameworks. The aim of these frameworks is both the optimization of
the transmission scheme and the minimization of the number of broadcast packets
on each downlink channel, while offering service guarantees to a predetermined
fraction of users. As a case study, our proposed frameworks are then adapted
to the LTE-A standard and the eMBMS technology. We focus on the delivery of a
video service based on the H.264/SVC standard and demonstrate the advantages of
layered network coding over multi-rate transmission. Furthermore, we establish
that the choice of both the network coding technique and the resource
allocation method plays a critical role in the network footprint and the
quality of each received video layer.
Comment: IEEE Journal on Selected Areas in Communications - Special Issue on
Fundamental Approaches to Network Coding in Wireless Communication Systems. To
appear.
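The random linear network coding operation underlying such schemes can be illustrated with a minimal sketch over GF(2) (the paper's schemes use layered services and generally larger fields; the helper names and single-generation setup here are illustrative assumptions, not the paper's implementation):

```python
import random


def rlnc_encode_gf2(source_packets, num_coded, rng=random.Random(0)):
    """Emit coded packets as random GF(2) combinations of the sources.

    Each coded packet is (coefficient_vector, payload), where the payload
    is the XOR of the source packets selected by the coefficient vector.
    """
    k = len(source_packets)
    coded = []
    for _ in range(num_coded):
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        payload = bytes(len(source_packets[0]))
        for c, pkt in zip(coeffs, source_packets):
            if c:
                payload = bytes(a ^ b for a, b in zip(payload, pkt))
        coded.append((coeffs, payload))
    return coded


def rlnc_decode_gf2(coded_packets, k):
    """Recover the k sources by Gaussian elimination over GF(2).

    Returns None if the received coefficient vectors are rank-deficient,
    i.e. the receiver has not yet collected k innovative packets.
    """
    rows = [(list(c), bytes(p)) for c, p in coded_packets]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           bytes(x ^ y for x, y in zip(rows[r][1], rows[col][1])))
    return [rows[i][1] for i in range(k)]
```

A receiver can attempt decoding as coded packets arrive; once the coefficient vectors span GF(2)^k, elimination succeeds and all k source packets are recovered at once.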
On Distributed Linear Estimation With Observation Model Uncertainties
We consider distributed estimation of a Gaussian source in a heterogeneous
bandwidth constrained sensor network, where the source is corrupted by
independent multiplicative and additive observation noises, with incomplete
statistical knowledge of the multiplicative noise. For multi-bit quantizers, we
derive the closed-form mean-square-error (MSE) expression for the linear
minimum MSE (LMMSE) estimator at the fusion center (FC). For both error-free
and erroneous communication channels, we propose several rate allocation
methods, namely longest root-to-leaf path, greedy, and integer relaxation, to
(i) minimize the
MSE given a network bandwidth constraint, and (ii) minimize the required
network bandwidth given a target MSE. We also derive the Bayesian Cramer-Rao
lower bound (CRLB) and compare the MSE performance of our proposed methods
against the CRLB. Our results corroborate that, for low-power multiplicative
observation noises and adequate network bandwidth, the gaps between the MSE of
our proposed methods and the CRLB are negligible, whereas the performance of
other methods, such as individual and uniform rate allocation, is not
satisfactory.
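The greedy flavor of rate allocation can be sketched with a toy high-rate quantizer distortion model, D_i(b) = sigma_i^2 * 2^(-2b) (an assumption for illustration, not the paper's LMMSE expression): each successive bit of the budget goes to the sensor whose quantizer refinement reduces distortion the most.

```python
import heapq


def greedy_rate_allocation(variances, total_bits):
    """Greedily assign quantization bits to sensors, one bit at a time.

    Toy distortion model (assumed here): D_i(b) = variances[i] * 2^(-2b).
    Each step gives the next bit to the sensor with the largest
    distortion reduction, until the bandwidth budget is spent.
    """
    bits = [0] * len(variances)

    def gain(i):
        # Distortion reduction from granting sensor i one more bit.
        b = bits[i]
        return variances[i] * (2.0 ** (-2 * b) - 2.0 ** (-2 * (b + 1)))

    # Max-heap (negated gains) over the per-sensor marginal improvements.
    heap = [(-gain(i), i) for i in range(len(variances))]
    heapq.heapify(heap)
    for _ in range(total_bits):
        _, i = heapq.heappop(heap)
        bits[i] += 1
        heapq.heappush(heap, (-gain(i), i))
    return bits
```

Because the toy distortion is convex and decreasing in b, this marginal-gain rule spends the budget where it helps most; noisier (higher-variance) observations naturally receive more bits.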
Resource Allocation in a MAC with and without security via Game Theoretic Learning
In this paper, a K-user fading multiple access channel (F-MAC) with and without
security constraints is studied. First, we consider an F-MAC without the
security constraints. Under the assumption of individual CSI at the users, we
propose the
problem of power allocation as a stochastic game when the receiver sends an ACK
or a NACK depending on whether it was able to decode the message or not. We
use the multiplicative-weight no-regret algorithm to obtain a coarse
correlated equilibrium (CCE). Then we consider the case when the users can
decode ACK/NACK of each other. In this scenario we provide an algorithm to
maximize the weighted sum-utility of all the users and obtain a Pareto-optimal
point (PP). The PP is socially optimal but may be unfair to individual users.
Next, we consider the case where the users can cooperate with each other so as
to reject policies that are unfair to individual users. We then
obtain a Nash bargaining solution (NBS), which, in addition to being Pareto
optimal, is also fair to each user.
Next, we study a K-user fading multiple access wiretap channel with the CSI of
Eve available to the users. We use the previous algorithms to obtain a CCE, a
PP, and an NBS.
Next we consider the case where each user does not know the CSI of Eve but
only its distribution. In that case we use secrecy outage as the criterion for
the receiver to send an ACK or a NACK. Here, too, we use the previous
algorithms to obtain a CCE, a PP, or an NBS. Finally, we show that our
algorithms can be extended to the case where a user can transmit at different
rates. At the end, we provide a few examples to compute the different
solutions and compare them under different CSI scenarios.
Comment: 27 pages, 12 figures. Part of the paper was presented at the 2016
IEEE Information Theory and Applications (ITA) Workshop, San Diego, USA, in
Feb. 2016. Submitted to a journal.
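The multiplicative-weight (Hedge) dynamics used to reach a CCE can be sketched for a two-player matrix game; the empirical distribution of joint play approaches a coarse correlated equilibrium as both players run no-regret updates. The payoff encoding, learning rate, and horizon below are illustrative assumptions, not the paper's F-MAC utilities:

```python
import math
import random


def multiplicative_weights(payoffs, num_actions, rounds, eta=0.1,
                           rng=random.Random(1)):
    """Run Hedge updates for two players on a matrix game.

    payoffs[i][a][b] is player i's payoff in [0, 1] when player 0 plays a
    and player 1 plays b. Returns the empirical counts of joint actions,
    whose normalized distribution approximates a CCE.
    """
    weights = [[1.0] * num_actions for _ in range(2)]
    joint_counts = {}
    for _ in range(rounds):
        actions = []
        for i in range(2):
            # Sample an action proportionally to the current weights.
            total = sum(weights[i])
            r, acc, choice = rng.random() * total, 0.0, num_actions - 1
            for a, w in enumerate(weights[i]):
                acc += w
                if r < acc:
                    choice = a
                    break
            actions.append(choice)
        a, b = actions
        joint_counts[(a, b)] = joint_counts.get((a, b), 0) + 1
        # Full-information Hedge update: reward every action against the
        # opponent's realized play.
        for x in range(num_actions):
            weights[0][x] *= math.exp(eta * payoffs[0][x][b])
            weights[1][x] *= math.exp(eta * payoffs[1][a][x])
    return joint_counts
```

In a game with a strictly dominant action for each player, the weights concentrate quickly and the empirical play collapses onto the dominant joint action, as no-regret theory predicts.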
A Hierarchical Framework of Cloud Resource Allocation and Power Management Using Deep Reinforcement Learning
Automatic decision-making approaches, such as reinforcement learning (RL),
have been applied to (partially) solve the resource allocation problem
adaptively in cloud computing systems. However, a complete cloud resource
allocation framework exhibits high-dimensional state and action spaces, which
limit the usefulness of traditional RL techniques. In addition, high power
consumption has become one of the critical concerns in design and control of
cloud computing systems, which degrades system reliability and increases
cooling cost. An effective dynamic power management (DPM) policy should
minimize power consumption while maintaining performance degradation within an
acceptable level. Thus, a joint virtual machine (VM) resource allocation and
power management framework is critical to the overall cloud computing system.
Moreover, a novel solution framework is necessary to address these even
higher-dimensional state and action spaces. In this paper, we propose a novel
hierarchical framework for solving the overall resource allocation and power
management problem in cloud computing systems. The proposed hierarchical
framework comprises a global tier for VM resource allocation to the servers and
a local tier for distributed power management of local servers. The emerging
deep reinforcement learning (DRL) technique, which can deal with complicated
control problems with large state space, is adopted to solve the global tier
problem. Furthermore, an autoencoder and a novel weight sharing structure are
adopted to handle the high-dimensional state space and accelerate the
convergence speed. On the other hand, the local tier of distributed server
power management comprises an LSTM-based workload predictor and a model-free
RL-based power manager, operating in a distributed manner.
Comment: Accepted by the 37th IEEE International Conference on Distributed
Computing Systems (ICDCS 2017).
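A minimal model-free power manager for the local tier can be sketched with tabular Q-learning (the paper's local tier uses an LSTM predictor and a richer DRL setup; the two-state MDP, power costs, latency penalty, and arrival probability below are toy assumptions for illustration):

```python
import random


def q_learning_power_manager(steps=5000, alpha=0.1, gamma=0.9, eps=0.1,
                             rng=random.Random(0)):
    """Tabular Q-learning on a toy dynamic power management MDP.

    States: 0 = no pending requests, 1 = pending requests.
    Actions: 0 = sleep (low power), 1 = active (high power).
    Reward mixes a power cost with a latency penalty for sleeping
    while requests are pending. All constants are illustrative.
    """
    q = [[0.0, 0.0], [0.0, 0.0]]
    state = 0
    for _ in range(steps):
        # Epsilon-greedy action selection.
        if rng.random() < eps:
            action = rng.randint(0, 1)
        else:
            action = int(q[state][1] > q[state][0])
        power_cost = 0.2 if action == 0 else 1.0
        latency_penalty = 5.0 if (state == 1 and action == 0) else 0.0
        reward = -(power_cost + latency_penalty)
        # Requests arrive with probability 0.3; an active server clears
        # pending work, a sleeping one lets it accumulate.
        pending = state == 1 and action == 0
        next_state = 1 if (pending or rng.random() < 0.3) else 0
        q[state][action] += alpha * (
            reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state
    return q
```

After training, the greedy policy should sleep when the server is idle and wake when requests are pending, trading power against the latency penalty without ever being given the transition model explicitly.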