Feasibility Study of Enabling V2X Communications by LTE-Uu Radio Interface
Compared with legacy wireless networks, the next generation of wireless
networks targets services with divergent QoS requirements, ranging from
bandwidth-consuming video services to moderate- and low-data-rate machine-type
services, as well as services with strict latency requirements. One
emerging new service is to exploit wireless network to improve the efficiency
of vehicular traffic and public safety. However, the stringent packet
end-to-end (E2E) latency and ultra-low transmission failure rates pose
challenging requirements on legacy networks. In other words, the next
generation wireless network needs to support ultra-reliable low latency
communications (URLLC), involving new key performance indicators (KPIs)
beyond conventional metrics such as cell throughput in legacy systems. In
this paper, a feasibility study on applying today's LTE network infrastructure
and LTE-Uu air interface to provide the URLLC type of services is performed,
where the communication takes place between two traffic participants (e.g.,
vehicle-to-vehicle and vehicle-to-pedestrian). To carry out this study, an
evaluation methodology of the cellular vehicle-to-anything (V2X) communication
is proposed, where packet E2E latency and successful transmission rate are
considered as the KPIs. Then, we describe the
simulation assumptions for the evaluation. Based on them, simulation results
are depicted that demonstrate the performance of the LTE network in fulfilling
new URLLC requirements. Moreover, sensitivity analysis is also conducted
regarding how to further improve system performance, in order to enable new
emerging URLLC services.
Comment: Accepted by IEEE/CIC ICCC 201
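The two KPIs named in the abstract can be computed from a packet trace. The sketch below is an illustrative assumption of how such an evaluation might be scripted; the field layout and the 100 ms deadline are hypothetical, not values from the paper.

```python
# Hypothetical sketch: computing the two V2X KPIs from a simulated packet
# trace. A packet is "successful" only if it is delivered within the E2E
# deadline; the deadline value is an illustrative assumption.

def v2x_kpis(packets, deadline_ms=100.0):
    """packets: list of (sent_ms, received_ms or None) tuples."""
    delivered = [(s, r) for s, r in packets if r is not None]
    # Latencies of packets that met the E2E deadline.
    on_time = [r - s for s, r in delivered if r - s <= deadline_ms]
    success_rate = len(on_time) / len(packets) if packets else 0.0
    avg_latency = sum(on_time) / len(on_time) if on_time else float("nan")
    return success_rate, avg_latency

# Four packets: one on time, one too late, one lost, one on time.
trace = [(0.0, 20.0), (1.0, 150.0), (2.0, None), (3.0, 40.0)]
rate, latency = v2x_kpis(trace)
```

Here the successful transmission rate counts late packets as failures, which matches the URLLC view that a packet arriving after its deadline has no value.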
Joint Scheduling of URLLC and eMBB Traffic in 5G Wireless Networks
Emerging 5G systems will need to efficiently support both enhanced mobile
broadband (eMBB) traffic and ultra-reliable low-latency communications
(URLLC) traffic.
In these systems, time is divided into slots which are further sub-divided into
minislots. From a scheduling perspective, eMBB resource allocations occur at
slot boundaries, whereas to reduce latency URLLC traffic is pre-emptively
overlapped at the minislot timescale, resulting in selective
superposition/puncturing of eMBB allocations. This approach enables minimal
URLLC latency at a potential rate loss to eMBB traffic.
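The superposition/puncturing mechanism described above can be sketched as follows. The slot size, user ids, and immediate-pre-emption policy are illustrative assumptions, not the paper's exact model.

```python
# Illustrative sketch of minislot-level puncturing: eMBB users receive
# minislot allocations at the slot boundary; URLLC demand arriving during
# the slot pre-empts (punctures) minislots immediately, costing eMBB rate.

def puncture(embb_alloc, urllc_demand):
    """embb_alloc: list of eMBB user ids, one entry per minislot.
    urllc_demand: number of minislots URLLC needs in this slot.
    Returns (new allocation, minislots lost per eMBB user)."""
    alloc = list(embb_alloc)
    lost = {}
    for m in range(min(urllc_demand, len(alloc))):
        lost[alloc[m]] = lost.get(alloc[m], 0) + 1
        alloc[m] = "URLLC"  # URLLC is served immediately, never queued
    return alloc, lost

# A 7-minislot slot split between two eMBB users, then 2 minislots of
# URLLC demand arrive and puncture user u1's allocation.
alloc, lost = puncture(["u1"] * 4 + ["u2"] * 3, urllc_demand=2)
```

Serving URLLC at the minislot timescale rather than waiting for the next slot boundary is exactly what keeps its latency minimal, at the cost of the eMBB rate recorded in `lost`.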
We study joint eMBB and URLLC schedulers for such systems, with the dual
objectives of maximizing utility for eMBB traffic while immediately satisfying
URLLC demands. For a linear rate loss model (loss to eMBB is linear in the
amount of URLLC superposition/puncturing), we derive an optimal joint
scheduler. Somewhat counter-intuitively, our results show that our dual
objectives can be met by an iterative gradient scheduler for eMBB traffic that
anticipates the expected loss from URLLC traffic, along with a URLLC demand
scheduler that is oblivious to eMBB channel states, utility functions and
allocation decisions of the eMBB scheduler. Next we consider a more general
class of (convex/threshold) loss models and study optimal online joint
eMBB/URLLC schedulers within the broad class of channel state dependent but
minislot-homogeneous policies. A key observation is that unlike the linear rate
loss model, for the convex and threshold rate loss models, optimal eMBB and
URLLC scheduling decisions do not de-couple and joint optimization is necessary
to satisfy the dual objectives. We validate the characteristics and benefits of
our schedulers via simulation.
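The linear rate-loss model admits a compact sketch: each eMBB user's delivered rate shrinks linearly in the fraction of its allocation that puncturing removes, and the eMBB scheduler only needs the expected puncturing level when computing its utility gradient. The log-utility (proportional-fair) weights below are an illustrative choice, not the paper's exact algorithm.

```python
# Sketch of the linear rate-loss model: rate_i = r_i * (1 - f_i), where
# f_i is the fraction of user i's allocation punctured by URLLC.

def effective_rate(peak_rate, punctured_fraction):
    # Linear loss in the amount of superposition/puncturing.
    return peak_rate * (1.0 - punctured_fraction)

def gradient_weights(avg_rates, expected_puncture):
    # The eMBB scheduler anticipates only the *expected* URLLC loss; with
    # log utility, the gradient weight of user u is 1 / avg_rate[u],
    # scaled by the expected surviving fraction (an assumed form).
    return {u: (1.0 - expected_puncture) / r for u, r in avg_rates.items()}

w = gradient_weights({"u1": 2.0, "u2": 1.0}, expected_puncture=0.1)
```

The key point the abstract makes is visible here: under linear loss, the expected puncturing enters the eMBB weights as a common scale factor, so the URLLC placement itself can be decided obliviously to eMBB state.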
Deep Reinforcement Learning for Resource Management in Network Slicing
Network slicing has emerged as a new business opportunity for operators,
allowing them to sell customized slices to various tenants at different
prices. In order to provide better-performing and cost-efficient services,
network slicing poses challenging technical issues and urgently calls for
intelligent innovations to keep resource management consistent with users'
activities per slice. In that regard, deep reinforcement learning (DRL), which
learns by interacting with the environment, trying alternative actions and
reinforcing those that produce more rewarding consequences, is regarded as a
promising solution. In this paper, after briefly reviewing the
fundamental concepts of DRL, we investigate the application of DRL to some
typical resource-management problems in network slicing, including radio
resource slicing and priority-based core network slicing, and demonstrate
the advantage of DRL over several competing schemes through extensive
simulations. Finally, we also discuss the possible challenges to apply DRL in
network slicing from a general perspective.
Comment: The manuscript has been accepted by IEEE Access in Nov. 201
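The trial-and-reinforce loop described above can be illustrated with a minimal tabular sketch for the radio-resource-slicing setting: an agent picks a bandwidth split across two slices and is rewarded by how much demand it serves. The action set, demand, and reward are toy assumptions, far simpler than the paper's DRL simulations.

```python
# Minimal reinforcement-learning sketch (a stateless epsilon-greedy
# bandit) for bandwidth slicing. Everything here is an illustrative
# assumption, not the paper's environment or algorithm.
import random

random.seed(0)
ACTIONS = [(3, 1), (2, 2), (1, 3)]  # bandwidth units for (slice A, slice B)
Q = [0.0] * len(ACTIONS)            # learned value of each split
alpha, eps = 0.1, 0.2               # learning rate, exploration rate

def reward(action, demand=(1, 3)):
    # Served traffic on each slice is capped by its allocated bandwidth.
    return sum(min(a, d) for a, d in zip(action, demand))

for _ in range(500):
    # Explore with probability eps, otherwise exploit the best-known split.
    a = random.randrange(len(ACTIONS)) if random.random() < eps \
        else max(range(len(ACTIONS)), key=Q.__getitem__)
    Q[a] += alpha * (reward(ACTIONS[a]) - Q[a])  # reinforce the outcome

best = ACTIONS[max(range(len(ACTIONS)), key=Q.__getitem__)]
```

With slice B demanding more than slice A, the agent's value estimates converge toward the split matching the demand, which is the per-slice activity awareness the abstract argues DRL can provide.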