Optimizing resource allocation in URLLC for real-time wireless control systems
As one of the three main scenarios in fifth-generation (5G) cellular networks, ultra-reliable and low-latency communication (URLLC) can serve as an enabler for real-time wireless control systems. In such a system, the communication resource consumption in URLLC and the control subsystem performance are mutually dependent. To optimize the overall system performance, it is critical to integrate the URLLC and control subsystems by formulating a co-design problem. In this paper, based on uplink transmission, we study the resource allocation problem for URLLC in real-time wireless control systems. The problem is formulated by optimizing the bandwidth and transmission power allocation in URLLC and the control convergence rate, subject to constraints on both communication and control. To solve the problem, we first convert the control convergence rate requirement into a communication reliability constraint, so that the co-design problem can be replaced by a regular wireless resource allocation problem. By proving that the converted problem is concave, we propose an iterative algorithm to find the optimal communication resource allocation, from which the optimal control convergence rate can be obtained to optimize overall system performance. Simulation results show a remarkable performance gain in terms of spectral efficiency and control cost. Compared with the scheme that satisfies a fixed quality of service in traditional URLLC design, our method adjusts the spectrum allocation to maximize communication spectral efficiency while maintaining the actual control requirement.
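The abstract does not give the iterative search itself; as a toy sketch of one ingredient, the code below bisects for the minimum bandwidth that meets a rate requirement under the Shannon model, exploiting the fact that b*log2(1 + p*g/(b*N0)) is increasing in b. All names and numbers are illustrative assumptions, not the paper's algorithm.

```python
import math

def min_bandwidth(rate_req, power, gain, noise_psd, lo=1.0, hi=1e9, iters=60):
    """Smallest bandwidth b (Hz) with b*log2(1 + p*g/(b*N0)) >= rate_req.

    The Shannon rate is increasing in b and saturates at p*g/(N0*ln 2),
    so bisection suffices. A toy stand-in for the paper's iterative
    allocation; parameters are illustrative only.
    """
    c = power * gain / noise_psd            # SNR-bandwidth product p*g/N0
    if rate_req >= c / math.log(2):         # above the saturation limit
        raise ValueError("rate requirement infeasible at this power")
    rate = lambda b: b * math.log2(1.0 + c / b)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if rate(mid) >= rate_req else (mid, hi)
    return hi
```

Allocating the minimum bandwidth that still meets each user's rate requirement is what lets the leftover spectrum be reassigned to raise overall spectral efficiency.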
Optimal resource allocation in URLLC for real-time wireless control systems
As one of the most important communication scenarios
in the coming fifth-generation (5G) cellular networks, ultra-reliable
and low-latency communication (URLLC) is a promising
enabler of real-time wireless control systems. However, one of
the biggest challenges is how to integrate URLLC and
control performance to maximize the overall system
performance. In this paper, we investigate the resource allocation
for the URLLC uplink in real-time wireless control systems. Specifically,
we first discuss the relationship between communication
and control performance. Based on that, we convert the hybrid
co-design problem into a regular wireless resource allocation
problem. Then, we propose an iterative algorithm to obtain the
optimal wireless resource allocation. Simulation results demonstrate
the effectiveness of our method.
Packet-drop design in URLLC for real-time wireless control systems
In real-time wireless control systems, ultra-reliable and low-latency communication (URLLC) is critical for the connection between the remote controller and the plant it controls. Since both transmission delay and packet loss can degrade control performance, our goal in this paper is to optimize control performance by jointly considering control and URLLC constraints. To achieve this goal, we formulate an optimization problem that minimizes control cost by optimizing packet dropping and wireless resource allocation. To solve the problem, we analyze the relationship between communication and control and, based on that relationship, decompose the original problem into two subproblems: 1) an optimal packet-drop problem to minimize control cost, and 2) an optimal resource allocation problem to minimize communication packet error. The corresponding solution to each subproblem is then obtained. Compared with a traditional method that considers only the communication aspect, the proposed packet-drop and resource allocation method shows a remarkable performance gain in terms of control cost.
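In a separable toy version of the first subproblem (which packets to drop), minimizing control cost reduces to keeping the packets whose loss would hurt control most. The sketch below assumes known per-packet cost impacts and a per-cycle link capacity; the greedy framing and all names are assumptions for illustration, not the paper's formulation.

```python
def plan_drops(costs, capacity):
    """Toy packet-drop subproblem: keep the packets whose loss hurts most.

    costs[i] : control-cost increase if packet i is dropped (assumed known)
    capacity : number of packets the URLLC link can deliver this cycle
    Returns (send, drop) index lists. Greedy selection by cost is optimal
    for this separable toy objective.
    """
    order = sorted(range(len(costs)), key=lambda i: costs[i], reverse=True)
    send = sorted(order[:capacity])
    drop = sorted(order[capacity:])
    return send, drop
```

The second subproblem (resource allocation to minimize packet error for the packets actually sent) would then be solved given the `send` set.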
Joint Scheduling of URLLC and eMBB Traffic in 5G Wireless Networks
Emerging 5G systems will need to efficiently support both enhanced mobile
broadband (eMBB) traffic and ultra-reliable low-latency communications (URLLC) traffic.
In these systems, time is divided into slots which are further sub-divided into
minislots. From a scheduling perspective, eMBB resource allocations occur at
slot boundaries, whereas to reduce latency URLLC traffic is pre-emptively
overlapped at the minislot timescale, resulting in selective
superposition/puncturing of eMBB allocations. This approach enables minimal
URLLC latency at a potential rate loss to eMBB traffic.
We study joint eMBB and URLLC schedulers for such systems, with the dual
objectives of maximizing utility for eMBB traffic while immediately satisfying
URLLC demands. For a linear rate loss model (loss to eMBB is linear in the
amount of URLLC superposition/puncturing), we derive an optimal joint
scheduler. Somewhat counter-intuitively, our results show that our dual
objectives can be met by an iterative gradient scheduler for eMBB traffic that
anticipates the expected loss from URLLC traffic, along with a URLLC demand
scheduler that is oblivious to eMBB channel states, utility functions and
allocation decisions of the eMBB scheduler. Next we consider a more general
class of (convex/threshold) loss models and study optimal online joint
eMBB/URLLC schedulers within the broad class of channel state dependent but
minislot-homogeneous policies. A key observation is that unlike the linear rate
loss model, for the convex and threshold rate loss models, optimal eMBB and
URLLC scheduling decisions do not de-couple and joint optimization is necessary
to satisfy the dual objectives. We validate the characteristics and benefits of
our schedulers via simulation.
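One simple policy consistent with the linear rate-loss model described above (an illustration only, not the scheduler the paper derives) is to spread each minislot's URLLC demand across eMBB users in proportion to their slot-boundary allocations, so the linear loss is shared:

```python
def puncture(embb_alloc, urllc_demand):
    """Toy minislot puncturing under a linear rate-loss model.

    embb_alloc   : dict user -> frequency resources granted at the slot boundary
    urllc_demand : resources the arriving URLLC traffic needs this minislot
    Spreads the puncturing across users in proportion to their allocation
    and returns the per-user resources actually punctured. Names are
    illustrative assumptions.
    """
    total = sum(embb_alloc.values())
    d = min(urllc_demand, total)        # cannot puncture more than exists
    return {u: d * a / total for u, a in embb_alloc.items()}
```

Note this placement rule needs no knowledge of eMBB channel states or utilities, matching the abstract's observation that the URLLC demand scheduler can be oblivious to the eMBB scheduler in the linear-loss case.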
GAN-powered Deep Distributional Reinforcement Learning for Resource Management in Network Slicing
Network slicing is a key technology in 5G communication systems. Its purpose
is to dynamically and efficiently allocate resources for diversified services
with distinct requirements over a common underlying physical infrastructure.
Therein, demand-aware resource allocation is of significant importance to
network slicing. In this paper, we consider a scenario that contains several
slices in a radio access network with base stations that share the same
physical resources (e.g., bandwidth or slots). We leverage deep reinforcement
learning (DRL) to solve this problem by considering the varying service demands
as the environment state and the allocated resources as the environment action.
In order to reduce the effects of the randomness and noise embedded in
the received service level agreement (SLA) satisfaction ratio (SSR) and
spectrum efficiency (SE), we first propose a generative adversarial
network-powered deep distributional Q network (GAN-DDQN) to learn the
action-value distribution driven by minimizing the discrepancy between the
estimated action-value distribution and the target action-value distribution.
We put forward a reward-clipping mechanism to stabilize GAN-DDQN training
against the effects of widely-spanning utility values. Moreover, we
develop Dueling GAN-DDQN, which uses a specially designed dueling generator, to
learn the action-value distribution by estimating the state-value distribution
and the action advantage function. Finally, we verify the performance of the
proposed GAN-DDQN and Dueling GAN-DDQN algorithms through extensive
simulations.
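GAN-DDQN learns the return distribution with a generator network; as a much smaller illustration of the underlying distributional-RL idea, namely fitting quantiles of the action-value distribution instead of a scalar Q, here is a toy quantile-regression update in NumPy. The update rule and constants are textbook distributional RL, not the paper's GAN-based algorithm.

```python
import numpy as np

def quantile_update(theta, r, next_theta, gamma=0.9, lr=0.1):
    """One quantile-regression step for a distributional Q estimate.

    theta      : current quantile estimates of Z(s, a), length N
    next_theta : quantile estimates of Z(s', a*) for the greedy next action
    Targets are r + gamma * next_theta; each quantile theta_i moves by the
    asymmetric pinball-loss gradient, averaged over all targets.
    """
    n = len(theta)
    taus = (np.arange(n) + 0.5) / n                  # quantile midpoints
    targets = r + gamma * np.asarray(next_theta, dtype=float)
    out = np.array(theta, dtype=float)
    for i, tau in enumerate(taus):
        # pinball-loss gradient: -tau if target above estimate, 1-tau below
        grad = np.mean(np.where(targets > out[i], -tau, 1.0 - tau))
        out[i] -= lr * grad
    return out
```

Iterating this update drives the quantile estimates toward the target return distribution, which is the quantity GAN-DDQN's generator is trained to model.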
Deep Reinforcement Learning for Resource Management in Network Slicing
Network slicing has emerged as a promising business model for operators, allowing
them to sell customized slices to various tenants at different prices. In
order to provide better-performing and cost-efficient services, network slicing
involves challenging technical issues and urgently calls for intelligent
innovations that make resource management consistent with users' activities
per slice. In that regard, deep reinforcement learning (DRL), which learns
to interact with the environment by trying alternative actions and
reinforcing those that produce more rewarding consequences, is
considered a promising solution. In this paper, after briefly reviewing the
fundamental concepts of DRL, we investigate the application of DRL in solving
some typical resource management problems in network slicing scenarios, which include
radio resource slicing and priority-based core network slicing, and demonstrate
the advantage of DRL over several competing schemes through extensive
simulations. Finally, we also discuss the possible challenges to apply DRL in
network slicing from a general perspective.
Comment: The manuscript has been accepted by IEEE Access in Nov. 201
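As a minimal, self-contained illustration of the core idea the survey builds on, here is one tabular Q-learning update; DRL methods replace the table with a neural network. The slicing-flavored state and action names are invented for the example, not taken from the paper.

```python
from collections import defaultdict

def q_learning_step(Q, state, action, reward, next_state, actions,
                    alpha=0.1, gamma=0.9):
    """One tabular update: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a)).

    Q is a dict keyed by (state, action) pairs; missing entries default to 0.
    """
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    return Q

# Toy episode: seeing high demand, granting more bandwidth earns reward 1.
Q = defaultdict(float)
actions = ["more_bandwidth", "less_bandwidth"]
q_learning_step(Q, "high_demand", "more_bandwidth", 1.0, "low_demand", actions)
```

Repeating such updates over observed (demand, allocation, reward) transitions is how a slice manager could learn demand-aware allocation without an explicit traffic model.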
Ultra-Reliable Low-Latency Vehicular Networks: Taming the Age of Information Tail
While the notion of age of information (AoI) has recently emerged as an
important concept for analyzing ultra-reliable low-latency communications
(URLLC), the majority of the existing works have focused on the average AoI
measure. However, an average AoI based design falls short in properly
characterizing the performance of URLLC systems as it cannot account for
extreme events that occur with very low probabilities. In contrast, in this
paper, the main objective is to go beyond the traditional notion of average AoI
by characterizing and optimizing a URLLC system while capturing the AoI tail
distribution. In particular, the problem of vehicles' power minimization while
ensuring stringent latency and reliability constraints in terms of
probabilistic AoI is studied. To this end, a novel and efficient mapping
between both AoI and queue length distributions is proposed. Subsequently,
extreme value theory (EVT) and Lyapunov optimization techniques are adopted to
formulate and solve the problem. Simulation results show a nearly two-fold
improvement in terms of shortening the tail of the AoI distribution compared to
a baseline whose design is based on the maximum queue length among vehicles,
when the number of vehicular user equipment (VUE) pairs is 80. The results also
show that this performance gain increases significantly as the number of VUE
pairs increases.
Comment: Accepted in IEEE GLOBECOM 2018 with 7 pages, 6 figures
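The tail-focused view can be made concrete with a small empirical estimator of the peak-AoI tail probability, the quantity whose exceedances EVT would then model. This is a sketch assuming in-order delivery of sorted updates; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def aoi_tail_prob(gen_times, recv_times, threshold):
    """Empirical tail probability of the peak age of information (AoI).

    gen_times[i]/recv_times[i] : generation and delivery time of update i
    (sorted, delivered in order). The peak AoI just before update i+1
    arrives is recv_times[i+1] - gen_times[i]; we return the fraction of
    peaks exceeding `threshold`, an estimate of P(peak AoI > threshold).
    """
    gen = np.asarray(gen_times, dtype=float)
    rec = np.asarray(recv_times, dtype=float)
    peaks = rec[1:] - gen[:-1]
    return float(np.mean(peaks > threshold))
```

Constraining this probability to stay below a small epsilon is one way to express the probabilistic AoI requirement that the power-minimization problem above enforces.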