Delay Performance of the Multiuser MISO Downlink under Imperfect CSI and Finite Length Coding
We use stochastic network calculus to investigate the delay performance of a
multiuser MISO system with zero-forcing beamforming. First, we consider ideal
assumptions with long codewords and perfect CSI at the transmitter, where we
observe a strong channel hardening effect that results in very high reliability
with respect to the maximum delay of the application. We then study the system
under more realistic assumptions with imperfect CSI and finite blocklength
channel coding. These effects lead to interference and to transmission errors,
and we derive closed-form lower and upper bounds on the resulting error
probability. Compared to the ideal case, imperfect CSI and finite length coding
cause massive degradations in the average transmission rate. Surprisingly, the
system nevertheless maintains the same qualitative behavior as in the ideal
case: as long as the average transmission rate is higher than the arrival rate,
the system can still achieve very high reliability with respect to the maximum
delay.
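The paper's closed-form error bounds are not reproduced here; as a point of reference, finite-blocklength analyses of this kind typically build on the normal approximation of Polyanskiy et al. A minimal sketch, assuming an AWGN-like channel at a given post-beamforming SINR (the function name and parameter values are illustrative, not the paper's):

```python
import numpy as np
from scipy.stats import norm

def fbl_error_probability(sinr, rate, blocklength):
    """Normal approximation (Polyanskiy et al.) of the decoding error
    probability at a given SINR, coding rate (bits per channel use),
    and blocklength."""
    c = np.log2(1.0 + sinr)                                # Shannon capacity
    v = (1.0 - (1.0 + sinr) ** -2) * np.log2(np.e) ** 2    # channel dispersion
    arg = (c - rate + np.log2(blocklength) / (2.0 * blocklength)) \
        * np.sqrt(blocklength / v)
    return norm.sf(arg)                                    # Q-function

# Illustrative numbers: 10 dB post-beamforming SINR, 200-symbol codewords
print(fbl_error_probability(sinr=10.0, rate=2.0, blocklength=200))
```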
Rate Analysis of Ultra-Reliable Low-Latency Communications in Random Wireless Networks
In this letter, we analyze the achievable rate of ultra-reliable low-latency
communications (URLLC) in a randomly modeled wireless network. We use two
mathematical tools to properly characterize the considered system: i)
stochastic geometry to model spatial locations of the transmitters in a
network, and ii) finite block-length analysis to reflect the features of
short packets. Exploiting these tools, we derive an integral-form expression of
the decoding error probability as a function of the target rate, the path-loss
exponent, the communication range, the density, and the channel coding length.
We also obtain a tight closed-form approximation. The main finding from
the analytical results is that, in URLLC, increasing the signal-to-interference
ratio (SIR) improves the rate performance significantly more than
increasing the channel coding length does. Via simulations, we show that fractional
frequency reuse improves the area spectral efficiency by reducing the amount of
mutual interference.
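The paper's integral-form expression and its closed-form approximation are not reproduced here; a Monte Carlo sketch of the same pipeline, combining a Poisson field of interferers with the finite-blocklength normal approximation, is shown below. The density, Rayleigh fading assumption, and all parameter values are illustrative:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def mc_error_probability(density=1e-3, link_dist=1.0, alpha=4.0,
                         rate=1.0, blocklength=200,
                         region=100.0, trials=2000):
    """Monte Carlo sketch: average decoding error probability of a typical
    link in a Poisson field of interferers (stochastic geometry) combined
    with the finite-blocklength normal approximation."""
    eps = np.empty(trials)
    for t in range(trials):
        n_int = rng.poisson(density * np.pi * region ** 2)  # PPP on a disc
        d = region * np.sqrt(rng.random(n_int))             # interferer distances
        fading = rng.exponential(size=n_int)                # Rayleigh powers
        interference = np.sum(fading * d ** -alpha) + 1e-12
        signal = rng.exponential() * link_dist ** -alpha    # desired link
        sir = signal / interference
        c = np.log2(1.0 + sir)
        v = (1.0 - (1.0 + sir) ** -2) * np.log2(np.e) ** 2
        eps[t] = norm.sf((c - rate) * np.sqrt(blocklength / v))
    return eps.mean()

print(mc_error_probability())
```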
Delay Violation Probability and Effective Rate of Downlink NOMA over α-μ Fading Channels
Non-orthogonal multiple access (NOMA) is a potential candidate to further
enhance the spectrum utilization efficiency in beyond fifth-generation (B5G)
standards. However, there has been little attention on the quantification of
the delay-limited performance of downlink NOMA systems. In this paper, we
analyze the performance of a two-user downlink NOMA system over generalized
α-μ fading in terms of delay violation probability (DVP) and
effective rate (ER). In particular, we derive an analytical expression for an
upper bound on the DVP and we derive the exact sum ER of the downlink NOMA
system. We also derive analytical expressions for high and low signal-to-noise
ratio (SNR) approximations to the sum ER, as well as a fundamental upper bound
on the sum ER which represents the ergodic sum-rate for the downlink NOMA
system. We also analyze the sum ER of a corresponding time-division-multiplexed
orthogonal multiple access (OMA) system. Our results show that while NOMA
consistently outperforms OMA over the practical SNR range, the relative gain
becomes smaller in more severe fading conditions, and is also smaller in the
presence of a stricter delay quality-of-service (QoS) constraint.
NOMA in the Uplink: Delay Analysis with Imperfect CSI and Finite-Length Coding
We study whether using non-orthogonal multiple access (NOMA) in the uplink of
a mobile network can improve the performance over orthogonal multiple access
(OMA) when the system requires ultra-reliable low-latency communications
(URLLC). To answer this question, we first consider an ideal system model with
perfect channel state information (CSI) at the transmitter and long codewords,
where we determine the optimal decoding orders when the decoder uses successive
interference cancellation (SIC) and derive closed-form expressions for the
optimal rate when joint decoding is used. While joint decoding performs well
even under tight delay constraints, NOMA with SIC decoding often performs worse
than OMA. For low-latency systems, we must also consider the impact of
finite-length channel coding, as well as rate adaptation based on imperfect CSI.
We derive closed-form approximations for the corresponding outage or error
probabilities and find that those effects create a larger performance penalty
for NOMA than for OMA. Thus, NOMA with SIC decoding may often be unsuitable for
URLLC.
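For reference, the standard two-user multiple-access rates underlying such an analysis can be computed directly. The sketch below (function name and example gains are illustrative; the paper's optimal decoding orders under delay constraints are not reproduced) shows the rates under both SIC decoding orders and the joint-decoding sum rate:

```python
import numpy as np

def uplink_noma_rates(g1, g2, p1=1.0, p2=1.0):
    """Achievable rates (bits/channel use) for a two-user uplink with unit
    noise power: both SIC decoding orders and the joint-decoding sum rate."""
    s1, s2 = p1 * g1, p2 * g2                    # received signal powers
    # Decode user 1 first (sees user 2 as interference), then user 2
    order_12 = (np.log2(1 + s1 / (s2 + 1)), np.log2(1 + s2))
    # Decode user 2 first, then user 1
    order_21 = (np.log2(1 + s1), np.log2(1 + s2 / (s1 + 1)))
    joint_sum = np.log2(1 + s1 + s2)             # multiple-access sum rate
    return order_12, order_21, joint_sum

print(uplink_noma_rates(g1=4.0, g2=1.0))
```

Both SIC orders achieve the same sum rate as joint decoding but split it between the users in fixed, unequal ways, whereas joint decoding can realize any split on the boundary of the rate region; this flexibility is consistent with the finding that joint decoding holds up better under tight delay constraints.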
Balancing Queueing and Retransmission: Latency-Optimal Massive MIMO Design
One fundamental challenge in 5G URLLC is how to optimize massive MIMO systems
for achieving low latency and high reliability. A natural design choice to
maximize reliability and minimize retransmission is to select the lowest
allowed target error rate. However, the overall latency is the sum of queueing
latency and retransmission latency, hence choosing the lowest target error rate
does not always minimize the overall latency. In this paper, we minimize the
overall latency by jointly designing the target error rate and transmission
rate adaptation, which leads to a fundamental tradeoff point between queueing
and retransmission latency. This design problem can be formulated as a Markov
decision process, which is theoretically optimal, but its complexity is
prohibitively high for real-system deployments. We therefore develop a
low-complexity closed-form policy named Large-arraY Reliability and Rate
Control (LYRRC), which is proven to be asymptotically latency-optimal as the
number of antennas increases. In LYRRC, the transmission rate is twice the
arrival rate, and the target error rate is a function of the antenna number,
arrival rate, and channel estimation error. With simulated and measured
channels, our evaluations find that LYRRC satisfies the latency and reliability
requirements of URLLC in all the tested scenarios.
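The queueing-versus-retransmission tradeoff can be made concrete with a toy model (not LYRRC itself, whose closed form is not reproduced here): lowering the target error rate forces a finite-blocklength rate backoff, slowing service and inflating queueing delay, while raising it inflates the expected number of retransmissions. A minimal sketch with an M/M/1-style queue and illustrative parameters:

```python
import numpy as np
from scipy.stats import norm

def total_latency(eps, arrival_rate=0.5, snr=10.0, blocklength=100,
                  packet_bits=400):
    """Toy latency model: finite-blocklength rate backoff for target error
    eps (slower service -> more queueing) versus expected retransmissions
    1/(1 - eps); M/M/1 sojourn time, all times in channel blocks."""
    c = np.log2(1.0 + snr)
    v = (1.0 - (1.0 + snr) ** -2) * np.log2(np.e) ** 2
    rate = c - np.sqrt(v / blocklength) * norm.isf(eps)  # bits/channel use
    if rate <= 0:
        return np.inf
    blocks_per_attempt = packet_bits / (rate * blocklength)
    service_time = blocks_per_attempt / (1.0 - eps)      # with retransmissions
    mu = 1.0 / service_time
    if mu <= arrival_rate:
        return np.inf                                    # unstable queue
    return 1.0 / (mu - arrival_rate)                     # expected sojourn time

eps_grid = np.logspace(-6, np.log10(0.3), 60)
latencies = [total_latency(e) for e in eps_grid]
best = eps_grid[int(np.argmin(latencies))]
print(f"latency-minimizing target error rate ~ {best:.1e}")
```

With these arbitrary parameters the latency-minimizing target error rate is an interior value (roughly 2%) rather than the lowest allowed one, which mirrors the paper's qualitative point.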
A Tutorial on Ultra-Reliable and Low-Latency Communications in 6G: Integrating Domain Knowledge into Deep Learning
As one of the key communication scenarios in the fifth generation (5G) and the
sixth generation (6G) of mobile communication networks, ultra-reliable and
low-latency communications (URLLC) will be central for the development of
various emerging mission-critical applications. State-of-the-art mobile
communication systems do not fulfill the end-to-end delay and overall
reliability requirements of URLLC. In particular, a holistic framework that
takes into account latency, reliability, availability, scalability, and
decision making under uncertainty is lacking. Driven by recent breakthroughs in
deep neural networks, deep learning algorithms have been considered as
promising ways of developing enabling technologies for URLLC in future 6G
networks. This tutorial illustrates how domain knowledge (models, analytical
tools, and optimization frameworks) of communications and networking can be
integrated into different kinds of deep learning algorithms for URLLC. We first
provide some background of URLLC and review promising network architectures and
deep learning frameworks for 6G. To better illustrate how to improve learning
algorithms with domain knowledge, we revisit model-based analytical tools and
cross-layer optimization frameworks for URLLC. Following that, we examine the
potential of applying supervised/unsupervised deep learning and deep
reinforcement learning in URLLC and summarize related open problems. Finally,
we provide simulation and experimental results to validate the effectiveness of
different learning algorithms and discuss future directions.
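One concrete instance of the "domain knowledge into deep learning" theme is to let a model-based tool generate training labels for a neural network. The sketch below is illustrative only (the teacher rule, names, and parameters are assumptions, not the tutorial's algorithms): a small network imitates a finite-blocklength power-control rule whose labels come from an analytical approximation.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq
from sklearn.neural_network import MLPRegressor

def required_snr(rate, blocklength, target_eps):
    """Model-based 'teacher': smallest SNR meeting a finite-blocklength
    error target under the normal approximation (solved numerically)."""
    def gap(snr):
        c = np.log2(1.0 + snr)
        v = (1.0 - (1.0 + snr) ** -2) * np.log2(np.e) ** 2
        return norm.sf((c - rate) * np.sqrt(blocklength / v)) - target_eps
    return brentq(gap, 1e-3, 1e6)

rng = np.random.default_rng(2)
gains = rng.exponential(size=2000)              # Rayleigh channel power gains
snr_star = required_snr(rate=1.0, blocklength=200, target_eps=1e-5)
powers = snr_star / gains                       # labels from the model

# Student network imitates the model-based rule (log domain for conditioning)
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(np.log(gains).reshape(-1, 1), np.log(powers))
print(np.exp(net.predict(np.log([[0.5]]))))     # predicted power at gain 0.5
```

Once trained, the network replaces the numerical root-finding at run time, which is the usual motivation for this pattern in latency-critical control loops.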