Tiny Codes for Guaranteeable Delay
Future 5G systems will need to support ultra-reliable low-latency
communications scenarios. From a latency-reliability viewpoint, it is
inefficient to rely on average utility-based system design. Therefore, we
introduce the notion of guaranteeable delay, which is the average delay plus
three standard deviations of the mean. We investigate the trade-off between
guaranteeable delay and throughput for point-to-point wireless erasure links
with unreliable and delayed feedback, by bringing signal flow
techniques to the area of coding. We use tiny codes, i.e., sliding window
coding with just 2 packets, and design three variations of selective-repeat ARQ
protocols, by building on the baseline scheme, i.e. uncoded ARQ, developed by
Ausavapattanakun and Nosratinia: (i) Hybrid ARQ with soft combining at the
receiver; (ii) cumulative feedback-based ARQ without rate adaptation; and (iii)
Coded ARQ with rate adaptation based on the cumulative feedback. Contrasting
the performance of these protocols with uncoded ARQ, we demonstrate that HARQ
performs only slightly better, that cumulative feedback-based ARQ does not provide
significant throughput gains although it achieves better average delay, and that Coded ARQ can
provide throughput gains of up to about 40%. Coded ARQ also provides
delay guarantees, and is robust to various challenges such as imperfect and
delayed feedback, burst erasures, and round-trip time fluctuations. This
feature may be preferable for meeting the strict end-to-end latency and
reliability requirements of future use cases of ultra-reliable low-latency
communications in 5G, such as mission-critical communications and industrial
control for critical control messaging.
Comment: to appear in IEEE JSAC Special Issue on URLLC in Wireless Network
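As a concrete illustration (not from the paper itself), the guaranteeable-delay metric defined above can be computed directly from per-packet delay samples; the function name and the sample data below are hypothetical.

```python
import statistics

def guaranteeable_delay(delays):
    """Guaranteeable delay: the average delay plus three standard
    deviations, computed from a list of per-packet delay samples.
    A minimal sketch of the metric's definition, not the paper's code."""
    mu = statistics.mean(delays)
    sigma = statistics.stdev(delays)
    return mu + 3 * sigma

# Illustrative delay samples (ms) for a hypothetical erasure link.
samples = [4.0, 5.0, 4.5, 6.0, 12.0, 4.2, 5.5, 4.8]
print(guaranteeable_delay(samples))
```

Because the metric adds three standard deviations to the mean, a protocol with occasional large delay spikes (e.g., from burst erasures) is penalized far more than one with a slightly higher but steadier average delay, which is the point of moving away from average utility-based design.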
The Interplay of Spectral Efficiency, User Density, and Energy in Grant-based Access Protocols
We employ grant-based access with retransmissions for multiple users with
small payloads, particularly at low spectral efficiency (SE). The radio
resources are allocated via NOMA in the time (divided into slots) and frequency
dimensions, with a measure of non-orthogonality. Retransmissions are
stored in a receiver buffer of finite size and combined via
HARQ, using Chase Combining (CC) and Incremental Redundancy (IR). We determine
the best scaling for the SE (bits/rdof) and for the user density, for a
given number of users and a given blocklength, versus the SNR per bit,
i.e., the ratio E_b/N_0, for the sum-rate optimal regime and when the
interference is treated as noise (TIN), using a finite blocklength analysis.
Contrasting the classical scheme (no retransmissions) with CC-NOMA, CC-OMA, and
IR-OMA strategies in TIN and sum-rate optimal cases, the numerical results on
the SE demonstrate that CC-NOMA outperforms the other approaches in almost
all regimes. In the sum-rate optimal regime, the scalings of the SE versus
E_b/N_0 deteriorate with the user density, yet from the most degraded to the least, the
ordering of the schemes is (i) classical, (ii) CC-OMA, (iii) IR-OMA, and
(iv) CC-NOMA, demonstrating the robustness of CC-NOMA. Contrasting TIN models
at low E_b/N_0, the scalings of the SE for CC-based models improve the most,
whereas, at high E_b/N_0, the scaling of CC-NOMA is poor due to higher
interference, and CC-OMA becomes prominent due to combining retransmissions and
its reduced interference. The scaling results are applicable over a range of
SEs, user densities, blocklengths, and numbers of users, at low received SNR. The proposed
analytical framework provides insights into resource allocation in grant-based
access and specific 5G use cases for massive URLLC uplink access.
Comment: A short version in WiOpt'22, and this version in TCOM'2
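Two of the quantities discussed above can be sketched in a few lines: the SNR per bit is, by definition, the per-symbol SNR divided by the spectral efficiency, and under an idealized maximum-ratio-combining model, Chase Combining accumulates the SNRs of the stored retransmissions. The function names below are illustrative, and the CC model ignores the finite buffer size treated in the paper.

```python
def eb_n0(snr_linear, spectral_efficiency):
    """SNR per information bit: Eb/N0 = SNR / S, with S the spectral
    efficiency in bits per degree of freedom (standard definition)."""
    return snr_linear / spectral_efficiency

def cc_effective_snr(per_round_snrs):
    """Chase Combining stores retransmissions and combines them via
    maximum-ratio combining, so the effective SNR is the sum of the
    per-round SNRs (idealized model; the finite buffer is ignored)."""
    return sum(per_round_snrs)

# Example: S = 0.5 bits/rdof at SNR = 1 (0 dB) gives Eb/N0 = 2 (~3 dB).
print(eb_n0(1.0, 0.5))                    # 2.0
# Three rounds at SNR 0.5 each combine to an effective SNR of 1.5.
print(cc_effective_snr([0.5, 0.5, 0.5]))  # 1.5
```

This accumulation is why the CC-based schemes gain the most at low E_b/N_0: each stored retransmission adds its full SNR to the combined decision statistic, which matters most when the per-round SNR is small.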