
    Ultra-Reliable Short-Packet Communications: Fundamental Limits and Enabling Technologies

    The paradigm shift from 4G to 5G communications, anticipated to enable ultra-reliable low-latency communications (URLLC), will enforce a radical change in the design of wireless communication systems. Unlike 4G systems, whose main objective is to provide a large transmission rate, URLLC, as its name implies, targets transmissions with low latency and, simultaneously, very high reliability. Since low latency implies the use of short data packets, URLLC must contend with the tension between blocklength and reliability.

    Several key enablers for URLLC have been identified in the literature. Of special importance are diversity-enabling technologies such as multiantenna systems and feedback protocols. However, it is not enough to introduce additional diversity by these means; one must also guarantee that the scarce number of channel uses is spent optimally. It is therefore imperative to develop design guidelines for how to enable reliable detection of incoming data, how to acquire channel-state information, and how to construct efficient short-packet channel codes. The development of such guidelines is at the heart of this thesis.

    This thesis focuses on the fundamental performance of URLLC-enabling technologies. Specifically, we provide converse (upper) bounds and achievability (lower) bounds on the maximum coding rate, based on finite-blocklength information theory, for systems that employ the key enablers outlined above. With focus on the wireless channel, modeled via a block-fading assumption, we are able to answer questions such as: how to optimally utilize spatial and frequency diversity, how far from optimal short-packet channel codes perform, how multiantenna systems should be designed to serve a given number of users, and how to design feedback schemes when the feedback link is noisy.

    The thesis comprises four papers. In Paper A, we study short-packet performance over the Rician block-fading channel. In particular, we present achievability bounds for pilot-assisted transmission with several different decoders, which allow us to quantify the impact of imposed pilots and mismatched decoding on the achievable performance. Furthermore, we design short-packet channel codes that perform within 1 dB of our achievability bounds. Paper B studies multiuser massive multiple-input multiple-output systems with short packets. We provide an achievability bound on the average error probability over quasistatic spatially correlated Rayleigh-fading channels. The bound applies to arbitrary multiuser settings, pilot-assisted transmission, and mismatched decoding, which makes it suitable for assessing uplink/downlink performance with arbitrary linear signal processing. We show that several lessons learned from infinite-blocklength analyses carry over to the finite-blocklength regime. Furthermore, for the multicell setting with randomly placed users, we find that pilot contamination should be avoided at all costs and that minimum mean-squared error signal processing should be used to comply with the stringent requirements of URLLC. In Paper C, we consider sporadic transmissions in which the receiver must both detect and decode an incoming packet. Two novel achievability bounds and a novel converse bound are presented for joint detection-decoding strategies. It is shown that detection errors deteriorate performance significantly for very short packet sizes. Numerical results also indicate that separate detection-decoding strategies are strictly suboptimal over block-fading channels. Finally, in Paper D, variable-length codes with noisy stop-feedback are studied via a novel achievability bound on the average service time and the average error probability. We use the bound to shed light on the resource-allocation problem between the forward and the feedback channel. For URLLC applications, it is shown that enough resources must be assigned to the feedback link that a NACK-to-ACK error becomes rarer than the target error probability. Furthermore, we illustrate that the variable-length stop-feedback scheme outperforms state-of-the-art fixed-length no-feedback bounds even when the stop-feedback bit is noisy.
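    As a point of reference for the finite-blocklength framework used throughout this thesis, the sketch below evaluates the well-known normal approximation to the maximum coding rate of the real-valued AWGN channel (Polyanskiy, Poor, and Verdú). It is a generic illustration only, not the fading-channel bounds derived in the papers; the SNR, blocklength, and error-probability values are assumptions made for the example.

```python
# Normal approximation to the maximum coding rate over a real AWGN channel:
#   R*(n, eps) ~= C - sqrt(V/n) * Qinv(eps) + log2(n) / (2n)
# Generic finite-blocklength illustration; all parameters are assumed.
import numpy as np
from scipy.stats import norm

def awgn_normal_approx(snr_db, n, eps):
    snr = 10 ** (snr_db / 10)
    capacity = 0.5 * np.log2(1 + snr)                                   # bit / channel use
    dispersion = (snr * (snr + 2)) / (2 * (snr + 1) ** 2) * np.log2(np.e) ** 2
    return capacity - np.sqrt(dispersion / n) * norm.isf(eps) + np.log2(n) / (2 * n)

if __name__ == "__main__":
    for n in (100, 200, 500, 1000):
        r = awgn_normal_approx(snr_db=0.0, n=n, eps=1e-5)
        print(f"n = {n:4d}: R ~= {r:.3f} bit/channel use")
```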

    Reliable Transmission of Short Packets through Queues and Noisy Channels under Latency and Peak-Age Violation Guarantees

    This work investigates the probability that the delay and the peak age of information exceed a desired threshold in a point-to-point communication system with short information packets. The packets are generated according to a stationary memoryless Bernoulli process, placed in a single-server queue, and then transmitted over a wireless channel. A variable-length stop-feedback coding scheme---a general strategy that encompasses simple automatic repetition request (ARQ) and more sophisticated hybrid ARQ techniques as special cases---is used by the transmitter to convey the information packets to the receiver. By leveraging finite-blocklength results, the delay-violation and peak-age-violation probabilities are characterized without resorting to approximations based on large-deviation theory, as in previous literature. Numerical results illuminate how the delay-violation and peak-age-violation probabilities depend on system parameters such as the frame size and the undetected-error probability, and on the chosen packet-management policy. The guidelines provided by our analysis are particularly useful for the design of low-latency ultra-reliable communication systems. Comment: To appear in the IEEE Journal on Selected Areas in Communications (IEEE JSAC).
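    A minimal way to connect the queueing and coding ingredients above is a frame-level Monte Carlo simulation: packets arrive according to a Bernoulli process, wait in a FIFO queue, and each transmission round succeeds with a fixed probability, as in simple ARQ. The sketch below estimates the delay-violation probability under these assumptions; the arrival rate, per-round error probability, and delay threshold are illustrative, and the model is far simpler than the variable-length stop-feedback analysis of the paper.

```python
# Frame-level simulation of a single-server FIFO queue with Bernoulli arrivals
# and simple ARQ service (each round succeeds with probability 1 - eps_round).
# Estimates P(delay > d_max). All parameters are illustrative.
import random

def delay_violation_prob(p_arrival=0.1, eps_round=0.1, d_max=10,
                         num_frames=2_000_000, seed=0):
    rng = random.Random(seed)
    queue = []              # arrival frame of each buffered packet (FIFO)
    violations = 0
    delivered = 0
    for t in range(num_frames):
        if rng.random() < p_arrival:             # new packet at start of frame t
            queue.append(t)
        if queue and rng.random() > eps_round:   # head-of-line packet decoded
            delay = t + 1 - queue.pop(0)         # delivered at end of frame t
            delivered += 1
            if delay > d_max:
                violations += 1
    return violations / max(delivered, 1)

if __name__ == "__main__":
    print("P(delay > d_max) ~", delay_violation_prob())
```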

    Fundamental limits of short-packet wireless communications

    Doctorate with International Mention. This thesis concerns the maximum coding rate at which data can be transmitted over a noncoherent, single-antenna, Rayleigh block-fading channel using an error-correcting code of a given blocklength with a block-error probability not exceeding a given value. This is an emerging problem motivated by the next generation of wireless communications, where understanding the fundamental limits of short-packet transmission is crucial. In this setting, traditional information-theoretic performance metrics that rely on the transmission of long packets, such as capacity or outage capacity, are no longer good benchmarks, and the maximum coding rate must be studied as a function of the blocklength. For the noncoherent Rayleigh block-fading channel model, only nonasymptotic bounds that must be evaluated numerically were available in the literature for this purpose. The principal drawback of the nonasymptotic bounds is their high computational cost, which increases linearly with the number of blocks (also called coherence intervals throughout this thesis) needed to transmit a given codeword. By means of different asymptotic expansions in the number of blocks, this thesis provides an alternative way of studying the maximum coding rate as a function of the blocklength for the noncoherent, single-antenna, Rayleigh block-fading channel.

    The first approximation of the maximum coding rate derived in this thesis is a high-SNR normal approximation. This central-limit-theorem-based approximation becomes accurate as the signal-to-noise ratio (SNR) and the number L of coherence intervals of size T tend to infinity. We show that the high-SNR normal approximation is roughly equal to the normal approximation one obtains by transmitting one pilot symbol per coherence block to estimate the fading coefficient, and then transmitting the remaining T-1 symbols per coherence block over a coherent fading channel. This suggests that, at high SNR, one pilot symbol per coherence block suffices to achieve both the capacity and the channel dispersion. While the approximation was derived under the assumption that the number of coherence intervals and the SNR tend to infinity, numerical analyses suggest that it becomes accurate already at SNR values of 15 dB, for 10 or more coherence intervals, and for error probabilities of 10^{-3} or larger. The derived normal approximation is useful not only because it complements the nonasymptotic bounds available in the literature, but also because it lays the foundation for analytical studies of the behavior of the maximum coding rate as a function of system parameters such as the SNR, the number of coherence intervals, or the blocklength. An example of such a study, concerning the optimal design of a simple slotted-ALOHA protocol, is also given in this thesis.

    Since many services and applications in the next generation of wireless communication systems will have to operate at low SNR and small error probabilities (for instance, SNR values of 0 dB and error probabilities of 10^{-6}), the second half of this thesis presents saddlepoint approximations of nonasymptotic upper and lower bounds on the maximum coding rate that are accurate in that regime. Similar to the normal approximation, these approximations become accurate as the number of coherence intervals L increases, and they can be calculated efficiently. Indeed, compared to the nonasymptotic bounds, which require the evaluation of L-dimensional integrals, the saddlepoint approximations only require the evaluation of four one-dimensional integrals. Although developed under the assumption of large L, the saddlepoint approximations are shown to be accurate even for L = 1 and SNR values of 0 dB or more. Even this small computational cost can be avoided by high-SNR saddlepoint approximations that can be evaluated in closed form; these apply when certain convergence conditions are satisfied and are shown to be accurate at 10 dB or more. In our analysis, the saddlepoint method is applied to the tail probabilities appearing in the nonasymptotic bounds. These probabilities often depend on a set of parameters, such as the SNR. Existing saddlepoint expansions do not account for such dependencies; hence, they can only characterize the behavior of the expansion error as a function of the number of coherence intervals L, but not in terms of the remaining parameters. In contrast, we derive a saddlepoint expansion for random variables whose distribution depends on an extra parameter, carefully analyze the error terms, and demonstrate that they are uniform in that extra parameter. We then apply the expansion to the Rayleigh block-fading channel and obtain approximations in which the error terms depend only on the blocklength and are uniform in the remaining parameters. Furthermore, the proposed approximations are shown to recover both the normal approximation and the reliability function of the channel, thus providing a unifying tool for two regimes that are usually treated separately in the literature. Specifically, we show that the high-SNR normal approximation can be recovered from the normal approximation derived from the saddlepoint approximations. By means of the error-exponent analysis that recovers the reliability function of the channel, we also obtain easier-to-evaluate approximations, consisting of the error exponent of the channel multiplied by a subexponential factor. Numerical evidence suggests that these are as accurate as the saddlepoint approximations.

    Finally, this thesis includes a practical case study in which we analyze the benefit of cooperation in optical wireless communications, a promising technology that can play an important role in the next generation of wireless communications due to the high data rates it can achieve. Specifically, a cooperative multipoint transmission and reception scheme is evaluated for visible light communication (VLC) in an indoor scenario. The proposed scheme is shown to provide SNR improvements of 3 dB or more compared to a noncooperative scheme, especially under non-line-of-sight (NLOS) conditions between the access point and the receiver. Doctoral Programme in Multimedia and Communications of Universidad Carlos III de Madrid and Universidad Rey Juan Carlos. Thesis committee: President: Joerg Widmer; Secretary: Matilde Pilar Sánchez Fernández; Member: Petar Popovski.
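    To illustrate the kind of accuracy gap this thesis exploits, the sketch below compares the Lugannani-Rice saddlepoint approximation and the central-limit (normal) approximation of the tail probability of a sum of L i.i.d. exponential random variables, for which the exact tail is available in closed form. This is a generic textbook example, not the channel-specific bounds of the thesis; the choice of distribution and of L is an assumption made for the illustration.

```python
# Saddlepoint (Lugannani-Rice) vs. normal approximation of P(S_L >= t),
# where S_L is a sum of L i.i.d. Exp(1) variables (exact tail: Gamma(L, 1)).
# Generic illustration with assumed parameters.
import numpy as np
from scipy.stats import norm, gamma

def lugannani_rice_exp_tail(t, L):
    """Saddlepoint approximation of P(S_L >= t) for S_L ~ Gamma(L, 1), t > L."""
    # CGF of Exp(1): kappa(s) = -log(1 - s); saddlepoint solves L / (1 - s) = t.
    s_hat = 1.0 - L / t
    K = -L * np.log(1.0 - s_hat)
    K2 = L / (1.0 - s_hat) ** 2
    w = np.sign(s_hat) * np.sqrt(2.0 * (s_hat * t - K))
    u = s_hat * np.sqrt(K2)
    return norm.sf(w) + norm.pdf(w) * (1.0 / u - 1.0 / w)

if __name__ == "__main__":
    L = 4
    for t in (8.0, 12.0, 20.0, 30.0):
        exact = gamma.sf(t, a=L)
        sp = lugannani_rice_exp_tail(t, L)
        clt = norm.sf((t - L) / np.sqrt(L))      # normal (CLT) approximation
        print(f"t = {t:5.1f}: exact {exact:.3e}  saddlepoint {sp:.3e}  normal {clt:.3e}")
```

    At small tail probabilities the saddlepoint values track the exact tail closely, while the normal approximation degrades, mirroring the low-error-probability regime targeted above.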

    Saddlepoint Approximations of Cumulative Distribution Functions of Sums of Random Vectors

    In this report, a real-valued function that approximates the cumulative distribution function (CDF) of a finite sum of real-valued independent and identically distributed random vectors is presented. The approximation error is upper-bounded; thus, as a byproduct, an upper bound and a lower bound on the CDF are obtained. Finally, it is observed that, in the case of lattice and absolutely continuous random variables, the proposed approximation is identical to the saddlepoint approximation of the CDF.
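    For reference, the classical saddlepoint approximation of the upper tail of a sum $S_n$ of $n$ i.i.d. real random variables with cumulant generating function $K$ (the scalar, absolutely continuous case that the report's result reduces to) takes the Lugannani-Rice form below; it is included here only as background and is not the report's vector-valued construction.
\[
  \Pr[S_n \geq t] \approx 1 - \Phi(w) + \phi(w)\left(\frac{1}{u} - \frac{1}{w}\right),
  \qquad
  w = \operatorname{sign}(\hat{s})\sqrt{2\bigl(\hat{s}\,t - nK(\hat{s})\bigr)},
  \qquad
  u = \hat{s}\sqrt{nK''(\hat{s})},
\]
    where $\hat{s}$ solves the saddlepoint equation $nK'(\hat{s}) = t$, and $\Phi$ and $\phi$ denote the standard normal CDF and density.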

    Using Saddlepoint Approximations and Likelihood-Based Methods to Conduct Statistical Inference for the Mean of the Beta Distribution

    The prevalence of statistical inference for the mean of the beta distribution has been rising in various fields of academic research, such as immunology, where proportions of rare cell-population subsets are analyzed. We address this inference problem using likelihood-based approaches to hypothesis testing, along with a relatively new statistical method called saddlepoint approximations. Through simulation work, we compare the performance of these statistical procedures and provide both the statistical and scientific communities with recommendations on best practices.
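    One concrete instance of the likelihood-based route is a likelihood-ratio test for the beta mean, obtained by reparameterizing the beta distribution in terms of a mean mu and a precision phi (alpha = mu*phi, beta = (1-mu)*phi). The sketch below is a minimal illustration of that approach only; it does not implement the saddlepoint procedure studied in the thesis, and the function names, starting values, and sample parameters are assumptions.

```python
# Likelihood-ratio test for H0: mean of a Beta(alpha, beta) sample equals mu0,
# using the mean/precision parameterization alpha = mu*phi, beta = (1-mu)*phi.
# Minimal sketch; names and starting values are illustrative.
import numpy as np
from scipy import optimize, stats

def _negloglik(mu, phi, x):
    return -np.sum(stats.beta.logpdf(x, mu * phi, (1.0 - mu) * phi))

def lrt_beta_mean(x, mu0):
    # Full model: maximize the likelihood over (mu, phi).
    full = optimize.minimize(
        lambda p: _negloglik(p[0], p[1], x),
        x0=[np.mean(x), 10.0],
        bounds=[(1e-4, 1 - 1e-4), (1e-4, 1e6)],
    )
    # Null model: mean fixed at mu0, maximize over the precision phi only.
    null = optimize.minimize_scalar(
        lambda phi: _negloglik(mu0, phi, x), bounds=(1e-4, 1e6), method="bounded"
    )
    stat = 2.0 * (null.fun - full.fun)       # likelihood-ratio statistic
    pval = stats.chi2.sf(stat, df=1)         # asymptotic chi-square reference
    return stat, pval

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sample = rng.beta(2.0, 6.0, size=50)     # true mean = 0.25
    print(lrt_beta_mean(sample, mu0=0.25))
```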

    Composite CDMA - A statistical mechanics analysis

    Code Division Multiple Access (CDMA) in which the spreading-code assignment to users contains a random element has recently become a cornerstone of CDMA research. The random element in the construction is particularly attractive because it provides robustness and flexibility in utilising multi-access channels without significant sacrifices in transmission power. Random codes are generated from some ensemble; here we consider the possibility of combining two standard paradigms, sparsely and densely spread codes, in a single composite code ensemble. The composite-code analysis includes a replica-symmetric calculation of performance in the large-system limit, and an investigation of finite systems through a composite belief-propagation algorithm. A variety of codes are examined, with a focus on the high multi-access-interference regime. In both the large-system limit and finite systems, we demonstrate scenarios in which the typical performance of the composite code exceeds that of sparse and dense codes at equivalent signal-to-noise ratio. Comment: 23 pages, 11 figures, Sigma Phi 2008 conference submission; submitted to J. Stat. Mech.
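    The sketch below sets up the basic randomly spread CDMA signal model that both the sparse and dense ensembles specialize, and estimates the bit error rate of a simple matched-filter detector by Monte Carlo. It is only meant to make the channel model concrete: the paper's replica analysis and composite belief-propagation decoder are not implemented here, and the user load, chip count, and noise level are assumed.

```python
# Randomly spread CDMA uplink: y = S b + sigma * n, with K users, N chips,
# dense i.i.d. +/- 1/sqrt(N) spreading. Matched-filter BER by Monte Carlo.
# Illustrative signal model only; parameters are assumed.
import numpy as np

def matched_filter_ber(K=20, N=32, snr_db=6.0, trials=20_000, seed=0):
    rng = np.random.default_rng(seed)
    sigma = 10 ** (-snr_db / 20)                             # unit-energy signatures
    errors = 0
    for _ in range(trials):
        S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)   # spreading codes
        b = rng.choice([-1.0, 1.0], size=K)                      # user bits
        y = S @ b + sigma * rng.standard_normal(N)               # received chips
        b_hat = np.sign(S.T @ y)                                  # matched filter
        errors += np.count_nonzero(b_hat != b)
    return errors / (trials * K)

if __name__ == "__main__":
    print("matched-filter BER ~", matched_filter_ber())
```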

    Tilted Euler characteristic densities for Central Limit random fields, with application to "bubbles"

    Local increases in the mean of a random field are detected (conservatively) by thresholding a field of test statistics at a level $u$ chosen to control the tail probability or $p$-value of its maximum. This $p$-value is approximated by the expected Euler characteristic (EC) of the excursion set of the test statistic field above $u$, denoted $\mathbb{E}\varphi(A_u)$. Under isotropy, one can use the expansion $\mathbb{E}\varphi(A_u)=\sum_k\mathcal{V}_k\rho_k(u)$, where $\mathcal{V}_k$ is an intrinsic volume of the parameter space and $\rho_k$ is an EC density of the field. EC densities are available for a number of processes, mainly those constructed from (multivariate) Gaussian fields via smooth functions. Using saddlepoint methods, we derive an expansion for $\rho_k(u)$ for fields which are only approximately Gaussian, but for which higher-order cumulants are available. We focus on linear combinations of $n$ independent non-Gaussian fields, whence a central limit theorem is in force. The threshold $u$ is allowed to grow with the sample size $n$, in which case our expression has a smaller relative asymptotic error than the Gaussian EC density. Several illustrative examples, including an application to "bubbles" data, accompany the theory. Comment: Published at http://dx.doi.org/10.1214/07-AOS549 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
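    For orientation, the Gaussian case that this expansion perturbs is the one in which the EC densities are known in closed form; assuming a unit-variance field with unit second spectral moment, the first few densities are (standard results from the Gaussian random-field literature, included here only as background):
\[
  \rho_0(u) = 1 - \Phi(u), \qquad
  \rho_1(u) = \frac{e^{-u^2/2}}{2\pi}, \qquad
  \rho_2(u) = \frac{u\, e^{-u^2/2}}{(2\pi)^{3/2}}, \qquad
  \rho_3(u) = \frac{(u^2 - 1)\, e^{-u^2/2}}{(2\pi)^{2}}.
\]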