
    Optimal Power Allocation over Multiple Identical Gilbert-Elliott Channels

    We study the fundamental problem of power allocation over multiple Gilbert-Elliott communication channels. In a communication system with time-varying channel qualities, it is important to allocate the limited transmission power to channels that will be in a good state. However, doing so is challenging because channel states are usually unknown when the power allocation decision is made. In this paper, we derive an optimal power allocation policy that maximizes the expected discounted number of bits transmitted over an infinite time span by allocating transmission power only to those channels believed to be good in the coming time slot. We use the concept of belief to represent the probability that a channel will be good and derive an optimal power allocation policy that maps the channel belief to an allocation decision. Specifically, we first model the problem as a partially observable Markov decision process (POMDP) and analytically investigate the structure of the optimal policy. A simple threshold-based policy is then derived for a three-channel communication system. By formulating and solving a linear programming formulation of the power allocation problem, we further verify the derived structure of the optimal policy. Comment: 10 pages, 7 figures
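    The belief-based policy described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's derivation: the transition probabilities, the threshold value, and the equal power split among chosen channels are all assumptions made for the example.

    ```python
    # Illustrative sketch of a Gilbert-Elliott belief update and a
    # threshold-based power allocation rule. All parameter values are
    # hypothetical, not taken from the paper.

    def belief_update(b, p_gg, p_bg, observed_state=None):
        """One-step belief update.

        b: current probability that the channel is in the good state.
        p_gg: P(good -> good); p_bg: P(bad -> good).
        observed_state: 'good'/'bad' if the channel was used this slot
        (its state is then revealed); None if it evolves unobserved.
        """
        if observed_state == 'good':
            b = 1.0
        elif observed_state == 'bad':
            b = 0.0
        # Belief propagates through the two-state Markov chain.
        return b * p_gg + (1.0 - b) * p_bg

    def allocate_power(beliefs, threshold, total_power):
        """Split total_power equally among channels believed to be good."""
        chosen = [i for i, b in enumerate(beliefs) if b >= threshold]
        share = total_power / len(chosen) if chosen else 0.0
        return {i: share for i in chosen}
    ```

    An unobserved channel's belief drifts toward the chain's stationary probability of being good, which is what makes a simple threshold on the belief a natural allocation rule.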

    Spatial spectrum and energy efficiency of random cellular networks

    It is a great challenge to evaluate the network performance of cellular mobile communication systems. In this paper, we propose new spatial spectrum and energy efficiency models for Poisson-Voronoi tessellation (PVT) random cellular networks. To evaluate user access to the network, a Markov-chain-based wireless channel access model is first proposed for PVT random cellular networks. On that basis, the outage probability and blocking probability of PVT random cellular networks are derived, both of which can be computed numerically. Furthermore, taking into account the call arrival rate, the path loss exponent, and the base station (BS) density, spatial spectrum and energy efficiency models are proposed and analyzed for PVT random cellular networks. Numerical simulations are conducted to evaluate the network spectrum and energy efficiency of PVT random cellular networks. Comment: appears in IEEE Transactions on Communications, April, 201
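    As a minimal illustration of a Markov-chain access model with a numerically computable blocking probability, the sketch below evaluates the classical Erlang-B birth-death chain (M/M/c/c). This is a stand-in chosen for the example, not the paper's PVT-specific model, which accounts for the spatial geometry of the network.

    ```python
    # Illustrative only: blocking probability of a birth-death (M/M/c/c)
    # channel-access Markov chain, computed with the numerically stable
    # Erlang-B recursion B(0) = 1, B(c) = a*B(c-1) / (c + a*B(c-1)).

    def erlang_b(offered_load, num_channels):
        """Probability that an arriving call finds all channels busy."""
        b = 1.0
        for c in range(1, num_channels + 1):
            b = offered_load * b / (c + offered_load * b)
        return b
    ```

    The recursion avoids the overflow-prone factorials of the direct formula, which matters when the blocking probability must be evaluated over a wide range of loads and BS densities.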

    First-Passage Time and Large-Deviation Analysis for Erasure Channels with Memory

    This article considers the performance of digital communication systems transmitting messages over finite-state erasure channels with memory. Information bits are protected from channel erasures using error-correcting codes; successful receptions of codewords are acknowledged at the source through instantaneous feedback. The primary focus of this research is on delay-sensitive applications, codes with finite block lengths and, necessarily, non-vanishing probabilities of decoding failure. The contribution of this article is twofold. A methodology to compute the distribution of the time required to empty a buffer is introduced. Based on this distribution, the mean hitting time to an empty queue and delay-violation probabilities for specific thresholds can be computed explicitly. The proposed techniques apply to situations where the transmit buffer contains a predetermined number of information bits at the onset of the data transfer. Furthermore, as additional performance criteria, large deviation principles are obtained for the empirical mean service time and the average packet-transmission time associated with the communication process. This rigorous framework yields a pragmatic methodology to select code rate and block length for the communication unit as functions of the service requirements. Examples motivated by practical systems are provided to further illustrate the applicability of these techniques. Comment: To appear in IEEE Transactions on Information Theory
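    The buffer-emptying time described above can be estimated by simulation even where the analytical distribution is the real contribution. The sketch below is a Monte Carlo stand-in, not the paper's methodology: it drains a buffer of codewords over a two-state (Gilbert-type) erasure channel with instantaneous ACK/NACK feedback, and every parameter value is illustrative.

    ```python
    import random

    # Monte Carlo sketch: slots needed to empty a buffer of codewords over
    # a two-state erasure channel with memory. A codeword leaves the buffer
    # only when it is decoded (not erased); erasures trigger retransmission
    # via the instantaneous feedback. All parameters are hypothetical.

    def drain_time(num_codewords, p_e_good, p_e_bad, p_gg, p_bb, rng):
        state_good = True          # start in the good state
        t = 0
        remaining = num_codewords
        while remaining > 0:
            t += 1
            p_e = p_e_good if state_good else p_e_bad
            if rng.random() >= p_e:        # codeword decoded successfully
                remaining -= 1
            # Channel state evolves between slots.
            if state_good:
                state_good = rng.random() < p_gg
            else:
                state_good = rng.random() >= p_bb

        return t

    rng = random.Random(0)
    times = [drain_time(10, 0.05, 0.5, 0.95, 0.8, rng) for _ in range(2000)]
    mean_hit = sum(times) / len(times)
    ```

    From the empirical distribution of `times`, a delay-violation probability for a threshold `T` is simply the fraction of samples exceeding `T`, mirroring the quantities the analytical framework computes exactly.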

    Tiny Codes for Guaranteeable Delay

    Future 5G systems will need to support ultra-reliable low-latency communication scenarios. From a latency-reliability viewpoint, it is inefficient to rely on average utility-based system design. We therefore introduce the notion of guaranteeable delay, which is the average delay plus three standard deviations of the mean. We investigate the trade-off between guaranteeable delay and throughput for point-to-point wireless erasure links with unreliable and delayed feedback, bringing signal flow techniques to the area of coding. We use tiny codes, i.e. sliding-window coding with just 2 packets, and design three variations of selective-repeat ARQ protocols, building on the baseline scheme, i.e. uncoded ARQ, developed by Ausavapattanakun and Nosratinia: (i) Hybrid ARQ (HARQ) with soft combining at the receiver; (ii) cumulative-feedback-based ARQ without rate adaptation; and (iii) Coded ARQ with rate adaptation based on the cumulative feedback. Contrasting the performance of these protocols with uncoded ARQ, we demonstrate that HARQ performs only slightly better, cumulative-feedback-based ARQ does not provide significant throughput gains although it has better average delay, and Coded ARQ can provide gains of up to about 40% in terms of throughput. Coded ARQ also provides delay guarantees and is robust to various challenges such as imperfect and delayed feedback, burst erasures, and round-trip time fluctuations. This feature may be preferable for meeting the strict end-to-end latency and reliability requirements of future use cases of ultra-reliable low-latency communications in 5G, such as mission-critical communications and industrial control for critical control messaging. Comment: to appear in IEEE JSAC Special Issue on URLLC in Wireless Networks
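    The guaranteeable-delay metric defined above (mean plus three standard deviations) is straightforward to compute from measured per-packet delays. A minimal sketch, with made-up delay samples and the population standard deviation chosen purely for illustration:

    ```python
    import statistics

    # Guaranteeable delay as defined above: average delay plus three
    # standard deviations. The choice of population vs. sample standard
    # deviation here is an assumption of this sketch.

    def guaranteeable_delay(delays):
        mu = statistics.mean(delays)
        sigma = statistics.pstdev(delays)
        return mu + 3.0 * sigma

    delays = [4, 5, 5, 6, 8, 12]   # hypothetical per-packet delays (slots)
    ```

    Unlike the average delay alone, this metric penalizes protocols with heavy-tailed delay distributions, which is why it discriminates between schemes that look similar on average.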

    Joint source channel coding for progressive image transmission

    Recent wavelet-based image compression algorithms achieve the best performance to date with fully embedded bit streams. However, these embedded bit streams are very sensitive to channel noise, so protection from channel coding is necessary. The typical error-correcting capability of a channel code varies with channel conditions, and separate design therefore leads to performance degradation relative to what could be achieved through joint design. In joint source-channel coding schemes, the choice of source coding parameters may vary over time and channel conditions. In this research, we propose a general approach for the evaluation of such joint source-channel coding schemes. Instead of using the average peak signal-to-noise ratio (PSNR) or distortion as the performance metric, we represent the system performance by its average error-free source coding rate, which is further shown to be an equivalent metric in the optimization problems. The transmission of embedded image bit streams over memory channels and binary symmetric channels (BSCs) is investigated in this dissertation. Mathematical models are obtained in closed form by error sequence analysis (ESA). Not surprisingly, the models for BSCs are special cases of those for memory channels. It is also shown that existing techniques for performance evaluation on memory channels are special cases of this new approach. We further extend the idea to the unequal error protection (UEP) of embedded image sources over BSCs. The optimization problems are completely defined and solved. Compared to equal error protection (EEP) schemes, a performance gain of about 0.3 dB is achieved by UEP for typical BSCs; for some memory-channel conditions, the improvement can be up to 3 dB. Transmission of embedded image bit streams over channels with feedback is also investigated, based on the model for memory channels. Compared to the best possible performance achieved with feed-forward transmission, feedback yields about a 1.7 dB improvement.
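    The error-free source coding rate used as the metric above has a simple structure for embedded bit streams: decoding is useful only up to the first channel-code block that fails, so the expected usable rate weights each block's source bits by the probability that it and all earlier blocks decode. The sketch below is an illustration of that idea under assumed block sizes and independent per-block error probabilities, not the dissertation's ESA models.

    ```python
    # Sketch of the average error-free source coding rate for an embedded
    # bit stream: decoding stops at the first block error, so block i's
    # source bits count only if blocks 0..i all decode. Independence of
    # block errors is an assumption of this sketch.

    def expected_error_free_rate(source_bits_per_block, p_block_error):
        """source_bits_per_block[i]: source bits carried by block i;
        p_block_error[i]: probability that block i fails to decode."""
        rate = 0.0
        p_prefix_ok = 1.0
        for bits, p_err in zip(source_bits_per_block, p_block_error):
            p_prefix_ok *= (1.0 - p_err)   # P(all blocks up to i decode)
            rate += bits * p_prefix_ok
        return rate
    ```

    A UEP design in this framing amounts to choosing per-block channel-code rates (which trade `source_bits_per_block` against `p_block_error`) to maximize this expectation, which is why earlier, more important blocks merit stronger protection.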