
    Capacity of Coded Index Modulation

    We consider the special case of index coding over the Gaussian broadcast channel where each receiver has prior knowledge of a subset of the messages at the transmitter and demands all the messages from the source. We propose a concatenated coding scheme for this problem, using an index code for the Gaussian channel as an inner code/modulation to exploit side information at the receivers, and an outer code to attain coding gain against the channel noise. We derive the capacity region of this scheme by viewing the resulting channel as a multiple-access channel with many receivers, and relate it to the 'side information gain' -- a measure of the advantage of a code in utilizing receiver side information -- of the inner index code/modulation. We demonstrate the utility of the proposed architecture by simulating the performance of an index code/modulation concatenated with an off-the-shelf convolutional code through bit-interleaved coded modulation.
    Comment: To appear in Proc. IEEE Int. Symp. Inf. Theory (ISIT) 2015, Hong Kong, Jun. 2015. 5 pages, 4 figures.
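    The 'side information gain' is easiest to see on a toy constellation. The sketch below is my own illustration, not the paper's construction: it maps two 1-bit messages onto 4-PAM with a natural labeling and reports how a receiver's effective minimum distance grows once one message is known as side information. With this naive labeling only knowledge of w2 helps, which is exactly why a well-designed index code/modulation balances the gain across messages.

```python
import itertools
import math

# Toy "index modulation": two 1-bit messages (w1, w2) are jointly mapped
# to one 4-PAM point.  The labeling below is purely illustrative.
points = {(0, 0): -3.0, (0, 1): -1.0, (1, 0): +1.0, (1, 1): +3.0}

def min_sq_dist(labels):
    """Minimum squared Euclidean distance among the listed labels."""
    return min((points[a] - points[b]) ** 2
               for a, b in itertools.combinations(labels, 2))

d0 = min_sq_dist(list(points))  # receiver with no side information

# A receiver that already knows one message only has to distinguish the
# constellation points consistent with that knowledge, so its effective
# minimum distance can grow; this distance gain per bit of side
# information is what the abstract calls the side information gain.
for known_index, name in [(0, "w1"), (1, "w2")]:
    for value in (0, 1):
        subset = [lab for lab in points if lab[known_index] == value]
        d1 = min_sq_dist(subset)
        gain_db = 10 * math.log10(d1 / d0)
        print(f"knowing {name}={value}: min squared distance "
              f"{d0:.0f} -> {d1:.0f} ({gain_db:.1f} dB)")
```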

    Myths and Realities of Rateless Coding

    Fixed-rate and rateless channel codes are generally treated separately in the research literature, and so a novice in the field inevitably gets the impression that these channel codes are unrelated. By contrast, in this treatise we endeavor to further develop the link between traditional fixed-rate codes and the recently developed rateless codes by delving into their underlying attributes. This joint treatment is beneficial for two principal reasons. First, it facilitates the task of researchers and practitioners who are familiar with fixed-rate codes and would like to jump-start their understanding of the recently developed concepts in the rateless reality. Second, it provides grounds for extending the use of the well-understood code design tools — originally contrived for fixed-rate codes — to the realm of rateless codes. Indeed, these versatile tools have proved vital in the design of diverse fixed-rate-coded communication systems, and our hope is that they will further elucidate the performance ramifications of rateless coded schemes.
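    To make the fixed-rate versus rateless distinction concrete, here is a minimal fountain-style encoder in the spirit of LT codes. It is my own toy, not taken from the treatise, and it uses a naive uniform degree distribution rather than the robust soliton distribution used in practice: the point is only that a rateless encoder can keep emitting coded symbols for as long as the channel requires, whereas a fixed-rate encoder outputs a predetermined number of them.

```python
import random

def lt_encode(source_blocks, num_coded, seed=0):
    """Toy LT-style (fountain) encoder: every coded symbol is the XOR of a
    random subset of the source blocks, so symbols can be generated
    indefinitely -- the essence of a rateless code.  The matching
    belief-propagation decoder is omitted here."""
    rng = random.Random(seed)
    k = len(source_blocks)
    coded = []
    for _ in range(num_coded):
        degree = rng.randint(1, k)                  # naive degree choice
        neighbours = rng.sample(range(k), degree)   # which blocks to combine
        symbol = 0
        for i in neighbours:
            symbol ^= source_blocks[i]
        coded.append((tuple(neighbours), symbol))
    return coded

# Four source blocks; ask for as many coded symbols as the channel needs.
print(lt_encode([0x12, 0x34, 0x56, 0x78], num_coded=6))
```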

    Fundamental limitations on communication channels with noisy feedback: information flow, capacity and bounds

    Since the success of C. Shannon in 1948 in obtaining the capacity (i.e., the maximal achievable transmission rate at which the message can be recovered with arbitrarily small probability of error) of non-feedback point-to-point communication channels, information theory has proved to be a powerful tool for deriving fundamental limitations of communication systems. During the last decade, motivated by the emergence of networked systems, information theorists have turned much of their attention to communication channels with feedback (through another channel from receiver to transmitter). Under the assumption that the feedback channel is noiseless, a large body of notable results has been derived, although much work still needs to be done. However, when this ideal assumption is removed, i.e., the feedback channel is noisy, only a few valuable results can be found in the literature and many challenging problems remain open. This thesis aims to address some of these long-standing noisy feedback problems, with a focus on the channel capacity. First, we analyze the fundamental information flow in noisy feedback channels. We introduce a new notion, the residual directed information, in order to characterize the noisy feedback channel capacity, for which the standard directed information cannot be used. As an illustration, finite-alphabet noisy feedback channels are studied in detail. Next, we provide an information-flow decomposition equality which serves as a foundation for the other novel results in this thesis. With the information-flow decomposition in hand, we then investigate time-varying Gaussian channels with additive Gaussian noise feedback. Following the notable Cover-Pombra results of 1989, we define the n-block noisy feedback capacity and derive a pair of n-block upper and lower bounds on it. These bounds can be obtained by efficiently solving convex optimization problems. Under the assumption that the additive Gaussian noises are stationary, we show that the limits of these n-block bounds can be characterized in a power-spectral optimization form. In addition, two computable lower bounds on the Shannon capacity are derived. Next, we consider a class of channels for which feedback cannot increase the capacity, so that the noisy feedback capacity equals the non-feedback capacity. We derive a necessary condition (characterized by the directed information) for capacity-achieving channel codes. The condition implies that using noisy feedback is detrimental to the achievable rate, i.e., the capacity cannot be achieved by using noisy feedback. Finally, we introduce a new framework for communication channels with noisy feedback in which the feedback information received by the transmitter is also available to the decoder after some finite delay. We investigate the capacity and linear coding schemes for this extended class of noisy feedback channels. To summarize, this thesis first provides a foundation (the information flow analysis) for analyzing communication channels with noisy feedback; in light of this analysis, it then presents a sequence of novel results, e.g., a channel coding theorem and capacity bounds, which constitute a significant step toward resolving the long-standing noisy feedback problem.
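    For reference, the "standard directed information" that the abstract contrasts with its new residual notion is Massey's quantity; the residual directed information itself is defined in the thesis and is not reproduced here.

```latex
% Massey's directed information from the input sequence X^n to the output
% sequence Y^n (the standard notion referred to in the abstract):
\[
  I(X^n \to Y^n) \;=\; \sum_{i=1}^{n} I\!\left(X^i ; Y_i \,\middle|\, Y^{i-1}\right),
\]
% where $X^i = (X_1,\dots,X_i)$ and $Y^{i-1} = (Y_1,\dots,Y_{i-1})$.
```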

    Nested turbo codes for the Costa problem

    Driven by applications in data hiding, MIMO broadcast channel coding, precoding for interference cancellation, and transmitter cooperation in wireless networks, Costa coding has lately become a very active research area. In this paper, we first offer code design guidelines in terms of source-channel coding for algebraic binning. We then address practical code design based on nested lattice codes and propose nested turbo codes using turbo-like trellis-coded quantization (TCQ) for source coding and turbo trellis-coded modulation (TTCM) for channel coding. Compared to TCQ, turbo-like TCQ offers structural similarity between the source and channel coding components, leading to more efficient nesting with TTCM and better source coding performance. Owing to the difference in effective dimensionality between turbo-like TCQ and TTCM, there is a performance tradeoff between these two components when they are nested together: the performance of turbo-like TCQ worsens as the TTCM code becomes stronger, and vice versa. Optimizing this tradeoff leads to our code design, which outperforms existing TCQ/TCM and TCQ/TTCM constructions and exhibits gaps of 0.94, 1.42 and 2.65 dB to the Costa capacity at 2.0, 1.0, and 0.5 bits/sample, respectively.
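    For context, the "Costa capacity" against which the reported 0.94-2.65 dB gaps are measured is the dirty-paper coding result: interference known non-causally at the transmitter costs nothing in capacity.

```latex
% Costa's "writing on dirty paper" result: for Y = X + S + Z with input
% power constraint E[X^2] <= P, Gaussian noise Z ~ N(0, N), and Gaussian
% interference S known non-causally at the encoder (but not at the
% decoder), the capacity is unaffected by S:
\[
  C_{\text{Costa}} \;=\; \tfrac{1}{2}\log_2\!\left(1 + \frac{P}{N}\right)
  \quad \text{bits per channel use},
\]
% i.e., the same as that of the interference-free AWGN channel.
```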