
    Multiaccess Channels with State Known to One Encoder: Another Case of Degraded Message Sets

    We consider a two-user state-dependent multiaccess channel in which only one of the encoders is informed, non-causally, of the channel states. Two independent messages are transmitted: a common message transmitted by both the informed and uninformed encoders, and an individual message transmitted by only the uninformed encoder. We derive inner and outer bounds on the capacity region of this model in the discrete memoryless case as well as the Gaussian case. Further, we show that the bounds for the Gaussian case are tight in some special cases. Comment: 5 pages, Proc. of IEEE International Symposium on Information Theory, ISIT 2009, Seoul, Korea.
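
    As background, for the single-user channel with state known non-causally to the encoder, the capacity is given by the classical Gel'fand-Pinsker formula. The sketch below recalls it in standard notation, together with its Gaussian dirty-paper special case; it is not taken from this paper, whose multiaccess setting is more involved.

```latex
% Gel'fand--Pinsker capacity of a single-user channel p(y|x,s) with
% state S ~ p(s) known non-causally at the encoder (standard result,
% not the multiaccess model of the paper above):
C = \max_{p(u|s),\; x = f(u,s)} \bigl[ I(U;Y) - I(U;S) \bigr]
% Gaussian special case (Costa's "writing on dirty paper"): the state
% causes no rate loss, C = \tfrac{1}{2}\log_2\!\left(1 + \tfrac{P}{N}\right).
```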

    Properties of Noncommutative Renyi and Augustin Information

    The scaled Rényi information plays a significant role in evaluating the performance of information-processing tasks by virtue of its connection to error exponent analysis. In quantum information theory, there are three generalizations of the classical Rényi divergence (the Petz, sandwiched, and log-Euclidean versions) that possess meaningful operational interpretations. However, these scaled noncommutative Rényi informations are much less explored than their classical counterpart, and the lack of crucial properties hinders applications of these quantities to refined performance analysis. The goal of this paper is thus to analyze fundamental properties of the scaled Rényi information from a noncommutative measure-theoretic perspective. Firstly, we prove uniform equicontinuity for all three quantum versions of the Rényi information, which yields the joint continuity of these quantities in the orders and priors. Secondly, we establish concavity in the region $s \in (-1, 0)$ for both the Petz and the sandwiched versions. This resolves open questions raised by Holevo [IEEE Trans. Inf. Theory, 46(6):2256-2261, 2000, https://ieeexplore.ieee.org/document/868501/] and by Mosonyi and Ogawa [Commun. Math. Phys., 355(1):373-426, 2017, https://doi.org/10.1007/s00220-017-2928-4/]. As applications, we show that the strong converse exponent in classical-quantum channel coding satisfies a minimax identity. The established concavity is further employed to prove an entropic duality between classical data compression with quantum side information and classical-quantum channel coding, and a Fenchel duality in joint source-channel coding with quantum side information, in forthcoming papers.
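
    For reference, the sketch below recalls the classical Rényi divergence and two of the noncommutative generalizations named above (the Petz and sandwiched versions) in standard notation; the exact scaling and prior conventions used in the paper may differ.

```latex
% Classical R\'enyi divergence of order \alpha between distributions P and Q:
D_\alpha(P \| Q) = \frac{1}{\alpha - 1} \log \sum_x P(x)^{\alpha} Q(x)^{1-\alpha}

% Petz R\'enyi divergence between density operators \rho and \sigma:
\bar{D}_\alpha(\rho \| \sigma) =
  \frac{1}{\alpha - 1} \log \mathrm{Tr}\!\left[ \rho^{\alpha} \sigma^{1-\alpha} \right]

% Sandwiched R\'enyi divergence:
\widetilde{D}_\alpha(\rho \| \sigma) =
  \frac{1}{\alpha - 1} \log \mathrm{Tr}\!\left[
    \left( \sigma^{\frac{1-\alpha}{2\alpha}} \rho\,
           \sigma^{\frac{1-\alpha}{2\alpha}} \right)^{\alpha} \right]
```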

    Memory effects can make the transmission capability of a communication channel uncomputable

    Most communication channels are subject to noise. One of the goals of information theory is to add redundancy to the transmission so that the information is conveyed reliably and the amount of information transmitted through the channel is as large as possible. The maximum rate at which reliable transmission is possible is called the capacity. If the channel does not keep memory of its past, the capacity is given by a simple optimization problem and can be computed efficiently. The situation for channels with memory is less clear. Here we show that for channels with memory the capacity cannot be computed to within precision 1/5. Our result holds even if we consider one of the simplest families of such channels (information-stable finite-state machine channels), restrict the input and output of the channel to 4 bits and 1 bit respectively, and allow 6 bits of memory. Comment: improved presentation and clarified claim.
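
    To illustrate the memoryless case mentioned above, the capacity is the solution of a convex optimization that can be approximated efficiently, for example with the Blahut-Arimoto algorithm. The sketch below is an illustrative implementation (function and parameter names are my own, not from the paper) applied to a binary symmetric channel.

```python
import numpy as np

def blahut_arimoto(W, n_iter=200):
    """Approximate the capacity (in bits) of a memoryless channel.

    W[x, y] is the transition probability p(y|x); rows must sum to 1.
    """
    n_in = W.shape[0]
    p = np.full(n_in, 1.0 / n_in)            # input distribution, start uniform
    for _ in range(n_iter):
        q = p @ W                             # induced output distribution q(y)
        # relative entropy D(W(.|x) || q) for each input symbol x, in bits
        d = np.sum(np.where(W > 0, W * np.log2(W / q), 0.0), axis=1)
        p = p * np.exp2(d)                    # multiplicative update
        p /= p.sum()
    q = p @ W
    d = np.sum(np.where(W > 0, W * np.log2(W / q), 0.0), axis=1)
    return float(p @ d)                       # mutual information at the final p

# Binary symmetric channel with crossover 0.11: capacity = 1 - h(0.11) ~ 0.5
bsc = np.array([[0.89, 0.11],
                [0.11, 0.89]])
print(round(blahut_arimoto(bsc), 3))
```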

    Re-proving Channel Polarization Theorems: An Extremality and Robustness Analysis

    The general subject considered in this thesis is a recently discovered coding technique, polar coding, which is used to construct a class of error-correction codes with unique properties. In his ground-breaking work, Arıkan proved that this class of codes, called polar codes, achieves the symmetric capacity (the mutual information evaluated at the uniform input distribution) of any stationary binary discrete memoryless channel, with low-complexity encoders and decoders requiring on the order of $O(N \log N)$ operations in the block length $N$. This discovery settled the long-standing open problem, left by Shannon, of finding low-complexity codes that achieve the channel capacity. Polar coding settled an open problem in information theory, yet it opened plenty of challenging problems that need to be addressed. A significant part of this thesis is dedicated to advancing the knowledge about this technique in two directions. The first provides a better understanding of polar coding by generalizing some of the existing results and discussing their implications, and the second studies the robustness of the theory over communication models that introduce various forms of uncertainty or variation into the probabilistic model of the channel. Comment: Preview of my PhD thesis, EPFL, Lausanne, 2014. For the full version, see http://people.epfl.ch/mine.alsan/publication
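
    The polarization phenomenon is easiest to see on the binary erasure channel, where one step of the polar transform turns a BEC with erasure probability z into two BECs with erasure probabilities 2z - z^2 and z^2. The illustrative sketch below (not taken from the thesis) iterates this standard recursion and counts how many synthesized channels become nearly perfect or nearly useless.

```python
import numpy as np

def polarize_bec(z0=0.5, levels=10):
    """Erasure probabilities of the 2**levels channels synthesized from
    a BEC(z0) by repeated application of the basic polar transform."""
    z = np.array([z0])
    for _ in range(levels):
        z = np.concatenate([2 * z - z**2, z**2])  # "minus" and "plus" channels
    return z

z = polarize_bec(0.5, levels=10)             # 1024 synthesized channels
good = np.mean(z < 1e-3)                      # almost noiseless
bad = np.mean(z > 1 - 1e-3)                   # almost pure noise
print(f"near-perfect: {good:.2f}, near-useless: {bad:.2f}")
# As the number of levels grows, the fraction of near-perfect channels
# approaches the symmetric capacity 1 - z0 = 0.5 of the original BEC.
```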

    Reliable Physical Layer Network Coding

    When two or more users in a wireless network transmit simultaneously, their electromagnetic signals are linearly superimposed on the channel. As a result, a receiver that is interested in one of these signals sees the others as unwanted interference. This property of the wireless medium is typically viewed as a hindrance to reliable communication over a network. However, using a recently developed coding strategy, interference can in fact be harnessed for network coding. In a wired network, (linear) network coding refers to each intermediate node taking its received packets, computing a linear combination over a finite field, and forwarding the outcome towards the destinations. Then, given an appropriate set of linear combinations, a destination can solve for its desired packets. For certain topologies, this strategy can attain significantly higher throughputs than routing-based strategies. Reliable physical-layer network coding takes this idea one step further: using judiciously chosen linear error-correcting codes, intermediate nodes in a wireless network can directly recover linear combinations of the packets from the observed noisy superpositions of transmitted signals. Starting with some simple examples, this survey explores the core ideas behind this new technique and the possibilities it offers for communication over interference-limited wireless networks. Comment: 19 pages, 14 figures, survey paper to appear in Proceedings of the IEEE.
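
    The wired network-coding idea described above can be shown in a few lines: a relay forwards the XOR (a linear combination over GF(2)) of two packets, and each destination that already knows one packet solves for the other. The sketch below is a toy butterfly-style example with illustrative names; it is not the physical-layer scheme surveyed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
packet_a = rng.integers(0, 2, size=8, dtype=np.uint8)   # packet from source A
packet_b = rng.integers(0, 2, size=8, dtype=np.uint8)   # packet from source B

# The relay computes a linear combination over GF(2) and broadcasts it.
relay_out = packet_a ^ packet_b

# Destination 1 already has packet_a (e.g. overheard directly) and solves for B.
recovered_b = relay_out ^ packet_a
# Destination 2 already has packet_b and solves for A.
recovered_a = relay_out ^ packet_b

assert np.array_equal(recovered_a, packet_a)
assert np.array_equal(recovered_b, packet_b)
print("both destinations recovered their desired packets")
```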

    Bit-Interleaved Coded Modulation Revisited: A Mismatched Decoding Perspective

    We revisit the information-theoretic analysis of bit-interleaved coded modulation (BICM) by modeling the BICM decoder as a mismatched decoder. The mismatched-decoding model is well defined for finite, yet arbitrary, block lengths, and naturally captures the channel memory among the bits belonging to the same symbol. We give two independent proofs of the achievability of the BICM capacity calculated by Caire et al., where BICM was modeled as a set of independent parallel binary-input channels whose output is the bitwise log-likelihood ratio. Our first achievability proof uses typical sequences and shows that, due to the random coding construction, the interleaver is not required. The second proof is based on random-coding error exponents with mismatched decoding, where the largest achievable rate is the generalized mutual information. We show that the generalized mutual information of the mismatched decoder coincides with the infinite-interleaver BICM capacity. We also show that the error exponent, and hence the cutoff rate, of the BICM mismatched decoder is upper-bounded by that of coded modulation and may thus be lower than in the infinite-interleaver model. We also consider the mutual information appearing in the analysis of iterative decoding of BICM with EXIT charts. We show that the corresponding symbol metric has knowledge of the transmitted symbol and that the EXIT mutual information admits a representation as a pseudo-generalized mutual information, which is in general not achievable. A different symbol decoding metric, for which the extrinsic side information refers to the hypothesized symbol, induces a generalized mutual information lower than the coded modulation capacity. Comment: submitted to the IEEE Transactions on Information Theory. Conference version in 2008 IEEE International Symposium on Information Theory, Toronto, Canada, July 2008.
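
    For reference, the generalized mutual information of a mismatched decoder with symbol metric q(x, y) is commonly written as below; this is the standard definition from the mismatched-decoding literature, stated in generic notation rather than necessarily that of the paper.

```latex
% Generalized mutual information (GMI) of a decoder using metric q(x,y)
% over a channel p(y|x) with input distribution p(x):
I^{\mathrm{gmi}} \;=\; \sup_{s > 0}\;
  \mathbb{E}\!\left[ \log \frac{q(X,Y)^{s}}
                          {\sum_{x'} p(x')\, q(x',Y)^{s}} \right]
% The expectation is over the joint law p(x)\,p(y|x).  With the matched
% metric q(x,y) = p(y|x), the GMI reduces to the mutual information I(X;Y).
```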

    Information Theory based on Non-additive Information Content

    We generalize Shannon's information theory in a nonadditive way by focusing on the source coding theorem. The nonadditive information content we adopt is consistent with the concept of the form-invariance structure of the nonextensive entropy. Some general properties of the nonadditive information entropy are studied; in addition, the relation between the nonadditivity parameter $q$ and the codeword length is pointed out. Comment: 9 pages, no figures, RevTeX, accepted for publication in Phys. Rev. E (an error in the proof of Theorem 1 was corrected; typos corrected).
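
    As background on the nonextensive framework referred to above, the Tsallis entropy and its pseudo-additivity rule for independent systems are recalled below. These are the standard expressions; the paper's specific nonadditive information content may be normalized differently.

```latex
% Tsallis (nonextensive) entropy of a distribution \{p_i\} with index q:
S_q = \frac{1 - \sum_i p_i^{\,q}}{q - 1}, \qquad
\lim_{q \to 1} S_q = -\sum_i p_i \ln p_i
% Pseudo-additivity (nonadditivity) for independent systems A and B:
S_q(A, B) = S_q(A) + S_q(B) + (1 - q)\, S_q(A)\, S_q(B)
```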

    Low-latency Ultra Reliable 5G Communications: Finite-Blocklength Bounds and Coding Schemes

    Future autonomous systems require wireless connectivity able to support extremely stringent requirements on both latency and reliability. In this paper, we leverage recent developments in the field of finite-blocklength information theory to illustrate how to optimally design wireless systems in the presence of such stringent constraints. Focusing on a multi-antenna Rayleigh block-fading channel, we obtain bounds on the maximum number of bits that can be transmitted within given bandwidth, latency, and reliability constraints, using an orthogonal frequency-division multiplexing system similar to LTE. These bounds unveil the fundamental interplay between latency, bandwidth, rate, and reliability. Furthermore, they suggest how to optimally use the available spatial and frequency diversity. Finally, we use our bounds to benchmark the performance of an actual coding scheme involving the transmission of short packets.
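
    As a simpler illustration of finite-blocklength analysis (not the multi-antenna fading setup analyzed in the paper), the normal approximation of Polyanskiy, Poor, and Verdu for a binary symmetric channel can be evaluated in a few lines. Function names and parameter values below are illustrative.

```python
import numpy as np
from scipy.stats import norm

def bsc_normal_approx(n, eps, p):
    """Approximate maximum number of information bits for a BSC(p) at
    blocklength n and block error probability eps (normal approximation)."""
    h = lambda x: -x * np.log2(x) - (1 - x) * np.log2(1 - x)
    C = 1 - h(p)                                    # capacity, bits per use
    V = p * (1 - p) * (np.log2((1 - p) / p)) ** 2   # channel dispersion
    return n * C - np.sqrt(n * V) * norm.isf(eps) + 0.5 * np.log2(n)

# Roughly how many bits fit in 500 channel uses at error probability 1e-3?
print(round(bsc_normal_approx(n=500, eps=1e-3, p=0.11), 1))
```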

    On the Performance of Short Block Codes over Finite-State Channels in the Rare-Transition Regime

    As the mobile application landscape expands, wireless networks are tasked with supporting different connection profiles, including real-time traffic and delay-sensitive communications. Among the many ensuing engineering challenges is the need to better understand the fundamental limits of forward error correction in non-asymptotic regimes. This article characterizes the performance of random block codes over finite-state channels and evaluates their queueing performance under maximum-likelihood decoding. In particular, classical results from information theory are revisited in the context of channels with rare transitions, and bounds on the probabilities of decoding failure are derived for random codes. This creates an analysis framework in which channel dependencies within and across codewords are preserved. These results are subsequently integrated into a queueing problem formulation. For instance, it is shown that, for random coding on the Gilbert-Elliott channel, a performance analysis based on upper bounds on the error probability provides very good estimates of system performance and of the optimum code parameters. Overall, this study offers new insights about the impact of channel correlation on the performance of delay-aware, point-to-point communication links. It also provides novel guidelines on how to select code rates and block lengths for real-time traffic over wireless communication infrastructures.
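
    The Gilbert-Elliott channel mentioned above is a two-state Markov chain (a "good" and a "bad" state) with a different crossover probability in each state; the rare-transition regime corresponds to small probabilities of switching between states. The sketch below simulates one such channel with illustrative parameter values chosen by me, not taken from the article.

```python
import numpy as np

def gilbert_elliott(bits, p_gb=0.01, p_bg=0.1, eps_good=0.001, eps_bad=0.1,
                    seed=0):
    """Pass input bits through a Gilbert-Elliott channel.

    p_gb / p_bg: probabilities of switching good->bad and bad->good
    (small values correspond to the rare-transition regime);
    eps_good / eps_bad: crossover probabilities within each state.
    """
    rng = np.random.default_rng(seed)
    out = np.empty_like(bits)
    state_bad = False
    for i, b in enumerate(bits):
        eps = eps_bad if state_bad else eps_good
        out[i] = b ^ int(rng.random() < eps)       # flip bit with prob eps
        # Markov state transition for the next channel use
        if state_bad:
            state_bad = rng.random() >= p_bg
        else:
            state_bad = rng.random() < p_gb
    return out

x = np.zeros(10000, dtype=np.uint8)
y = gilbert_elliott(x)
print("empirical bit error rate:", y.mean())
```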