Distributed Structure: Joint Expurgation for the Multiple-Access Channel
In this work we show how an improved lower bound on the error exponent of the
memoryless multiple-access (MAC) channel is attained via linear codes,
demonstrating that structure can be beneficial even in cases where
there is no capacity gain. We show that if the MAC channel is modulo-additive,
then any error probability, and hence any error exponent, achievable by a
linear code for the corresponding single-user channel, is also achievable for
the MAC channel. Specifically, for an alphabet of prime cardinality, where
linear codes achieve the best known exponents in the single-user setting and
the optimal exponent above the critical rate, this performance carries over to
the MAC setting. At least at low rates, where expurgation is needed, our
approach strictly improves performance over previous results, where expurgation
was used at most for one of the users. Even when the MAC channel is not
additive, it may be transformed into such a channel. While the transformation
is lossy, we show that the distributed structure gain in some "nearly additive"
cases outweighs the loss, and thus the error exponent can improve upon the best
known error exponent for these cases as well. Finally we apply a similar
approach to the Gaussian MAC channel. We obtain an improvement over the best
known achievable exponent, given by Gallager, for certain rate pairs, using
lattice codes which satisfy a nesting condition.

Comment: Submitted to the IEEE Transactions on Information Theory.
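The structural property the abstract exploits can be illustrated concretely: over a modulo-additive MAC, Y = (X1 + X2 + Z) mod p, if both users employ the same linear code over a prime alphabet, the sum of their codewords is again a codeword of that code, so the receiver effectively faces a single-user transmission of the same code. The following is a minimal sketch of that closure property only (the alphabet size, code dimensions, and generator matrix are illustrative choices, not the paper's construction):

```python
import itertools
import random

p = 5          # prime alphabet size (illustrative choice)
k, n = 2, 4    # message length and blocklength (arbitrary small values)

# Random generator matrix of a linear code over GF(p)
random.seed(0)
G = [[random.randrange(p) for _ in range(n)] for _ in range(k)]

def encode(msg):
    """Encode a length-k message into a length-n codeword over GF(p)."""
    return tuple(sum(m * g for m, g in zip(msg, col)) % p
                 for col in zip(*G))

codebook = {encode(m) for m in itertools.product(range(p), repeat=k)}

# Closure: the (mod-p) sum of any two codewords is itself a codeword, so
# over Y = (X1 + X2 + Z) mod p the receiver observes a noisy version of a
# single codeword from the same linear code.
for c1 in codebook:
    for c2 in codebook:
        s = tuple((a + b) % p for a, b in zip(c1, c2))
        assert s in codebook
print("closure under mod-p addition holds for", len(codebook), "codewords")
```

This closure is exactly why single-user performance of the linear code carries over: decoding over the MAC reduces to decoding a codeword of the same code over the corresponding single-user modulo-additive channel.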
Bounds for Gaussian Broadcast Channels with Finite Blocklength
We analyze the achievable performance of superposition coding in a two-receiver Gaussian broadcast channel (BC) with finite blocklength. To this end, we adapt the achievability bound on the maximal code size of a point-to-point (P2P) channel, introduced by Polyanskiy et al. in 2010, to the broadcast setting. Additionally, a new converse bound on the maximal code size of each user in a two-user Gaussian BC is introduced for a given probability of error.
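For context, the finite-blocklength P2P bounds being adapted here are commonly summarized by the normal approximation of Polyanskiy et al., log M* ≈ nC − sqrt(nV) Q⁻¹(ε), using the standard AWGN capacity and dispersion expressions. A minimal sketch (in nats; the function name and parameter values are ours, and this is the approximation, not the exact bounds from the paper):

```python
import math
from statistics import NormalDist

def normal_approx_rate(n, snr, eps):
    """Normal-approximation rate (nats/channel use) for the AWGN channel:
    C - sqrt(V/n) * Qinv(eps), with standard capacity and dispersion."""
    C = 0.5 * math.log(1 + snr)                      # AWGN capacity
    V = (snr * (snr + 2)) / (2 * (1 + snr) ** 2)     # AWGN dispersion
    q_inv = NormalDist().inv_cdf(1 - eps)            # Q^{-1}(eps)
    return C - math.sqrt(V / n) * q_inv

# The finite-blocklength penalty shrinks as n grows
print(normal_approx_rate(1000, 1.0, 1e-3))
print(normal_approx_rate(10000, 1.0, 1e-3))
```

At blocklength 1000, SNR 1, and error probability 10⁻³, the achievable rate is noticeably below the capacity 0.5 ln 2 ≈ 0.347 nats; the gap closes as O(1/√n).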
Random Coding Error Exponents for the Two-User Interference Channel
This paper derives lower bounds on the error exponents for the
two-user interference channel under the random coding regime for several
ensembles. Specifically, we first analyze the standard random coding ensemble,
where the codebooks are comprised of independently and identically distributed
(i.i.d.) codewords. For this ensemble, we focus on optimum decoding, which is
in contrast to other, suboptimal decoding rules that have been used in the
literature (e.g., joint typicality decoding, treating interference as noise,
etc.). The fact that the interfering signal is a codeword, rather than an
i.i.d. noise process, complicates the application of conventional techniques of
performance analysis of the optimum decoder. Also, unfortunately, these
conventional techniques result in loose bounds. Using analytical tools rooted
in statistical physics, as well as advanced union bounds, we derive
single-letter formulas for the random coding error exponents. We compare our
results with the best known lower bound on the error exponent, and show that
our exponents can be strictly better. Then, in the second part of this paper,
we consider more complicated coding ensembles, and find a lower bound on the
error exponent associated with the celebrated Han-Kobayashi (HK) random coding
ensemble, which is based on superposition coding.

Comment: Accepted to the IEEE Transactions on Information Theory.
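As single-user background for the random coding error exponents discussed above, Gallager's classical exponent E_r(R) = max over ρ in [0,1] of [E_0(ρ) − ρR] can be evaluated for a binary symmetric channel. A hedged sketch (this is the textbook single-user quantity, not the interference-channel exponents derived in the paper; the crossover probability and grid resolution are illustrative):

```python
import math

def e0_bsc(rho, p):
    """Gallager's E0 for a BSC(p) with uniform inputs, in nats."""
    s = p ** (1 / (1 + rho)) + (1 - p) ** (1 / (1 + rho))
    return rho * math.log(2) - (1 + rho) * math.log(s)

def random_coding_exponent(rate, p, grid=1000):
    """E_r(R) = max_{0 <= rho <= 1} [E0(rho) - rho * R], via grid search."""
    return max(e0_bsc(i / grid, p) - (i / grid) * rate
               for i in range(grid + 1))

p = 0.11
# BSC capacity in nats: ln 2 - H(p)
capacity = math.log(2) + p * math.log(p) + (1 - p) * math.log(1 - p)
print(random_coding_exponent(0.1, p))        # positive below capacity
print(random_coding_exponent(capacity, p))   # vanishes at capacity
```

The exponent is strictly positive below capacity and decreases to zero at capacity; the paper's contribution is exponents of this type for the two-user interference channel under optimum decoding, where the interfering codeword makes the analysis substantially harder.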