3,110 research outputs found

    Improved error bounds for the erasure/list scheme: the binary and spherical cases

    Full text link
    We derive improved bounds on the error and erasure rate for spherical codes and for binary linear codes under Forney's erasure/list decoding scheme and prove some related results. Comment: 18 pages, 3 figures. Submitted to IEEE Transactions on Information Theory in May 2001, will appear in Oct. 2004 (tentative).
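    As a reminder of the decoding rule referenced here, the display below states the standard threshold form of Forney's (1968) erasure/list test; the notation is ours, not taken from this abstract.

```latex
% Forney's erasure/list threshold test (standard form; notation assumed here, not the paper's).
\[
  \hat{m} = m
  \quad\Longleftrightarrow\quad
  P(\mathbf{y} \mid \mathbf{x}_m) \;\ge\; e^{nT} \sum_{m' \neq m} P(\mathbf{y} \mid \mathbf{x}_{m'}).
\]
% If no message passes the test, an erasure is declared. With threshold T >= 0 at most one
% message can qualify (erasure regime); with T < 0 several may qualify, giving a list decoder.
```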

    Distributed Structure: Joint Expurgation for the Multiple-Access Channel

    Full text link
    In this work we show how an improved lower bound to the error exponent of the memoryless multiple-access (MAC) channel is attained via the use of linear codes, thus demonstrating that structure can be beneficial even in cases where there is no capacity gain. We show that if the MAC channel is modulo-additive, then any error probability, and hence any error exponent, achievable by a linear code for the corresponding single-user channel, is also achievable for the MAC channel. Specifically, for an alphabet of prime cardinality, where linear codes achieve the best known exponents in the single-user setting and the optimal exponent above the critical rate, this performance carries over to the MAC setting. At least at low rates, where expurgation is needed, our approach strictly improves performance over previous results, where expurgation was used at most for one of the users. Even when the MAC channel is not additive, it may be transformed into such a channel. While the transformation is lossy, we show that the distributed structure gain in some "nearly additive" cases outweighs the loss, and thus the error exponent can improve upon the best known error exponent for these cases as well. Finally, we apply a similar approach to the Gaussian MAC channel. We obtain an improvement over the best known achievable exponent, given by Gallager, for certain rate pairs, using lattice codes which satisfy a nesting condition. Comment: Submitted to the IEEE Trans. Info. Theory.
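    To make the structural ingredient concrete, the toy sketch below (illustrative alphabet, generator matrix, and noise level; not the paper's construction) checks the closure property behind the argument: over a modulo-additive MAC, if both users draw codewords from the same linear code, the superposition of their codewords is again a codeword, so the receiver effectively faces a single-user decoding problem for the sum.

```python
import numpy as np

# Toy illustration (assumed parameters) of the structural fact: over a modulo-additive MAC
#   Y = X1 + X2 + Z (mod q),
# if both users use the SAME linear code C, then X1 + X2 is again a codeword of C, so the
# receiver sees an ordinary single-user decoding problem for the sum of the messages.

q = 2                                   # prime alphabet size (illustrative)
G = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0]]) % q        # generator of a toy [4,2] linear code over GF(2)

def encode(msg):
    return (msg @ G) % q

rng = np.random.default_rng(0)
m1 = rng.integers(0, q, size=2)         # user 1's message
m2 = rng.integers(0, q, size=2)         # user 2's message
x1, x2 = encode(m1), encode(m2)

# Closure under addition: the sum of two codewords encodes the sum of the messages.
assert np.array_equal((x1 + x2) % q, encode((m1 + m2) % q))

z = (rng.random(4) < 0.1).astype(int)   # BSC-like additive noise
y = (x1 + x2 + z) % q                   # channel output

# Single-user-style minimum-Hamming-distance decoding of the SUM message (exhaustive search).
msgs = np.array([[a, b] for a in range(q) for b in range(q)])
best = min(msgs, key=lambda m: np.sum((y - encode(m)) % q != 0))
print("decoded sum message:", best, " true sum:", (m1 + m2) % q)
```

    Recovering the individual messages from the decoded sum is a separate matter; the sketch only illustrates why linearity lets single-user error-probability guarantees carry over to the MAC.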

    Constructive spherical codes on layers of flat tori

    Full text link
    A new class of spherical codes is constructed by selecting a finite subset of flat tori from a foliation of the unit sphere S^{2L-1} of R^{2L} and designing a structured codebook on each torus layer. The resulting spherical code can be the image of a lattice restricted to a specific hyperbox in R^L in each layer. Group structure and homogeneity, useful for efficient storage and decoding, are inherited from the underlying lattice codebook. A systematic method for constructing such codes is presented and, as an example, the Leech lattice is used to construct a spherical code in R^{48}. Upper and lower bounds on the performance, the asymptotic packing density, and a decoding method are derived. Comment: 9 pages, 5 figures, submitted to IEEE Transactions on Information Theory.
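    The sketch below illustrates the basic layer construction with a standard flat-torus parametrization of the unit sphere; the layer vector c, the cubic lattice, and the spacing are illustrative choices, not the paper's design.

```python
import numpy as np
from itertools import product

# Illustrative sketch of one torus layer. A vector c = (c_1,...,c_L) with sum c_i^2 = 1
# defines a flat torus inside the unit sphere S^{2L-1} via
#   Phi_c(u) = (c_1 cos(u_1/c_1), c_1 sin(u_1/c_1), ..., c_L cos(u_L/c_L), c_L sin(u_L/c_L)),
# and a codebook on that layer is the image of a lattice restricted to the hyperbox
# [0, 2*pi*c_1) x ... x [0, 2*pi*c_L).

def torus_map(u, c):
    out = np.empty(2 * len(c))
    out[0::2] = c * np.cos(u / c)
    out[1::2] = c * np.sin(u / c)
    return out

L = 2
c = np.array([0.6, 0.8])               # one torus layer: 0.6^2 + 0.8^2 = 1
step = 0.5                             # spacing of a toy cubic lattice (illustrative)
grid = [np.arange(0.0, 2 * np.pi * ci, step) for ci in c]
code = np.array([torus_map(np.array(u), c) for u in product(*grid)])

# Every codeword lies on the unit sphere of R^{2L}.
assert np.allclose(np.linalg.norm(code, axis=1), 1.0)

# Minimum Euclidean distance of this single-layer spherical code.
d = np.linalg.norm(code[:, None, :] - code[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
print("codewords:", len(code), " min distance:", d.min())
```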

    Gaussian Multiple and Random Access in the Finite Blocklength Regime

    Get PDF
    This paper presents finite-blocklength achievability bounds for the Gaussian multiple access channel (MAC) and random access channel (RAC) under average-error and maximal-power constraints. Using random codewords uniformly distributed on a sphere and a maximum likelihood decoder, the derived MAC bound on each transmitter’s rate matches the MolavianJazi-Laneman bound (2015) in its first- and second-order terms, improving the remaining terms to ½ log n/n + O(1/n) bits per channel use. The result then extends to a RAC model in which neither the encoders nor the decoder knows which of K possible transmitters are active. In the proposed rateless coding strategy, decoding occurs at a time n_t that depends on the decoder’s estimate t of the number of active transmitters k. Single-bit feedback from the decoder to all encoders at each potential decoding time n_i, i ≤ t, informs the encoders when to stop transmitting. For this RAC model, the proposed code achieves the same first-, second-, and third-order performance as the best known result for the Gaussian MAC in operation.
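    As a minimal sketch of the random-coding ingredients named above, the snippet below draws codewords uniformly on a power sphere and decodes by maximum likelihood for a single transmitter; blocklength, power, and rate values are assumed for illustration and are not the paper's.

```python
import numpy as np

# Sketch (assumed parameters): codewords uniform on the sphere of radius sqrt(n*P), and
# ML decoding, which for Gaussian noise and equal-power codewords reduces to
# nearest-neighbor (maximum-correlation) decoding.

rng = np.random.default_rng(1)
n, P, N0 = 200, 1.0, 1.0          # blocklength, power, noise variance (illustrative)
M = 2 ** 8                        # number of messages (illustrative)

# Uniform on the power sphere: normalize i.i.d. Gaussian vectors.
X = rng.standard_normal((M, n))
X *= np.sqrt(n * P) / np.linalg.norm(X, axis=1, keepdims=True)

def transmit(m):
    return X[m] + np.sqrt(N0) * rng.standard_normal(n)

def ml_decode(y):
    # Equal-norm codewords + Gaussian noise: ML = nearest neighbor = maximum correlation.
    return int(np.argmax(X @ y))

errors = sum(ml_decode(transmit(m)) != m for m in rng.integers(0, M, size=200))
print("empirical error rate:", errors / 200)
```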

    The Dispersion of Nearest-Neighbor Decoding for Additive Non-Gaussian Channels

    Get PDF
    We study the second-order asymptotics of information transmission using random Gaussian codebooks and nearest neighbor (NN) decoding over a power-limited stationary memoryless additive non-Gaussian noise channel. We show that the dispersion term depends on the non-Gaussian noise only through its second and fourth moments, thus complementing the capacity result (Lapidoth, 1996), which depends only on the second moment. Furthermore, we characterize the second-order asymptotics of point-to-point codes over K-sender interference networks with non-Gaussian additive noise. Specifically, we assume that each user's codebook is Gaussian and that NN decoding is employed, i.e., that interference from the K-1 unintended users (Gaussian interfering signals) is treated as noise at each decoder. We show that while the first-order term in the asymptotic expansion of the maximum number of messages depends on the power of the interfering codewords only through their sum, this does not hold for the second-order term. Comment: 12 pages, 3 figures, IEEE Transactions on Information Theory.
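    The short sketch below illustrates the decoding rule studied here, with assumed parameters: each receiver uses nearest-neighbor decoding against a random Gaussian codebook, treating both the non-Gaussian additive noise and the other senders' Gaussian signals as noise.

```python
import numpy as np

# Sketch (illustrative parameters): nearest-neighbor decoding with a random Gaussian codebook
# over non-Gaussian (Laplacian) noise, with interference from other senders treated as noise.

rng = np.random.default_rng(2)
n, P, K = 400, 1.0, 3              # blocklength, per-user power, number of senders (assumed)
M = 2 ** 6                         # messages per user (assumed)

books = [np.sqrt(P) * rng.standard_normal((M, n)) for _ in range(K)]   # i.i.d. Gaussian codebooks

def nn_decode(y, codebook):
    # Nearest-neighbor rule: pick the codeword closest to y in Euclidean distance,
    # regardless of the true noise statistics.
    return int(np.argmin(np.linalg.norm(codebook - y, axis=1)))

msgs = rng.integers(0, M, size=K)
noise = rng.laplace(scale=1 / np.sqrt(2), size=n)       # non-Gaussian noise with unit variance
y = sum(books[k][msgs[k]] for k in range(K)) + noise     # superposition of all senders plus noise

decoded = nn_decode(y, books[0])                          # decoder of user 0 treats the rest as noise
print("user 0 decoded correctly:", decoded == msgs[0])
```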