
    Approaching Gaussian Relay Network Capacity in the High SNR Regime: End-to-End Lattice Codes

    We present a natural and low-complexity technique for achieving the capacity of the Gaussian relay network in the high SNR regime. Specifically, we propose the use of end-to-end structured lattice codes with the amplify-and-forward strategy, where the source uses a nested lattice code to encode the messages and the destination decodes the messages by lattice decoding. All intermediate relays simply amplify and forward the received signals over the network to the destination. We show that the end-to-end lattice-coded amplify-and-forward scheme approaches the capacity of the layered Gaussian relay network in the high SNR regime. Next, we extend our scheme to non-layered Gaussian relay networks under amplify-and-forward, where the end-to-end channel can be viewed as a Gaussian intersymbol interference (ISI) channel. Compared with other schemes, our approach is significantly simpler and requires only the end-to-end design of the lattice precoding and decoding; it does not require any knowledge of the network topology or of the individual channel gains.
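    As a rough illustration of the nested-lattice idea described above, the following is a minimal one-dimensional sketch (fine lattice Z, coarse lattice qZ, and the whole relay network collapsed into a single effective amplify-and-forward gain); the scalar setting and all parameters are illustrative assumptions, not the paper's construction.

```python
# Minimal 1-D sketch (not the paper's construction): fine lattice Z, coarse
# lattice q*Z, and the relay network replaced by one effective
# amplify-and-forward channel y = g*x + z with aggregated Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
q = 16          # nesting ratio: messages live in Z mod q*Z            (assumed)
n = 10000       # number of transmitted symbols
snr_db = 30.0   # nominal high-SNR operating point                     (assumed)
g = 0.8         # effective end-to-end gain, known at the destination  (assumed)

# Source: map each message to a point of the fine lattice Z (centered).
msgs = rng.integers(0, q, size=n)
x = msgs - q / 2.0

# Relays only amplify and forward, so the destination sees a Gaussian channel.
sigma = np.sqrt(np.mean(x**2) / 10**(snr_db / 10))
y = g * x + sigma * rng.standard_normal(n)

# Destination: invert the known end-to-end gain, lattice-decode to the nearest
# fine-lattice point, and reduce modulo the coarse lattice q*Z.
est = np.round(y / g + q / 2.0).astype(int) % q
print("symbol error rate:", float(np.mean(est != msgs)))
```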

    Construction of Capacity-Achieving Lattice Codes: Polar Lattices

    In this paper, we propose a new class of lattices constructed from polar codes, namely polar lattices, to achieve the capacity 1/2 log(1 + SNR) of the additive white Gaussian noise (AWGN) channel. Our construction follows the multilevel approach of Forney et al., where we construct a capacity-achieving polar code on each level. The component polar codes are shown to be naturally nested, thereby fulfilling the requirement of the multilevel lattice construction. We prove that polar lattices are AWGN-good. Furthermore, using the technique of source polarization, we propose discrete Gaussian shaping over the polar lattice to satisfy the power constraint. Both the construction and the shaping are explicit, and the overall complexity of encoding and decoding is O(N log N) for any fixed target error probability. Comment: full version of the paper to appear in IEEE Trans. Communications
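    The multilevel idea referred to above can be illustrated with a toy Construction-D style sketch: one binary code per level of the partition chain Z / 2Z / 4Z, encoded level by level and decoded by multistage (successive) decoding. The placeholder repetition codes and all parameters below are illustrative assumptions; the paper uses nested capacity-achieving polar codes at each level.

```python
# Toy Construction-D style multilevel sketch: x = c_0 + 2*c_1 + 4*z, where
# c_l is a codeword of a binary component code at level l and z is an integer
# vector.  Placeholder repetition codes stand in for the nested polar codes.
import numpy as np

n = 8           # block length (illustrative)
levels = 2      # coding levels in the partition chain Z / 2Z / 4Z

def repetition_encode(bit, n):
    """Placeholder binary component code: repeat one information bit n times."""
    return np.full(n, bit, dtype=int)

def multilevel_encode(bits, integers):
    """Lattice point = sum_l 2^l * c_l + 2^levels * z."""
    x = (2 ** levels) * np.asarray(integers)
    for l, b in enumerate(bits):
        x = x + (2 ** l) * repetition_encode(b, n)
    return x

def multilevel_decode(y):
    """Multistage decoding: estimate one level at a time and peel it off."""
    bits, residual = [], np.asarray(y, dtype=float)
    for l in range(levels):
        level_word = np.round(residual / (2 ** l)) % 2   # level-l binary image
        bit = int(np.mean(level_word) > 0.5)             # majority vote = toy decoder
        bits.append(bit)
        residual = residual - (2 ** l) * repetition_encode(bit, n)
    return bits

x = multilevel_encode(bits=[1, 0], integers=np.zeros(n, dtype=int))
y = x + 0.1 * np.random.default_rng(1).standard_normal(n)
print("decoded level bits:", multilevel_decode(y))
```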

    Sparse Regression Codes for Multi-terminal Source and Channel Coding

    We study a new class of codes for Gaussian multi-terminal source and channel coding. These codes are designed using the statistical framework of high-dimensional linear regression and are called Sparse Superposition or Sparse Regression codes. Codewords are linear combinations of subsets of columns of a design matrix. These codes were recently introduced by Barron and Joseph and shown to achieve the channel capacity of AWGN channels with computationally feasible decoding. They have also recently been shown to achieve the optimal rate-distortion function for Gaussian sources. In this paper, we demonstrate how to implement random binning and superposition coding using sparse regression codes. In particular, with minimum-distance encoding/decoding, it is shown that sparse regression codes attain the optimal information-theoretic limits for a variety of multi-terminal source and channel coding problems. Comment: 9 pages, appeared in the Proceedings of the 50th Annual Allerton Conference on Communication, Control, and Computing, 2012
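    The codeword structure described above (linear combinations of subsets of columns of a design matrix) can be sketched directly: in the standard sparse regression construction, the message selects one column per section. The section/column sizes and the equal power allocation below are illustrative assumptions.

```python
# Sketch of sparse regression (SPARC) codeword construction: the message picks
# one column per section of a random Gaussian design matrix, and the codeword
# is the (scaled) sum of the chosen columns.  All sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 64     # codeword length
L = 4      # number of sections
M = 8      # columns per section; rate R = L*log2(M)/n
P = 1.0    # average power constraint

A = rng.standard_normal((n, L * M)) / np.sqrt(n)   # design (dictionary) matrix

def sparc_encode(message):
    """message: length-L sequence with entries in {0,...,M-1}, one per section."""
    beta = np.zeros(L * M)
    for section, idx in enumerate(message):
        beta[section * M + idx] = np.sqrt(n * P / L)   # equal power allocation
    return A @ beta                                    # codeword = sum of chosen columns

x = sparc_encode([3, 0, 7, 2])
print("codeword length:", x.shape[0], " empirical power:", float(np.mean(x**2)))
```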

    On Achievable Rate Regions of the Asymmetric AWGN Two-Way Relay Channel

    This paper investigates the additive white Gaussian noise two-way relay channel, where two users exchange messages through a relay. Asymmetric channels are considered, where the users can transmit data at different rates and at different power levels. We modify and improve existing coding schemes to obtain three new achievable rate regions. Comparing four downlink-optimal coding schemes, we show that the scheme giving the best sum-rate performance is (i) complete-decode-forward, when both users transmit at low signal-to-noise ratio (SNR); (ii) functional-decode-forward with nested lattice codes, when both users transmit at high SNR; and (iii) functional-decode-forward with rate splitting and time-division multiplexing, when one user transmits at low SNR and the other at medium-to-high SNR. Comment: to be presented at ISIT 201
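    A toy scalar sketch of the functional-decode-forward step with nested lattice codes (fine lattice Z, coarse lattice qZ): the relay decodes only the modulo-q sum of the two users' lattice points, and each user removes its own message to recover the other's. Dithers, MMSE scaling, and downlink noise are omitted, and all parameters are illustrative assumptions rather than the paper's scheme.

```python
# Toy scalar functional-decode-forward sketch: users map messages to the fine
# lattice Z, the relay decodes only the mod-q sum of the two lattice points,
# and each user subtracts its own message.  Downlink assumed error-free.
import numpy as np

rng = np.random.default_rng(0)
q = 8             # nesting ratio (illustrative)
n = 100000
snr_db = 25.0     # uplink SNR (illustrative)

m1 = rng.integers(0, q, size=n)     # user 1 lattice points / messages
m2 = rng.integers(0, q, size=n)     # user 2 lattice points / messages

sigma = np.sqrt((q**2 / 12) / 10**(snr_db / 10))
y_relay = m1 + m2 + sigma * rng.standard_normal(n)   # superposition at the relay

s_hat = np.round(y_relay).astype(int) % q    # relay decodes the mod-q sum only
m2_at_user1 = (s_hat - m1) % q               # user 1 strips its own message
m1_at_user2 = (s_hat - m2) % q               # user 2 strips its own message

print("error rate at user 1:", float(np.mean(m2_at_user1 != m2)))
print("error rate at user 2:", float(np.mean(m1_at_user2 != m1)))
```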

    Integer-Forcing Source Coding

    Integer-Forcing (IF) is a new framework, based on compute-and-forward, for decoding multiple integer linear combinations from the output of a Gaussian multiple-input multiple-output channel. This work applies the IF approach to arrive at a new low-complexity scheme, IF source coding, for distributed lossy compression of correlated Gaussian sources under a minimum mean squared error distortion measure. All encoders use the same nested lattice codebook. Each encoder quantizes its observation using the fine lattice as a quantizer and reduces the result modulo the coarse lattice, which plays the role of binning. Rather than directly recovering the individual quantized signals, the decoder first recovers a full-rank set of judiciously chosen integer linear combinations of the quantized signals, and then inverts it. In general, the linear combinations have smaller average powers than the original signals. This allows the density of the coarse lattice to be increased, which in turn translates to smaller compression rates. We also propose and analyze a one-shot version of IF source coding, which is simple enough to potentially lead to a new design principle for analog-to-digital converters that can exploit spatial correlations between the sampled signals. Comment: Submitted to IEEE Transactions on Information Theory
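    The encoder operation described above (fine-lattice quantization followed by reduction modulo the coarse lattice) and the power advantage of integer combinations can be sketched in one dimension; the lattice steps and correlation value below are illustrative assumptions, and the full integer-forcing decoder is not implemented.

```python
# Scalar sketch of the IF source-coding encoder: fine-lattice quantization
# followed by reduction modulo the coarse lattice (binning).  The full
# integer-forcing decoder is not implemented; the power comparison only
# illustrates why integer combinations permit a denser coarse lattice.
import numpy as np

rng = np.random.default_rng(0)
n = 100000
rho = 0.99     # spatial correlation between the two sources (illustrative)
delta = 0.05   # fine-lattice (quantizer) step                 (illustrative)
q = 16         # nesting ratio: coarse-lattice step is q*delta (illustrative)

# Two correlated Gaussian observations.
z = rng.standard_normal((2, n))
x1 = z[0]
x2 = rho * z[0] + np.sqrt(1 - rho**2) * z[1]

def if_encode(x):
    """Quantize with the fine lattice, then reduce modulo the coarse lattice."""
    fine = delta * np.round(x / delta)   # fine-lattice quantization
    return np.mod(fine, q * delta)       # modulo-coarse reduction = binning

u1, u2 = if_encode(x1), if_encode(x2)

# The decoder targets integer combinations such as x1 - x2, whose power is far
# smaller than that of the individual signals when the sources are correlated.
print("power of x1:     ", float(np.mean(x1**2)))
print("power of x1 - x2:", float(np.mean((x1 - x2)**2)))
```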