
    Near-capacity dirty-paper code design: a source-channel coding approach

    This paper examines near-capacity dirty-paper code designs based on source-channel coding. We first point out that the performance loss in signal-to-noise ratio (SNR) in our code designs can be broken into the sum of the packing loss from channel coding and a modulo loss, which is a function of the granular loss from source coding and the target dirty-paper coding rate (or SNR). We then examine practical designs by combining trellis-coded quantization (TCQ) with both systematic and nonsystematic irregular repeat-accumulate (IRA) codes. Like previous approaches, we exploit the extrinsic information transfer (EXIT) chart technique for capacity-approaching IRA code design; but unlike previous approaches, we emphasize the role of strong source coding to achieve as much granular gain as possible using TCQ. Instead of systematic doping, we employ two relatively shifted TCQ codebooks, where the shift is optimized (via tuning the EXIT charts) to facilitate the IRA code design. Our designs synergistically combine TCQ with IRA codes so that they work together as well as they do individually. By bringing together TCQ (the best quantizer from the source coding community) and EXIT chart-based IRA code designs (the best from the channel coding community), we are able to approach the theoretical limit of dirty-paper coding. For example, at 0.25 bit per symbol (b/s), our best code design (with 2048-state TCQ) performs only 0.630 dB away from the Shannon capacity.
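    As a rough illustration of the loss decomposition described in this abstract (the Delta notation below is ours, introduced only for illustration): since Costa's dirty-paper capacity equals that of an interference-free AWGN channel, the SNR limit at a target rate R per real dimension is the smallest SNR with C_DPC = R, and the reported gap to it splits into a packing term and a modulo term,

    \[
    C_{\mathrm{DPC}} = \tfrac{1}{2}\log_2(1 + \mathrm{SNR})
    \;\;\Rightarrow\;\;
    \mathrm{SNR}_{\min}(R) = 2^{2R} - 1,
    \qquad
    \Delta_{\mathrm{SNR}}\,[\mathrm{dB}] = \Delta_{\mathrm{packing}} + \Delta_{\mathrm{modulo}}\bigl(\Delta_{\mathrm{granular}},\, R\bigr).
    \]

    At R = 0.25 b/s this limit is SNR_min = 2^{0.5} - 1 ≈ 0.414 (about -3.83 dB), which is presumably the reference against which the 0.630 dB figure is measured.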

    Nested turbo codes for the Costa problem

    Driven by applications in data hiding, MIMO broadcast channel coding, precoding for interference cancellation, and transmitter cooperation in wireless networks, Costa coding has lately become a very active research area. In this paper, we first offer code design guidelines in terms of source-channel coding for algebraic binning. We then address practical code design based on nested lattice codes and propose nested turbo codes using turbo-like trellis-coded quantization (TCQ) for source coding and turbo trellis-coded modulation (TTCM) for channel coding. Compared to TCQ, turbo-like TCQ offers structural similarity between the source and channel coding components, leading to more efficient nesting with TTCM and better source coding performance. Due to the difference in effective dimensionality between turbo-like TCQ and TTCM, there is a performance tradeoff between these two components when they are nested together, meaning that the performance of turbo-like TCQ worsens as the TTCM code becomes stronger and vice versa. Optimization of this performance tradeoff leads to our code design that outperforms existing TCQ/TCM and TCQ/TTCM constructions and exhibits a gap of 0.94, 1.42, and 2.65 dB to the Costa capacity at 2.0, 1.0, and 0.5 bits/sample, respectively.
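    The nesting idea behind such constructions can be illustrated with a one-dimensional toy version of Costa precoding (a scalar modulo-lattice scheme, not the paper's nested turbo construction; all parameters below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def mod_delta(x, delta):
    """Fold x into the fundamental cell [-delta/2, delta/2) of the coarse lattice."""
    return (x + delta / 2) % delta - delta / 2

# Illustrative parameters for a toy scalar Costa (dirty-paper) scheme.
delta = 8.0                                  # coarse lattice spacing
levels = np.array([-3.0, -1.0, 1.0, 3.0])    # 2-bit fine constellation nested inside it
noise_var = 0.1                              # channel noise power N
P = delta ** 2 / 12                          # approximate power of the folded transmit signal
alpha = P / (P + noise_var)                  # MMSE "inflation" factor

n_sym = 10_000
msg = rng.integers(0, len(levels), n_sym)
v = levels[msg]
s = 10.0 * rng.standard_normal(n_sym)        # interference known at the transmitter only

x = mod_delta(v - alpha * s, delta)          # precode: pre-subtract scaled dirt, then fold
y = x + s + np.sqrt(noise_var) * rng.standard_normal(n_sym)

r = mod_delta(alpha * y, delta)              # receiver: scale and fold; the dirt cancels
msg_hat = np.argmin(np.abs(r[:, None] - levels[None, :]), axis=1)
print("symbol error rate:", np.mean(msg_hat != msg))
```

    The MMSE factor alpha keeps the residual self-noise small; the paper's contribution is, in effect, replacing the scalar quantizer and constellation in this caricature with nested turbo-like TCQ and TTCM codes.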

    Construction and evaluation of trellis-coded quantizers for memoryless sources


    Lossy Compression via Sparse Linear Regression: Computationally Efficient Encoding and Decoding

    We propose computationally efficient encoders and decoders for lossy compression using a Sparse Regression Code. The codebook is defined by a design matrix, and codewords are structured linear combinations of columns of this matrix. The proposed encoding algorithm sequentially chooses columns of the design matrix to successively approximate the source sequence. It is shown to achieve the optimal distortion-rate function for i.i.d. Gaussian sources under the squared-error distortion criterion. For a given rate, the parameters of the design matrix can be varied to trade off distortion performance against encoding complexity. An example of such a trade-off as a function of the block length n is the following: with computational resource (space or time) per source sample of O((n/log n)^2), for a fixed distortion level above the Gaussian distortion-rate function, the probability of excess distortion decays exponentially in n. The Sparse Regression Code is robust in the following sense: for any ergodic source, the proposed encoder achieves the optimal distortion-rate function of an i.i.d. Gaussian source with the same variance. Simulations show that the encoder has good empirical performance, especially at low and moderate rates. (14 pages; to appear in IEEE Transactions on Information Theory.)
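    A minimal sketch of this successive encoding idea (the sizes, the random design matrix, and the geometric coefficient schedule are our illustrative choices, not the exact construction analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sparse regression codebook: L sections of M columns each; a codeword
# picks one column per section. Parameters are illustrative only.
n, L, M = 256, 32, 64          # block length, sections, columns per section
R = L * np.log(M) / n          # rate in nats/sample (= L*log2(M)/n bits/sample)
sigma2 = 1.0                   # source variance

A = rng.standard_normal((n, L * M)) / np.sqrt(n)   # ~unit-norm columns
kappa = 2 * R / L
# Geometrically decaying coefficients so the residual energy shrinks by roughly
# (1 - kappa) per section, targeting distortion near sigma2 * exp(-2R).
c = np.sqrt(n * sigma2 * kappa * (1 - kappa) ** np.arange(L))

def encode(x):
    """Successive encoding: per section, keep the column most aligned with the
    current residual, then subtract its fixed scaled contribution."""
    residual = x.copy()
    idx = np.empty(L, dtype=int)
    for l in range(L):
        cols = A[:, l * M:(l + 1) * M]
        idx[l] = int(np.argmax(cols.T @ residual))
        residual -= c[l] * cols[:, idx[l]]
    return idx

def decode(idx):
    return sum(c[l] * A[:, l * M + j] for l, j in enumerate(idx))

x = np.sqrt(sigma2) * rng.standard_normal(n)
D = np.mean((x - decode(encode(x))) ** 2)
print(f"rate {R / np.log(2):.2f} bits/sample, distortion {D:.3f}, "
      f"Gaussian D(R) bound {sigma2 * np.exp(-2 * R):.3f}")
```

    Each section costs log2(M) bits to name the chosen column, and the fixed coefficient schedule shrinks the residual roughly geometrically toward the target distortion; at this short block length the gap to D(R) is still visible.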

    Distributed signal processing using nested lattice codes

    Multi-terminal source coding (MTSC) addresses the problem of compressing correlated sources without communication links among them. In this thesis, the constructive approach to this problem is considered in an algebraic framework and a system design is provided that is applicable in a variety of settings. The Wyner-Ziv problem is investigated first: coding of an independent and identically distributed (i.i.d.) Gaussian source with side information available only at the decoder in the form of a noisy version of the source to be encoded. Theoretical models are first established for calculating distortion-rate functions. Several novel practical code implementations are then proposed using the strategy of multi-dimensional nested lattice/trellis coding. By investigating various lattices in the dimensions considered, analysis is given of how lattice properties affect performance. Methods for choosing good sublattices in multiple dimensions are also proposed. By introducing scaling factors, the relationship between distortion and scaling factor is examined for various rates. The best high-dimensional lattice using our scale-rotate method can achieve performance within 1 dB of the Wyner-Ziv limit at low rates, and random nested ensembles can achieve a 1.87 dB gap to the limit. Moreover, the code design is extended to incorporate distributed compressive sensing (DCS). A theoretical framework is proposed and practical designs using nested lattices/trellises are presented for various scenarios. Using nested trellis codes, simulations show a 3.42 dB gap from our derived bound for the DCS plus Wyner-Ziv framework.
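    A one-dimensional caricature of the nested lattice/trellis strategy used throughout the thesis (step size, nesting ratio, and correlation model are our illustrative choices): the encoder quantizes with a fine lattice but transmits only the coset index relative to a coarse lattice, and the decoder resolves the coset ambiguity using its side information.

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar (1-D) nested lattice Wyner-Ziv sketch; real designs use
# high-dimensional lattices or trellis codes.
q = 0.25          # fine quantizer step
N = 8             # nesting ratio -> rate = log2(N) = 3 bits/sample
Q = N * q         # coarse lattice step

n = 100_000
y = rng.standard_normal(n)                 # side information, known at the decoder
x = y + 0.1 * rng.standard_normal(n)       # source = side information + small innovation

# Encoder: quantize to the fine lattice, send only the coset index mod N.
fine = np.round(x / q).astype(int)
coset = np.mod(fine, N)                    # 3-bit syndrome per sample

# Decoder: pick the fine-lattice point in the signaled coset nearest to y.
k = np.round((y / q - coset) / N)
x_hat = (coset + N * k) * q

mse = np.mean((x - x_hat) ** 2)
print(f"rate {np.log2(N):.0f} bits/sample, MSE {mse:.5f} "
      f"(fine-quantizer MSE ~ {q**2 / 12:.5f})")
```

    Decoding fails only when the side information falls more than half a coarse cell away from the quantized source, which is why the nesting ratio must be matched to the source/side-information correlation.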

    Lossy Compression via Sparse Linear Regression: Performance under Minimum-distance Encoding

    We study a new class of codes for lossy compression with the squared-error distortion criterion, designed using the statistical framework of high-dimensional linear regression. Codewords are linear combinations of subsets of columns of a design matrix. Called a Sparse Superposition or Sparse Regression codebook, this structure is motivated by an analogous construction proposed recently by Barron and Joseph for communication over an AWGN channel. For i.i.d. Gaussian sources and minimum-distance encoding, we show that such a code can attain the Shannon rate-distortion function with the optimal error exponent, for all distortions below a specified value. It is also shown that sparse regression codes are robust in the following sense: a codebook designed to compress an i.i.d. Gaussian source of variance σ^2 with (squared-error) distortion D can compress any ergodic source of variance less than σ^2 to within distortion D. Thus the sparse regression ensemble retains many of the good covering properties of the i.i.d. random Gaussian ensemble, while having a compact representation in terms of a matrix whose size is a low-order polynomial in the block length. (This version corrects a typo in the statement of Theorem 2 of the published paper.)
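    For reference, the Shannon rate-distortion function attained here, for an i.i.d. Gaussian source of variance σ^2 under squared-error distortion, is

    \[
    R(D) \;=\; \frac{1}{2}\log_2\frac{\sigma^2}{D}, \qquad 0 < D \le \sigma^2 ,
    \]

    so the robustness statement says the same codebook covers any ergodic source of variance at most σ^2 to within the same distortion D.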

    Code design for multiple-input multiple-output broadcast channels

    Recent information-theoretic results indicate that dirty-paper coding (DPC) achieves the entire capacity region of the Gaussian multiple-input multiple-output (MIMO) broadcast channel (BC). This thesis presents practical code designs for Gaussian BCs based on DPC. To simplify our designs, we assume constraints on the individual rates for each user instead of the customary constraint on transmitter power. The objective therefore is to minimize the transmitter power such that the practical decoders of all users are able to operate at the given rate constraints. The enabling element of our code designs is a practical DPC scheme based on nested turbo codes. We start with Cover's simplest two-user Gaussian BC as a toy example and present a code design that operates 1.44 dB away from the capacity region boundary at a transmission rate of 1 bit per sample per dimension for each user. Then we consider the case of the multiple-input multiple-output BC and develop a practical limit-approaching code design under the assumption that perfect channel state information is available at the receivers as well as at the transmitter. The optimal precoding strategy in this case can be derived by invoking duality between the MIMO BC and the MIMO multiple access channel (MAC). However, this approach requires transformation of the optimal MAC covariances to their corresponding counterparts in the BC domain. To avoid these computationally complex transformations, we derive a closed-form expression for the optimal precoding matrix for the two-user case and use it to determine the optimal precoding strategy. For more than two users we propose a low-complexity suboptimal strategy, which, for three transmit antennas at the base station and three users (each with a single receive antenna), performs only 0.2 dB worse than the optimal scheme. Our results are only 1.5 dB away from the capacity limit. Moreover, simulations indicate that our practical DPC-based scheme significantly outperforms prevalent suboptimal strategies such as time-division multiplexing and zero-forcing beamforming. The drawback of DPC-based designs is the requirement of channel state information at the transmitter. However, if the channel state information can be communicated back to the transmitter effectively, DPC does indeed have a promising future in code designs for MIMO BCs.
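    As a point of reference for the comparison above, here is a minimal sketch of zero-forcing beamforming, one of the suboptimal baselines mentioned (dimensions and the channel model are illustrative; the DPC designs themselves are far more involved):

```python
import numpy as np

rng = np.random.default_rng(3)

# Zero-forcing beamforming for a MIMO broadcast channel with Nt transmit
# antennas and K single-antenna users; 3x3 is an illustrative choice.
Nt, K = 3, 3
H = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)

# Pseudo-inverse precoder: user k's beam lies in the null space of all other
# users' channels, so inter-user interference is cancelled at a power cost.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W /= np.linalg.norm(W, axis=0, keepdims=True)        # unit-power beams

effective = H @ W                                    # K x K channel after precoding
leakage = np.max(np.abs(effective - np.diag(np.diag(effective))))
print("off-diagonal leakage:", leakage)
print("per-user effective gains:", np.abs(np.diag(effective)).round(3))
```

    Zero forcing nulls inter-user interference by inverting the channel but pays a power penalty when the channel is ill-conditioned, which is one reason DPC-based precoding can perform substantially better.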