
    Integer-Forcing Source Coding

    Integer-Forcing (IF) is a new framework, based on compute-and-forward, for decoding multiple integer linear combinations from the output of a Gaussian multiple-input multiple-output channel. This work applies the IF approach to arrive at a new low-complexity scheme, IF source coding, for distributed lossy compression of correlated Gaussian sources under a minimum mean squared error distortion measure. All encoders use the same nested lattice codebook. Each encoder quantizes its observation using the fine lattice as a quantizer and reduces the result modulo the coarse lattice, which plays the role of binning. Rather than directly recovering the individual quantized signals, the decoder first recovers a full-rank set of judiciously chosen integer linear combinations of the quantized signals, and then inverts it. In general, the linear combinations have smaller average powers than the original signals. This makes it possible to increase the density of the coarse lattice, which in turn translates to smaller compression rates. We also propose and analyze a one-shot version of IF source coding that is simple enough to potentially lead to a new design principle for analog-to-digital converters that can exploit spatial correlations between the sampled signals. Comment: Submitted to IEEE Transactions on Information Theory
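
    The encode/bin/invert structure described in the abstract can be illustrated with a toy scalar sketch. This is only a sketch under strong assumptions: one-dimensional lattices (fine lattice q*Z, coarse lattice Q*Z), no dithering, noiseless recovery of the integer combinations, and an integer matrix A that is simply assumed to be known and full rank; the actual scheme uses high-dimensional nested lattices.

    import numpy as np

    def encode(x, q, Q):
        """Quantize to the fine lattice q*Z, then reduce modulo the coarse lattice Q*Z (binning)."""
        fine = q * np.round(x / q)          # fine-lattice quantization
        return np.mod(fine, Q)              # coarse-lattice reduction

    def decode(bins, A, Q):
        """Recover integer combinations of the quantized signals, then invert the integer matrix A.
        In this toy noiseless case the combinations are read off the bins directly; in the real
        scheme each combination must also be lifted out of the modulo interval, which works
        because the combinations have small power."""
        combos = np.mod(A @ bins, Q)        # integer combinations, still modulo the coarse lattice
        return np.linalg.solve(A, combos)   # invert the full-rank integer matrix

    x = np.array([1.07, 1.11])              # correlated observations at two encoders (illustrative)
    A = np.array([[1, 0], [-1, 1]])         # e.g. keep x1 and the small difference x2 - x1
    bins = encode(x, q=0.01, Q=2.56)
    x_hat = decode(bins, A, Q=2.56)         # recovers the quantized observations

    Because the second combination (the difference) has much smaller power than either observation, a smaller coarse-lattice modulus Q would suffice for it, which is the mechanism behind the rate savings claimed above.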

    Compute-and-Forward: Harnessing Interference through Structured Codes

    Interference is usually viewed as an obstacle to communication in wireless networks. This paper proposes a new strategy, compute-and-forward, that exploits interference to obtain significantly higher rates between users in a network. The key idea is that relays should decode linear functions of transmitted messages according to their observed channel coefficients rather than ignoring the interference as noise. After decoding these linear equations, the relays simply send them towards the destinations, which, given enough equations, can recover their desired messages. The underlying codes are based on nested lattices whose algebraic structure ensures that integer combinations of codewords can be decoded reliably. Encoders map messages from a finite field to a lattice and decoders recover equations of lattice points which are then mapped back to equations over the finite field. This scheme is applicable even if the transmitters lack channel state information. Comment: IEEE Trans. Info Theory, to appear. 23 pages, 13 figures
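
    The end-to-end message flow (messages over a finite field, relays forwarding integer equations, destination inverting the system over the field) can be sketched as follows. All numbers here are illustrative assumptions, and the physical-layer part (nested lattice encoding, the noisy channel, and lattice decoding of the integer combination) is abstracted away.

    import numpy as np

    p = 251                                   # prime field size (assumed for illustration)
    w = np.array([17, 200])                   # two source messages in GF(p)

    # Each relay reliably decodes one integer combination of the messages mod p,
    # thanks to the linearity of the nested lattice code.
    A = np.array([[1, 1],                     # relay 1 decodes w1 + w2
                  [1, 2]])                    # relay 2 decodes w1 + 2*w2
    u = A.dot(w) % p                          # the equations forwarded to the destination

    # With enough (here: two independent) equations, the destination inverts A over GF(p).
    det = int(round(np.linalg.det(A))) % p
    adj = np.array([[A[1, 1], -A[0, 1]],      # adjugate of the 2x2 coefficient matrix
                    [-A[1, 0], A[0, 0]]])
    A_inv = (pow(det, -1, p) * adj) % p       # modular inverse exists since det != 0 mod p
    w_hat = A_inv.dot(u) % p                  # recovers the original messages [17, 200]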

    Optimal Design of Multiple Description Lattice Vector Quantizers

    In the design of multiple description lattice vector quantizers (MDLVQ), index assignment plays a critical role. In addition, one also needs to choose the Voronoi cell size of the central lattice v, the sublattice index N, and the number of side descriptions K to minimize the expected MDLVQ distortion, given the total entropy rate of all side descriptions Rt and the description loss probability p. In this paper we propose a linear-time MDLVQ index assignment algorithm for any K >= 2 balanced descriptions in any dimension, based on a new construction of the so-called K-fraction lattice. The algorithm is greedy in nature but is proven to be asymptotically (N -> infinity) optimal for any K >= 2 balanced descriptions in any dimension, given Rt and p. The result is stronger when K = 2: the optimality holds for finite N as well, under some mild conditions. For K > 2, a local adjustment algorithm is developed to augment the greedy index assignment, and is conjectured to be optimal for finite N. Our algorithmic study also leads to a better understanding of v, N, and K in optimal MDLVQ design. For K = 2 we derive, for the first time, a non-asymptotic closed-form expression for the expected distortion of optimal MDLVQ in terms of p, Rt, and N. For K > 2, we tighten the current asymptotic formula of the expected distortion, relating the optimal values of N and K to p and Rt more precisely. Comment: Submitted to IEEE Trans. on Information Theory, Sep 2006 (30 pages, 7 figures)
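
    The expected-distortion objective the design parameters v, N, and K are chosen to minimize can be sketched for the simplest case (K = 2 descriptions, scalar lattices) as below. This is a toy stand-in: the central lattice is v*Z, the sublattice is N*v*Z, and each central point is labeled by its two nearest sublattice points, which is far cruder than the paper's greedy index assignment on the K-fraction lattice; all parameter values are illustrative.

    import numpy as np

    def label(c, v, N):
        """Assign a central-lattice point c to an ordered pair of sublattice points (the two side descriptions)."""
        s = N * v * np.floor(c / (N * v) + 0.5)   # nearest sublattice point
        t = s + N * v if c >= s else s - N * v    # a neighbouring sublattice point
        return (s, t)

    def expected_distortion(v, N, p, samples=20_000, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.standard_normal(samples)           # unit-variance Gaussian source
        c = v * np.round(x / v)                    # central-lattice quantization
        pairs = np.array([label(ci, v, N) for ci in c])
        d_central = np.mean((x - c) ** 2)          # both descriptions received
        d_side = 0.5 * (np.mean((x - pairs[:, 0]) ** 2)
                        + np.mean((x - pairs[:, 1]) ** 2))  # one description received
        d_lost = np.mean(x ** 2)                   # both descriptions lost
        return ((1 - p) ** 2 * d_central
                + 2 * p * (1 - p) * d_side
                + p ** 2 * d_lost)

    print(expected_distortion(v=0.1, N=4, p=0.05))

    Sweeping v and N in such a Monte Carlo evaluation shows the trade-off the abstract refers to: a larger N improves the central reconstruction at a given total rate but degrades the side reconstructions, and the optimal balance depends on p and Rt.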

    Multiple Description Quantization via Gram-Schmidt Orthogonalization

    The multiple description (MD) problem has received considerable attention as a model of information transmission over unreliable channels. A general framework for designing efficient multiple description quantization schemes is proposed in this paper. We provide a systematic treatment of the El Gamal-Cover (EGC) achievable MD rate-distortion region, and show that any point in the EGC region can be achieved via a successive quantization scheme along with quantization splitting. For the quadratic Gaussian case, the proposed scheme has an intrinsic connection with Gram-Schmidt orthogonalization, which implies that the whole Gaussian MD rate-distortion region is achievable with a sequential dithered lattice-based quantization scheme as the dimension of the (optimal) lattice quantizers becomes large. Moreover, this scheme is shown to be universal for all i.i.d. smooth sources, with performance no worse than that for an i.i.d. Gaussian source with the same variance, and asymptotically optimal at high resolution. A class of low-complexity MD scalar quantizers in the proposed general framework is also constructed and illustrated geometrically; its performance is analyzed in the high-resolution regime and exhibits a noticeable improvement over existing MD scalar quantization schemes. Comment: 48 pages; submitted to IEEE Transactions on Information Theory
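
    The core of the successive-quantization idea connected to Gram-Schmidt orthogonalization is that each stage quantizes only the innovation of the source given what earlier stages have already produced. A minimal sketch follows, with scalar uniform quantizers standing in for the paper's high-dimensional dithered lattice quantizers and with arbitrary step sizes; it is illustrative only.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(100_000)          # i.i.d. Gaussian source

    def q(u, step):
        """Scalar uniform quantizer (a crude stand-in for a dithered lattice quantizer)."""
        return step * np.round(u / step)

    # Stage 1: a coarse description of the source.
    y1 = q(x, step=0.8)

    # Stage 2: quantize the Gram-Schmidt residual of x given y1, i.e. what remains after
    # subtracting the linear MMSE estimate of x from y1.
    a = np.dot(x, y1) / np.dot(y1, y1)        # empirical linear MMSE coefficient E[x*y1]/E[y1^2]
    y2 = q(x - a * y1, step=0.2)

    # The central decoder combines both descriptions; a side decoder that receives only y1
    # uses it directly.
    x_central = a * y1 + y2
    print(np.mean((x - y1) ** 2), np.mean((x - x_central) ** 2))

    Orthogonalizing the stages in this way is what lets the rate and distortion contributions of the descriptions be accounted for separately, which is the mechanism behind the achievability argument summarized in the abstract.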