22,204 research outputs found

    Lattice Coding for the Two-way Two-relay Channel

    Lattice coding techniques may be used to derive achievable rate regions which outperform known independent, identically distributed (i.i.d.) random codes in multi-source relay networks, and in particular in the two-way relay channel. The gains stem from the ability to decode the sum of codewords (or messages) using lattice codes at higher rates than is possible with i.i.d. random codes. Here we develop a novel lattice coding scheme for the Two-way Two-relay Channel 1 ↔ 2 ↔ 3 ↔ 4, in which Nodes 1 and 4 simultaneously communicate with each other through the two relay nodes 2 and 3, and each node communicates only with its neighboring nodes. The key technical contribution is the lattice-based achievability strategy, in which each relay removes the noise while decoding the sum of several signals in a Block Markov strategy, and then re-encodes this sum into another lattice codeword using the so-called "Re-distribution Transform". This allows nodes further down the line to again decode sums of lattice codewords. The transform is central to improving the achievable rates and ensures that the messages traveling in each of the two directions fully utilize the relay's power, even under asymmetric channel conditions. All decoders are lattice decoders, and only a single nested lattice codebook pair is needed. The symmetric rate achieved by the proposed lattice coding scheme is within 0.5 log 3 bits/s/Hz of the symmetric-rate capacity. Comment: submitted to the IEEE Transactions on Information Theory on December 3, 201
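    As a rough illustration of the nested-lattice property this strategy relies on, the following Python sketch (a hypothetical one-dimensional toy example, not the paper's construction) shows a relay removing the noise and decoding the modulo-lattice sum of two codewords rather than either message individually:

        # Toy 1-D nested lattices: fine lattice = Z, coarse lattice = q*Z, so the
        # codebook is {0, ..., q-1}.  The relay rounds the noisy superposition to the
        # nearest fine-lattice point and reduces it modulo the coarse lattice,
        # recovering the sum of the messages (mod q) without decoding either message.
        import numpy as np

        q = 8
        rng = np.random.default_rng(0)

        def lattice_decode_sum(y):
            """Nearest fine-lattice point, then reduction modulo the coarse lattice."""
            return int(round(y)) % q

        m1, m4 = rng.integers(0, q, size=2)        # messages of Nodes 1 and 4
        x1, x4 = float(m1), float(m4)              # codewords (dithers and shaping omitted)
        y_relay = x1 + x4 + rng.normal(scale=0.1)  # noisy superposition seen at a relay
        assert lattice_decode_sum(y_relay) == (m1 + m4) % q

    The paper's "Re-distribution Transform" then maps such a decoded sum back onto a lattice codeword matched to the relay's power before forwarding; that step is not reproduced in this sketch.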

    Functional-Decode-Forward for the General Discrete Memoryless Two-Way Relay Channel

    We consider the general discrete memoryless two-way relay channel, where two users exchange messages via a relay, and propose two functional-decode-forward coding strategies for this channel. Functional-decode-forward involves the relay decoding a function of the users' messages rather than the individual messages themselves. This function is then broadcast back to the users, and each user combines it with its own message to decode the other user's message. Via a numerical example, we show that functional-decode-forward with linear codes is capable of achieving strictly larger sum rates than those achievable by other strategies.
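    As a minimal sketch of the linear-code case, assume for illustration that the function decoded at the relay is the bitwise XOR of the two messages:

        # The relay decodes only f = m_A XOR m_B and broadcasts it; each user then
        # XORs f with its own message to recover the other user's message.
        m_A, m_B = 0b1011, 0b0110

        f = m_A ^ m_B        # decoded and broadcast by the relay
        m_B_hat = f ^ m_A    # user A recovers user B's message
        m_A_hat = f ^ m_B    # user B recovers user A's message

        assert (m_A_hat, m_B_hat) == (m_A, m_B)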

    Distributed Structure: Joint Expurgation for the Multiple-Access Channel

    In this work we show how an improved lower bound to the error exponent of the memoryless multiple-access (MAC) channel is attained via the use of linear codes, thus demonstrating that structure can be beneficial even in cases where there is no capacity gain. We show that if the MAC channel is modulo-additive, then any error probability, and hence any error exponent, achievable by a linear code for the corresponding single-user channel is also achievable for the MAC channel. Specifically, for an alphabet of prime cardinality, where linear codes achieve the best known exponents in the single-user setting and the optimal exponent above the critical rate, this performance carries over to the MAC setting. At least at low rates, where expurgation is needed, our approach strictly improves on previous results, in which expurgation was applied to at most one of the users. Even when the MAC channel is not additive, it may be transformed into such a channel. While the transformation is lossy, we show that the distributed structure gain in some "nearly additive" cases outweighs the loss, and thus the error exponent can improve upon the best known error exponent for these cases as well. Finally, we apply a similar approach to the Gaussian MAC channel. We obtain an improvement over the best known achievable exponent, given by Gallager, for certain rate pairs, using lattice codes that satisfy a nesting condition. Comment: Submitted to the IEEE Trans. Info. Theory
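    The additivity argument at the heart of this result can be checked with a small numerical sketch; the generator matrix below is a hypothetical toy code over GF(5), used only for illustration:

        # For a modulo-additive MAC Y = X1 + X2 + Z (mod p), if both users employ the
        # same linear code over GF(p), the modulo sum of their codewords is itself a
        # codeword, so the receiver faces an equivalent single-user decoding problem.
        import numpy as np

        p, k, n = 5, 3, 7
        rng = np.random.default_rng(1)
        G = rng.integers(0, p, size=(k, n))    # toy generator matrix

        encode = lambda msg: (msg @ G) % p

        msg1 = rng.integers(0, p, size=k)
        msg2 = rng.integers(0, p, size=k)

        # The sum of the two codewords (mod p) encodes the sum of the two messages.
        assert np.array_equal((encode(msg1) + encode(msg2)) % p, encode((msg1 + msg2) % p))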

    Quickest Sequence Phase Detection

    A phase detection sequence is a length-n cyclic sequence, such that the location of any length-k contiguous subsequence can be determined from a noisy observation of that subsequence. In this paper, we derive bounds on the minimal possible k in the limit n → ∞, and describe some sequence constructions. We further consider multiple phase detection sequences, where the location of any length-k contiguous subsequence of each sequence can be determined simultaneously from a noisy mixture of those subsequences. We study the optimal trade-offs between the lengths of the sequences, and describe some sequence constructions. We compare these phase detection problems to their natural channel coding counterparts, and show a strict separation between the fundamental limits in the multiple sequence case. Both adversarial and probabilistic noise models are addressed. Comment: To appear in the IEEE Transactions on Information Theory
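    In the noiseless limit, such a sequence is simply a cyclic sequence whose length-k windows are all distinct, so one window determines its location. The sketch below uses a binary de Bruijn sequence of order 3 purely as an illustrative stand-in:

        # Every length-k contiguous (cyclic) window of the sequence is unique, so the
        # window's content identifies its starting position (its "phase").
        def cyclic_windows(seq, k):
            n = len(seq)
            return [tuple(seq[(i + j) % n] for j in range(k)) for i in range(n)]

        seq = [0, 0, 0, 1, 0, 1, 1, 1]            # de Bruijn sequence B(2, 3), n = 8
        k = 3
        windows = cyclic_windows(seq, k)
        assert len(set(windows)) == len(seq)      # all length-k windows are distinct

        locate = {w: i for i, w in enumerate(windows)}
        print(locate[(1, 0, 1)])                  # prints 3: the phase of this window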

    Nomographic Functions: Efficient Computation in Clustered Gaussian Sensor Networks

    In this paper, a clustered wireless sensor network is considered that is modeled as a set of coupled Gaussian multiple-access channels. The objective of the network is not to reconstruct individual sensor readings at designated fusion centers but rather to reliably compute some functions thereof. Particular attention is given to real-valued functions that can be represented as a post-processed sum of pre-processed sensor readings. Such functions are called nomographic functions, and their special structure makes it possible to exploit the interference property of the Gaussian multiple-access channel to reliably compute many linear and nonlinear functions at significantly higher rates than those achievable with standard schemes that combat interference. Motivated by this observation, a computation scheme is proposed that combines a suitable data pre- and post-processing strategy with a nested lattice code designed to protect the sum of pre-processed sensor readings against the channel noise. After its computation-rate performance is analyzed, it is shown that, at the cost of a reduced rate, the scheme can be extended to compute every continuous function of the sensor readings in a finite succession of steps, where in each step a different nomographic function is computed. This demonstrates the fundamental role of nomographic representations. Comment: to appear in the IEEE Transactions on Wireless Communications
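    A small sketch of the nomographic structure f(s_1, ..., s_N) = psi(sum of phi_i(s_i)) that the scheme exploits, with the geometric mean as the example function and with the channel noise and the nested lattice code omitted:

        # Each sensor pre-processes its reading with phi, the Gaussian MAC adds the
        # transmitted values "for free", and the fusion center post-processes the sum
        # with psi.  Noise protection via the nested lattice code is not modeled here.
        import math

        readings = [2.0, 4.0, 8.0]

        phi = math.log                                   # per-sensor pre-processing
        psi = lambda s: math.exp(s / len(readings))      # post-processing at the fusion center

        mac_output = sum(phi(s) for s in readings)       # superposition done by the channel
        print(psi(mac_output))                           # ≈ 4.0, the geometric mean of the readings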

    Integer-Forcing Source Coding

    Integer-Forcing (IF) is a new framework, based on compute-and-forward, for decoding multiple integer linear combinations from the output of a Gaussian multiple-input multiple-output channel. This work applies the IF approach to arrive at a new low-complexity scheme, IF source coding, for distributed lossy compression of correlated Gaussian sources under a minimum mean squared error distortion measure. All encoders use the same nested lattice codebook. Each encoder quantizes its observation using the fine lattice as a quantizer and reduces the result modulo the coarse lattice, which plays the role of binning. Rather than directly recovering the individual quantized signals, the decoder first recovers a full-rank set of judiciously chosen integer linear combinations of the quantized signals, and then inverts it. In general, the linear combinations have smaller average powers than the original signals. This makes it possible to increase the density of the coarse lattice, which in turn translates into smaller compression rates. We also propose and analyze a one-shot version of IF source coding that is simple enough to potentially lead to a new design principle for analog-to-digital converters that can exploit spatial correlations between the sampled signals. Comment: Submitted to the IEEE Transactions on Information Theory
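    The encoder operation and the benefit of low-power integer combinations can be seen in a one-dimensional toy sketch (dithers, MMSE scaling and the full-rank inversion step of the actual scheme are omitted, and the numbers are hypothetical):

        # Each encoder quantizes with the fine lattice (the integers) and reduces the
        # result modulo the coarse lattice q*Z, i.e. binning.  Because the sources are
        # strongly correlated, the integer combination v1 - v2 has small magnitude and
        # survives the modulo operation, while v1 and v2 individually do not.
        q = 8                                         # coarse lattice q*Z; small q = low rate
        cmod = lambda x: ((x + q // 2) % q) - q // 2  # centered modulo-q reduction

        s1, s2 = 123.4, 121.9                         # correlated source samples
        v1, v2 = round(s1), round(s2)                 # fine-lattice quantization
        w1, w2 = cmod(v1), cmod(v2)                   # what the two encoders transmit

        assert cmod(w1 - w2) == v1 - v2               # decoder unwraps the integer combination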